Effective Kinetic Theory for High Temperature Gauge Theories

Quasiparticle dynamics in relativistic plasmas associated with hot, weakly-coupled gauge theories (such as QCD at asymptotically high temperature $T$) can be described by an effective kinetic theory, valid on sufficiently large time and distance scales. The appropriate Boltzmann equations depend on effective scattering rates for various types of collisions that can occur in the plasma. The resulting effective kinetic theory may be used to evaluate observables which are dominantly sensitive to the dynamics of typical ultrarelativistic excitations. This includes transport coefficients (viscosities and diffusion constants) and energy loss rates. We show how to formulate effective Boltzmann equations which will be adequate to compute such observables to leading order in the running coupling $g(T)$ of high-temperature gauge theories [and all orders in $1/\log g(T)^{-1}$]. As previously proposed in the literature, a leading-order treatment requires including both $2 \leftrightarrow 2$ particle scattering processes as well as effective ``$1 \leftrightarrow 2$'' collinear splitting processes in the Boltzmann equations. The latter account for nearly collinear bremsstrahlung and pair production/annihilation processes which take place in the presence of fluctuations in the background gauge field. Our effective kinetic theory is applicable not only to near-equilibrium systems (relevant for the calculation of transport coefficients), but also to highly non-equilibrium situations, provided some simple conditions on distribution functions are satisfied.

I. INTRODUCTION

In a hot, weakly coupled gauge theory, such as QCD at asymptotically high temperature where the running coupling g(T) is small, one might hope to achieve solid theoretical control over the dynamics of the theory. To date, however, very little has been derived about the dynamics of such theories at even leading order in the coupling, that is, neglecting all relative corrections to the leading weak-coupling behavior which are suppressed by powers of g. For example, hydrodynamic transport properties such as shear viscosity, electrical conductivity, and flavor diffusion are not known at leading order; they have only been calculated in a "leading-log" approximation, which has relative errors of order 1/log(g⁻¹).¹ To study transport or equilibration processes in a hot plasma quantitatively, the most efficient approach is first to construct an effective kinetic theory which reproduces, to the required level of precision, the relevant dynamics of the underlying quantum field theory, and then apply this kinetic theory to the processes of interest. Specifically, we would like to formulate an appropriate set of Boltzmann equations which will, on sufficiently long time and distance scales, correctly describe the dynamics of typical ultrarelativistic excitations (i.e., quarks and gluons) with sufficient accuracy to permit a correct leading-order evaluation of observables such as transport coefficients. Schematically, these Boltzmann equations will have the usual form

  (∂_t + v · ∇_x) f(x, p, t) = −C[f] ,

where f = f(x, p, t) represents the phase space density of (quasi-)particles at time t, v is the velocity associated with an excitation of momentum p, and C[f] is a spatially-local collision term that represents the rate at which particles get scattered out of the momentum state p minus the rate at which they get scattered into this state.
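To orient the reader on what "leading order" versus "leading-log" means here, a schematic parametrization of the shear viscosity may help (this is our illustration, not a result quoted from the references; the pure number κ_η is a placeholder):

$$
\eta(T) \;\sim\; \frac{\kappa_\eta\, T^3}{g^4 \,\ln(1/g)}
\left[\, 1 + O\!\big(1/\ln(1/g)\big) \,\right].
$$

A leading-log calculation determines κ_η but leaves the O(1/ln(1/g)) relative corrections uncontrolled; a complete leading-order treatment must capture those corrections to all orders in 1/ln(1/g).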
The challenge is to understand exactly what processes need to be included in the collision operator C[f], and how to package them, so that the Boltzmann equation correctly reproduces the desired physics at the required level of accuracy. To compute transport coefficients or asymptotic equilibration rates, one does not actually need a general non-equilibrium (and non-linear) kinetic theory; it is sufficient merely to have Boltzmann equations linearized in small deviations away from an equilibrium state of given temperature T.² But more generally one would like to formulate a fully non-equilibrium kinetic theory which would also be applicable to systems (such as intermediate stages of a heavy ion collision [3]) in which deviations from equilibrium are substantial and quantities such as temperature are not unambiguously defined. This will be our goal. It should be emphasized that all of our analysis assumes the theory is weakly coupled on the scales of interest. Hence, the domain of validity includes QCD at asymptotically high temperatures, or intermediate stages of collisions between arbitrarily large nuclei at asymptotically high energies [3]. It is an open question to what extent this weak coupling analysis is applicable to real heavy ion collisions at accessible energies. However, we believe that understanding the dynamics in weakly coupled asymptotic regimes is a necessary and useful prerequisite to understanding dynamics in more strongly coupled regimes.

¹ Ref. [1] attempted to calculate the shear viscosity to next-to-leading logarithmic order [including relative corrections of order 1/log(g⁻¹) but neglecting 1/log²(g⁻¹) effects] but, among other things, missed the "1 ↔ 2" collinear processes described in this paper, as well as pair annihilation/creation processes which contribute even at leading-log order. The latter have been previously discussed in Ref. [2].
² See, for example, the discussion in Ref. [2].

The domain of applicability of any kinetic theory depends on the time scales of the underlying scattering processes which are approximated as instantaneous transitions in the collision term of the Boltzmann equation. In the remainder of this introduction, we review the relevant scattering processes and associated time scales, describe the assumptions underlying our effective kinetic theory for near-equilibrium systems in which the temperature is at least locally well-defined, and then discuss how the required conditions can be generalized to a much wider class of non-equilibrium systems.

A. Relevant scattering processes

Consider a QCD plasma at sufficiently high temperature T so that the effective coupling g(T) is small.³ Quarks and gluons are well-defined quasiparticles of this system. The typical momentum (or energy) of a quark or gluon is of order T; this will be referred to as "hard." The number density of either type of excitation is O(T³). Scattering processes in the plasma will cause any excitation to have a finite lifetime. Imagine focusing attention on some particular excitation with a hard O(T) momentum.
TABLE I: Parametric dependence of various length scales for a weakly-coupled ultrarelativistic equilibrium plasma. Estimates for mean free paths apply to typical (hard) particles, with θ and q denoting the deflection angle and momentum transfer, respectively. The non-perturbative magnetic physics scale of (g²T)⁻¹ for colored fluctuations applies only to non-Abelian gauge theories.

  inverse thermal mass ............................................... (gT)⁻¹
  mean free path: small-angle scattering (θ ∼ g, q ∼ gT) ............ (g²T)⁻¹
  mean free path: very-small-angle scattering (θ ∼ g², q ∼ g²T) ..... (g²T)⁻¹
  duration (formation time) of "1 ↔ 2" collinear processes .......... (g²T)⁻¹
  non-perturbative magnetic length scale for colored fluctuations ... (g²T)⁻¹
  mean free path: large-angle scattering (θ ∼ 1, q ∼ T) ............. (g⁴T)⁻¹
  mean free path: hard "1 ↔ 2" collinear processes .................. (g⁴T)⁻¹

What determines the fate of this quasiparticle? One relevant process is ordinary Coulomb scattering, depicted in Fig. 1. If the particle of interest scatters off some other excitation in the plasma with a momentum transfer q, then the direction of the particle can change by an angle θ which is O(|q|/T). The differential scattering rate behaves parametrically as

  dΓ ∼ g⁴T³ dq/q³ .

This form holds provided q ≳ O(gT). Below this scale, Debye screening (and Landau damping) in the plasma soften the small angle divergence of the bare Coulomb interaction. Consequently, the rate for a single large angle scattering with O(T) momentum transfer is O(g⁴T), while the rate for small angle scattering with O(gT) momentum transfer is O(g²T). For later use, let τ_g ∼ 1/(g²T) denote the characteristic small angle mean free time, and τ* ∼ 1/(g⁴T) the characteristic large angle mean free time,⁵ also known as the transport mean free time. Neither time has a precise, quantitative definition; these quantities will only be used in parametric estimates. The small angle mean free time τ_g is [to within a factor of O(log g⁻¹)] the same as the color coherence time of an excitation; this is the longest time scale over which it makes sense to think of an excitation as having a definite (non-Abelian) color [4–7]. These scales are summarized in Table I. It is also important to consider processes which change the type of an excitation.

⁵ Actually, because small angle scatterings are individually more probable than large angle scatterings, a particle can undergo N_g ∼ 1/g² small angle scatterings, each with θ = O(g), during a time of order N_g τ_g ∼ 1/(g⁴T), the same as the time for a single q = O(T) scattering. A succession of this many uncorrelated small angle scatterings will result in a net deflection of order N_g^{1/2} θ = O(1). Hence, a large deflection in the direction of a particle is equally likely to be the result of many small angle scatterings or a single large angle scattering. In fact, the multiplicity of possible combinations of scatterings with angles between θ ∼ g and θ ∼ 1 leads to a logarithmic enhancement in the large angle scattering rate, or a logarithmic decrease of the large angle mean free path, so that τ* ∼ [g⁴T log(1/g)]⁻¹. In this paper, we will ignore such logarithmic factors when making parametric estimates and simply write τ* ∼ 1/(g⁴T). Nevertheless, it is important to keep in mind that contributions to τ* come from the entire range of scatterings from θ ∼ g to θ ∼ 1, which corresponds to momentum transfers from q ∼ gT to q ∼ T.
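Before doing so, note how the two rates just quoted follow from the differential rate above; this is a one-line parametric estimate (ours), ignoring the logarithmic factors discussed in footnote 5:

$$
\Gamma(q \gtrsim q_{\min}) \;\sim\; g^4 T^3 \int_{q_{\min}}^{T} \frac{dq}{q^3}
\;\sim\; \frac{g^4 T^3}{q_{\min}^{2}}
\;\sim\;
\begin{cases}
g^2 T , & q_{\min} \sim gT \quad (\text{small angle}),\\[2pt]
g^4 T , & q_{\min} \sim T \quad (\text{large angle}).
\end{cases}
$$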
Consider, for example, the conversion of a quark of momentum p into a gluon of nearly the same momentum by the soft q q̄ → gg process, depicted in Fig. 2, with momentum transfer q ∼ gT. The mean free path for this process (or its time-reverse) is O[1/(g⁴T)], just like the large angle scattering time τ*. As a result, such t-channel quark exchange processes are equally important as gluon exchange for quasiparticle dynamics, and must be correctly included even in leading-log evaluations of transport coefficients [2]. Crossed s-channel versions of Figs. 1 and 2, namely quark-antiquark annihilation and creation via a single virtual gluon, and gluo-Compton scattering, also proceed at O(g⁴T) rates. Consequently, these processes must also be included in a leading-order treatment of quasiparticle dynamics.⁶ Henceforth, whenever we refer to 2 ↔ 2 particle processes, we will mean all possible crossings of Figs. 1 and 2 in which two excitations turn into two excitations.

⁵ (continued) Yet softer momentum transfers are screened in the plasma and do not affect τ* at the order of interest. For instance, the mean free path for very-small-angle scattering with θ ∼ g² (corresponding to momentum transfers of order g²T) is only τ_{g²} ∼ 1/(g²T) and is not enhanced over τ_g. Over the time τ* ∼ 1/(g⁴T), there can thus be only N_{g²} ∼ 1/g² independent very-soft scatterings, which will only contribute a net deflection of order ∆θ ∼ (N_{g²})^{1/2} g² ∼ g to the O(1) deflection caused by other processes. Consequently, the large angle scattering time τ* is insensitive to non-perturbative magnetic physics in the plasma associated with very soft momentum transfers of order g²T.
⁶ These s-channel processes do not have the logarithmic enhancement mentioned in footnote 5, and so do not contribute to transport coefficients at leading-log order, but do contribute at next-to-leading log order.

In addition to the 2 ↔ 2 particle processes of Figs. 1 and 2, hard quasiparticles in the plasma can also undergo processes in which they effectively split into two different, nearly collinear, hard particles. Such processes cannot occur (due to energy-momentum conservation) in vacuum, but they become kinematically allowed when combined with a soft exchange involving some other excitation in the plasma. A specific example is the bremsstrahlung process depicted in the upper part of Fig. 3. Here, one hard quark undergoes a soft (q ∼ gT) collision with another and splits into a hard quark plus a hard gluon, each of which carry an O(1) fraction of the hard momentum of the original quark and both of which travel in the same direction as the original quark to within an angle of O(g). The mean free path for this process, as well as the near-collinear pair production process also shown in Fig. 3, turns out to be O[1/(g⁴T)], which is once again the same order as the large angle scattering time τ*.⁷

There is an important difference between these near-collinear processes and the 2 ↔ 2 particle processes of Figs. 1 and 2. The intermediate propagator in the diagrams of Fig. 3 which connects the "splitting" vertex with the soft exchange has a small O(g²T²) virtuality. Physically, this means that the time duration of the near-collinear processes (also known as the scattering time or formation time) is of order 1/(g²T), which is the same as the mean free time τ_g for small-angle elastic collisions in the plasma.
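The connection between this O(g²T²) virtuality and the quoted duration is an uncertainty-principle estimate (ours, schematic):

$$
t_{\rm split} \;\sim\; \frac{E}{|Q^2|} \;\sim\; \frac{T}{g^2 T^2} \;\sim\; \frac{1}{g^2 T} \;\sim\; \tau_g ,
$$

where E ∼ T is the energy of the splitting excitation and |Q²| ∼ g²T² is the virtuality of the intermediate line.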
As a result, additional soft collisions are likely to occur during the splitting process and can disrupt the coherence between the nearly collinear excitations. This is known as the Landau-Pomeranchuk-Migdal (LPM) effect. Since the time between soft scatterings is comparable to the time duration of the emission process, multiple soft scatterings cannot be treated as independent classical events, but must be evaluated fully quantum mechanically. In other words, including the interference between different N + 1 → N + 2 amplitudes, such as depicted in Fig. 4, is required to evaluate correctly the rate for near-collinear splitting at leading order. We will refer to these processes collectively as "1 → 2" processes, where the 1 → 2 refers to the nearly collinear splitting particles and the quotes are a reminder that there are other hard particles participating in the process via multiple soft gluon exchanges. Of course, the inverse "2 → 1" processes, in which two nearly collinear excitations fuse into one, are also required for detailed balance. The evaluation of the rate of these "1 ↔ 2" near-collinear processes in an equilibrium ultrarelativistic plasma, complete to leading order, is discussed in Ref. [8],⁸ which derives a two-dimensional linear integral equation whose solution determines the leading order rate.

⁷ See, for example, Ref. [8] as well as the discussion of the closely related case of photon emission in Refs. [9,10]. More careful analysis shows that the rates of these near-collinear processes do not have the logarithmic enhancement of the large angle scattering rate discussed in footnote 5. Therefore these near-collinear processes do not contribute to transport coefficients at leading-log order but must be included at next-to-leading log order.

Having realized that near-collinear "1 ↔ 2" processes are just as important to the fate of a quasiparticle as 2 ↔ 2 particle processes, one might wonder if there are any further relevant processes which occur at an O(g⁴T) rate and hence compete with the processes just discussed. We will argue in section VI B that this is not the case. In summary, a typical hard excitation travels a distance of order τ* ∼ 1/(g⁴T) before it either experiences a large angle scattering, converts into another type of excitation, splits into two nearly collinear hard excitations, or merges with another nearly collinear excitation. These different "fates" are all comparably likely [up to factors of O(log g⁻¹)]. During the transport mean free time τ*, the excitation will experience many soft scatterings, each of which can completely reorient the color of the quasiparticle but only change its momentum by a small O(gT) amount.

B. Kinetic theory domain of validity – near-equilibrium systems

A near-equilibrium system is one in which the phase space distribution can be written as an equilibrium distribution n(p) plus a small perturbation δf(x, p). However, we wish to include situations where the system is close to local rather than global equilibrium, so that equilibrium parameters like temperature may vary slowly with x.
We will therefore define a near-equilibrium system to be one where distribution functions can be written in the form

  f(x, p) = n(p; T(x), u(x), µ(x)) + δf(x, p) ,

where n(p; T, u, µ) denotes an equilibrium distribution (Bose or Fermi, as appropriate) with temperature T, flow velocity u, and optionally one or more chemical potentials µ, provided: (i) the parameters T(x), u(x), and µ(x) do not vary significantly over distances (or times) of order 1/(g⁴T), the large angle mean free path,⁹ and (ii) throughout the system, δf ≪ f.¹⁰ In the case of near-equilibrium systems, we can now summarize the conditions upon which our effective kinetic theory is based.

1. As already indicated, we assume that the theory is weakly coupled on the scale of the temperature, g(T) ≪ 1. Consequently, there is a parametrically large separation between the different scales shown in Table I.

2. We assume that all zero-temperature mass scales (i.e., Λ_QCD and current quark masses) are negligible compared to the O(gT) scale of thermal masses.¹¹

3. Any kinetic theory can only be valid on time scales which are large compared to the duration of the scattering processes which are approximated as instantaneous inside the collision term of the Boltzmann equation. Since the formation time of near-collinear splitting processes is of order 1/(g²T), this means we must assume that the space-time variation of the deviation from local equilibrium δf(x, p) is small on the scale of 1/(g²T). [Note that we've already assumed a stronger condition on the spacetime variation of the local equilibrium part of distributions, n(p; T(x), u(x), µ(x)).]

4. We will assume that hard particle distribution functions δf(x, p) have smooth dependence on momentum p and do not vary significantly with changes in momentum comparable to the O(gT) size of thermal masses. [Again, this is automatic for the local equilibrium part of distributions.] This condition will allow us to simplify our treatment of distribution functions for near-collinear "1 ↔ 2" processes.

5. Finally, we assume that the observables of ultimate interest are dominantly sensitive to the dynamics of hard excitations, are smooth functions of the momenta of these excitations, do not depend on the spin of excitations, and are gauge invariant. Excluding spin-dependent observables is primarily a matter of convenience, and will allow us to use spin-averaged effective scattering rates.¹² Considering color-dependent observables would be senseless in an effective theory which is only applicable on spatial scales large compared to the 1/(g²T) color coherence length.

C. Kinetic theory domain of validity – non-equilibrium case

It is impossible for any effective kinetic theory to be valid for all possible choices of phase space distribution functions. There will always exist sufficiently pathological initial states which are outside the domain of applicability of any kinetic theory. Describing the domain of applicability therefore requires a suitable characterization of acceptable distribution functions. In a general non-equilibrium setting, we will define "reasonable" distribution functions to be those which support a separation of scales similar to the weakly-coupled equilibrium case.

¹¹ For hot electroweak theory, the required condition is that the Higgs condensate be small compared to the temperature, v(T) ≪ T, so that the condensate-induced mass of the W-boson is negligible compared to its thermal mass.
¹² If this condition is not met, distribution functions become density matrices in spin space, and the Boltzmann equation must be replaced by a generalization known as a Waldmann-Snider equation [12]. (See also the discussion in Ref. [4] for the analogous case of color-dependent density matrices.) We assume spin-independence to avoid needlessly complicating this paper, and because we are not aware of physically interesting problems in hot gauge theories where strong spin polarization is relevant.

That is, the momenta of relevant excitations must be parametrically large compared to medium-dependent corrections to dispersion relations or to the inverse Debye screening length. Furthermore, the phase space density of excitations must not be so large as to drive the dynamics, on these scales, into the non-perturbative regime. There are potentially many more relevant scales one may need to distinguish in a non-equilibrium setting. In particular, in place of the single hard scale T that characterizes the momenta of typical excitations in equilibrium, one may need to consider at least three relevant scales, which we shall refer to as the typical momenta of (i) primaries, (ii) screeners, and (iii) scatterers.

By "primaries," we mean the particles whose evolution we are explicitly interested in following with the Boltzmann equation. These might be the particles which dominate the energy of the system (as in the intermediate stages of bottom-up thermalization [3]), they might be particles which dominate transport of some charge of the system we are following, or whatever. These excitations may or may not overlap with the next two categories. The "screeners" are those particles whose response to electric and magnetic fields dominates the screening effects and hard thermal masses in the medium. That is, they have momenta comparable to the momentum scale at which hard thermal-loop (HTL) self-energies receive their dominant contribution. (Explicit formulas for HTL self-energies will be reviewed in Section IV.) Finally, the "scatterers" are those particles which generate the soft background of gauge fields off of which the primaries scatter in soft collisions. As a pictorial example, consider Figs. 1, 3, and 4. The lines entering on the top are primaries, the lines entering on the bottom are scatterers, and the screeners are not shown explicitly but are the particles in the hard thermal loops that are implicitly summed in the soft, exchanged gluon lines.

To formulate the conditions under which our Boltzmann equations will be valid, it will be helpful to define the following two integrals involving the distribution function of a particular species (gluon, quark, or antiquark) of quasiparticles,

  J(x) ≡ 2 ∫ d³p/(2π)³ f(p, x)/|p| ,   (1.6)
  I(x) ≡ ∫ d³p/(2π)³ f(p, x) [1 ± f(p, x)] .   (1.7)

As usual, upper signs refer to bosons and lower signs to fermions. In equilibrium, the ratio I/J is precisely the temperature T. Corrections to the ultrarelativistic dispersion relation for quasiparticles will be seen to involve the quantity J(x); the square of medium-dependent effective masses will be of order g²J. So the momentum dominating this integral defines the typical momentum of screeners, which we will denote by p_screen, and

  m_eff ≡ (g² J)^{1/2}   (1.8)

is the characteristic size of effective masses. This reduces to the gT scale in the near-equilibrium case. Momenta large compared to m_eff will be referred to as "hard." The integral I(x) will appear as an effective density of scatterers which generate soft background gauge field fluctuations.
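As a quick consistency check on these definitions (our verification): both equilibrium distributions satisfy n(1 ± n) = −T dn/d|p|, so

$$
\mathcal I \;=\; \int\!\frac{d^3p}{(2\pi)^3}\, n\,(1 \pm n)
\;=\; -\,T\!\int\!\frac{d^3p}{(2\pi)^3}\, \frac{dn}{d|\mathbf p|}
\;=\; 2\,T\!\int\!\frac{d^3p}{(2\pi)^3}\, \frac{n}{|\mathbf p|}
\;=\; T\,\mathcal J ,
$$

where the third equality is an integration by parts in |p|; this confirms I/J = T in equilibrium.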
The momentum scale which dominates this integral is, by definition, the characteristic momentum of scatterers, p_scatter. The mean free time between small-angle scatterings (with momentum transfer of order m_eff) scales as¹³

  τ_soft ∼ m_eff² / (g⁴ I) .   (1.9)

This generalizes the 1/(g²T) small angle mean free time in the near-equilibrium case. Let p_primary denote the characteristic primary momenta of interest, and let p_hard denote the minimum of p_scatter, p_screen, and p_primary. As we shall review in section V B, the formation time of near-collinear bremsstrahlung or annihilation processes involving a primary excitation behaves (up to logarithms) as

  t_form ∼ min[ p_primary/m_eff² , (p_primary τ_soft)^{1/2}/m_eff ] ,   (1.10)

where

  N_form ∼ max{ 1 , [p_primary/(m_eff² τ_soft)]^{1/2} }   (1.11)

represents the typical number of soft collisions occurring during a single near-collinear "1 ↔ 2" process. (In other words, N_form is parametrically either 1 or [p_primary/(m_eff² τ_soft)]^{1/2}, depending on whether p_primary is small or large compared to m_eff² τ_soft.) Since distribution functions may vary in space or time, all these scales are really local and may vary from point to point in spacetime. However, we will assume that distribution functions are suitably slowly varying in space and time. The following discussion should be understood as applying in the vicinity of any particular point x in the system. The key assumptions we will make are:

1. The species dependence of the above scales is not parametrically large. In particular, ratios of the effective masses of different species are O(1). This assumption is a matter of convenience, but relaxing it would significantly complicate the discussion.

2. The momenta of primaries, scatterers, and screeners are all large compared to medium-dependent effective masses. A large separation of scales is essential to our analysis, and implies that all relevant excitations are highly relativistic. To make parametric estimates, we will assume that

  m_eff / p_hard ∼ O(g^α)

for some positive exponent α. [For instance, m_eff/p_hard was O(g) in our previous discussion of near-equilibrium physics.]

3. The effective mass m_eff must be large compared to the small-angle scattering rate τ_soft⁻¹, as well as to zero-temperature mass scales (Λ_QCD and quark masses).

¹³ Initial and final state factors for the scatterer appear in this expression inside the integral I, given by Eq. (1.7). Some readers may wonder whether there should be analogous initial and final state factors for the primary excitation. To be more precise, τ_soft represents the inverse of the contribution to the thermal width of quasiparticles due to soft scattering. This part of the width does not contain statistical factors for the primaries. The scale τ_soft characterizes the time scale for the decay of quasiparticle excitations. Equivalently, it is the relaxation time of fluctuations in the occupancy of a hard mode due to soft scattering.

4. All distribution functions have negligible variation over spacetime regions whose size equals the formation time t_form of near-collinear processes involving primary excitations. This assumption is essential for our treatment of near-collinear processes.

5. Distribution functions of scatterers and screeners do not vary significantly with parametrically small changes in the direction of propagation of excitations. For example, distributions cannot be so highly anisotropic that the directions of screeners or scatterers all lie within an O(g) angle of each other. This simplifies the analysis by allowing us to ignore angular dependence when making parametric estimates.
We will actually find that a stronger limit on the anisotropy of screeners is needed in order to prevent the appearance of soft gauge field instabilities [13] whose growth can lead to violations of the preceding spatial smoothness condition. Discussion of such instabilities will be postponed to section VI C.

6. Distribution functions, for hard momenta, have smooth dependence on momentum and do not vary significantly with O(m_eff) changes in momentum. [This assumption is implicit in various non-equilibrium HTL results which we will take from the literature.] For momenta of order p_primary, we further assume that distributions do not vary significantly with an O(N_form^{1/2} m_eff) change in momentum, which represents the typical total momentum transfer due to the N_form soft collisions occurring during a near-collinear "1 ↔ 2" process.

7. Distribution functions are not non-perturbatively large for momenta p ≳ O(m_eff). Specifically, for bosonic species we will assume that f(p, x) is parametrically small compared to 1/g². (Fermionic distributions can never be parametrically large.) As discussed below, this inequality is actually a consequence of condition 2.

8. Distribution functions are not spin-polarized. Once again, this condition is a matter of convenience. But the consistency of this assumption is now a non-trivial issue since we are not requiring distribution functions to be isotropic in momentum space. In a general (rotationally invariant) theory, it is quite possible for a medium with anisotropic distribution functions to generate medium-dependent self-energy corrections which lead to spin or polarization dependent dispersion relations, implying spin-dependent propagation of quasiparticles. In such a theory, initially unpolarized distribution functions would not remain unpolarized. This point will be discussed further in section IV, where we will see that in hot gauge theories (at the level of precision relevant for our leading-order kinetic theory) unpolarized but anisotropic distribution functions do not generate birefringent quasiparticle dispersion relations.

9. Distribution functions are color singlets. Attempting to incorporate colored distribution functions would be inconsistent, since the color coherence time is comparable to or shorter than the formation time of the "1 ↔ 2" processes which will be treated as instantaneous in this effective kinetic theory.

10. Observables of interest are dominantly sensitive to the dynamics of excitations with momenta of order p_primary, are smooth functions on phase space, are gauge invariant, and are spin independent.

The most important restrictions are the scale-separation condition 2 and the effective mass condition 3. They have numerous consequences, including condition 7, as explained below. Let f̄_p denote the average phase space density for excitations with momenta of order p. First note that the definition of p_screen as the momentum which dominates the integral J (1.6) implies that

  p² f̄_p ≲ J   (1.13)

for any momentum p. For momenta p which are parametrically different from p_screen, the above inequality (≲) is actually a strong inequality (≪) (or else p_screen would not be the scale which dominates J). From (1.13) and definition (1.8), one has

  p² f̄_p ≲ m_eff²/g² .   (1.14)

For hard momenta p ≳ p_hard, condition 2 thus implies that

  f̄_p ≲ m_eff²/(g² p_hard²) ≪ 1/g² ,   (1.15)

which is a strengthened version of condition 7. For soft momenta, such as p ∼ m_eff, which are parametrically smaller than p_scatter by condition 2, the inequality (1.14) becomes strong, p² f̄_p ≪ m_eff²/g².
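An equilibrium illustration of these inequalities (ours, for orientation):

$$
p \sim T:\quad \bar f_p \sim 1,\;\; p^2 \bar f_p \sim T^2 \sim \frac{m_{\rm eff}^2}{g^2}\;\;(\text{saturated});
\qquad
p \sim m_{\rm eff} \sim gT:\quad \bar f_p \sim \frac{T}{p} \sim \frac{1}{g},\;\; p^2 \bar f_p \sim g\,T^2 \ll \frac{m_{\rm eff}^2}{g^2} .
$$

Soft bosonic occupancies, though large (∼ 1/g), thus remain parametrically below the non-perturbative 1/g² level.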
We can then generally conclude that for momenta p ≳ m_eff, the phase space density is perturbative,

  f̄_p ≪ 1/g² .

In order for any Boltzmann equation to be valid, the duration of scattering events (which are treated as instantaneous in the Boltzmann equation) must be small compared to the typical time between scatterings. For 2 ↔ 2 collisions, the largest relevant scattering duration is 1/m_eff [for soft scatterings with O(m_eff) momentum transfer] and the smallest relevant mean free time is the small-angle scattering time (1.9). Hence, the condition 3 requirement that m_eff τ_soft ≫ 1 is needed for the validity of kinetic theory. If the density of scatterers is not parametrically small, so that f̄_{p_scatter}[1 ± f̄_{p_scatter}] is O(1) or larger, then this inequality automatically holds, since Eqs. (1.13) and (1.15) imply¹⁴

  m_eff τ_soft ∼ m_eff³/(g⁴ I) ≳ 1/(g f̄_{p_scatter}^{1/2}) ≫ 1 .

But the more general condition 3 determines the limit of applicability of kinetic theory for dilute systems. In order to regard excitations as having well-defined energies and momenta, their de Broglie wavelengths must also be small compared to the mean time between scatterings. The longest relevant de Broglie wavelength is 1/p_hard, so the validity of kinetic theory requires that p_hard τ_soft ≫ 1. But this automatically follows from conditions 2 and 3. Condition 6, requiring smoothness of distribution functions in momentum space, prevents applications of our effective theory to cold degenerate quark matter. This condition implies that the temperature (for near-equilibrium systems) must be large compared to g p_Fermi and hence, for weak coupling, lies far outside the temperature region in which color superconductivity occurs.

The remainder of this paper is organized as follows. Section II presents the structure of the effective kinetic theory. The effective scattering amplitudes characterizing 2 ↔ 2 processes are discussed in section III. These quasiparticle scattering amplitudes depend on medium-dependent self-energies, which are the subject of section IV. The appropriate formulation of effective transition rates for near-collinear "1 ↔ 2" processes is described in section V. Section VI discusses the validity of our effective kinetic theory at greater length, including possible double counting problems in our effective collision terms, and potential contributions of omitted scattering processes. We argue that neither of these concerns is an issue. This section also examines the possible appearance of instabilities in soft (p ∼ m_eff) modes of the gauge field, and briefly mentions open problems associated with extending our effective kinetic theory beyond leading order. Two short appendices follow. One summarizes simplifications to the formulas in the main text that can be made in the case of isotropic distribution functions, and the other discusses the connection between the formulas for 1 ↔ 2 scattering presented in section V and the results for the total gluon emission rate discussed in Ref. [8].

II. THE EFFECTIVE KINETIC THEORY

Our effective kinetic theory will include all 2 ↔ 2 processes as well as effective collinear "1 ↔ 2" processes. The Boltzmann equations are

  (∂_t + p̂ · ∇_x) f_s(x, p, t) = −C_s^{2↔2}[f] − C_s^{"1↔2"}[f] ,   (2.1)

where the label s denotes the species of excitation (gluon, up-quark, up-antiquark, down-quark, down-antiquark, etc.). Since we have assumed that distributions are not spin or color polarized, we do not decorate distribution functions with any spin or color label. However, it should be understood that f_s(x, p, t) represents the phase space density of a single helicity and color state of type s quasiparticles.¹⁵
Schematically, the overall structure of our Boltzmann equations is similar to that outlined by Baier, Mueller, Schiff, and Son [3] in their treatment of the late stages of their "bottom-up" picture of thermalization in heavy ion collisions, but our formulation of the details of the collision terms will be guided by our goal of providing a treatment which is complete at leading order. As will be discussed below, this requires a consistent treatment of both screening and LPM suppression of near-collinear processes.

The elastic 2 ↔ 2 collision term for a given species a has a conventional form,

  C_a^{2↔2}[f](p) = [1/(4|p| ν_a)] Σ_{bcd} ∫_{k,p′,k′} |M^{ab}_{cd}(p, k; p′, k′)|² (2π)⁴ δ⁽⁴⁾(P + K − P′ − K′)
    × { f_a(p) f_b(k) [1 ± f_c(p′)] [1 ± f_d(k′)] − f_c(p′) f_d(k′) [1 ± f_a(p)] [1 ± f_b(k)] } .   (2.2)

We have introduced ν_s as the number of spin times color states for species s. (So ν_s equals 6 for each quark or antiquark, and 16 for gluons.) Capital letters denote 4-vectors. The on-shell 4-momenta appearing inside the delta-function are to be understood as null vectors,

  P ≡ (|p|, p) , etc.   (2.3)

We are using ∫_p to denote Lorentz invariant momentum integration,

  ∫_p ≡ ∫ d³p / [(2π)³ 2|p|] .   (2.4)

The first term in curly braces in (2.2) is a loss term, and the second is a gain term. M^{ab}_{cd} denotes an effective scattering amplitude for the process ab ↔ cd, defined with a relativistic normalization for single particle states; its square, |M^{ab}_{cd}|², should be understood as summed, not averaged, over the spins and colors of all four excitations (hence the prefactor of 1/ν_a). The initial factor of 1/(4|p|) is a combination of a final (or initial) state symmetry factor¹⁶ of ½ together with the 1/(2|p|) from the relativistic normalization of the scattering amplitude. Symmetry under time-reversal and particle interchange implies that

  |M^{ab}_{cd}(p, k; p′, k′)|² = |M^{cd}_{ab}(p′, k′; p, k)|² = |M^{ba}_{dc}(k, p; k′, p′)|² ,   (2.5)

etc. The effective scattering amplitude will itself be a functional of the distribution functions, since the density of other particles in the plasma determines screening lengths which affect the amplitude for soft scattering. This will be discussed explicitly in the next section. As Eq. (2.3) makes explicit, we have neglected medium-dependent corrections to quasiparticle dispersion relations in the overall kinematics of the collision terms, and in the particle velocity appearing in the convective derivative on the left side of the Boltzmann equation (2.1). Given our assumed separation of scales, these are relative O(g^{2α}) perturbations to the energy or velocity of a hard quasiparticle. Because we have assumed that distribution functions and observables are smooth functions on phase space, including (or excluding) these medium-dependent dispersion relation corrections will only affect subleading corrections to observables of interest. (This is discussed further in sections III and VI.)

¹⁶ When the species c and d are identical, a symmetry factor is required. When they are distinct, the final state is double-counted in the sum over species c and d. Hence a factor of ½ is needed in either case.

Now consider "1 ↔ 2" processes. If isolated 1 ↔ 2 processes were kinematically allowed by the effective thermal masses of the particles involved, and if there were no need to consider 1 + N ↔ 2 + N processes, then the appropriate 1 ↔ 2 collision term would have a form completely analogous to the 2 ↔ 2 collision term:

  C_a^{1↔2}[f](p) = [1/(4|p| ν_a)] Σ_{bc} ∫_{p′,k} |M^{a}_{bc}(p; p′, k)|² (2π)⁴ δ⁽⁴⁾(P − P′ − K)
    × { f_a(p) [1 ± f_b(p′)] [1 ± f_c(k)] − f_b(p′) f_c(k) [1 ± f_a(p)] } ,   (2.6)

plus corresponding terms for the crossed processes in which the species-a excitation appears as one of the two daughters. With strictly massless kinematics, it is impossible to satisfy both energy and momentum conservation in a 1 ↔ 2 particle process unless all particles are exactly collinear. For small masses (compared to the energies of the primaries), the particles will be very nearly collinear.
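To see the near-collinearity quantitatively, one can expand energies to first order in effective masses and transverse momenta; the following is a schematic version (ours) of the energy denominator that reappears in section V:

$$
E(\mathbf p) \simeq |\mathbf p| + \frac{p_\perp^2 + m_{\rm eff}^2}{2 |\mathbf p|}
\qquad\Longrightarrow\qquad
\delta E \equiv E_b(\mathbf p') + E_c(\mathbf k) - E_a(\mathbf p) \;\sim\; \frac{m_{\rm eff}^2}{p_{\rm primary}} ,
$$

so exact energy conservation forces the daughters' relative transverse momenta to be O(m_eff), i.e., opening angles of O(m_eff/p_primary).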
One could then integrate over the small transverse momenta associated with the splitting to recast the collision term in the form

  C_a^{"1↔2"}[f](p) = (1/ν_a) Σ_{b,c} ∫₀^∞ dp′ dk δ(|p| − p′ − k) γ^a_{bc}(p; p′p̂, kp̂)
    × { f_a(p) [1 ± f_b(p′p̂)] [1 ± f_c(kp̂)] − f_b(p′p̂) f_c(kp̂) [1 ± f_a(p)] } ,   (2.7)

plus corresponding terms for the crossed processes in which the species-a excitation is one of the two nearly collinear particles, where we have ignored the small deviations from exact collinearity when evaluating the distribution functions. The factor of γ^a_{bc}(p; p′p̂, kp̂) in the integrand is simply a way of parameterizing the differential rate dΓ/(dp dp′ dk dΩ_p̂) for an a → bc splitting process, integrated over transverse momenta, excluding distribution functions and the longitudinal-momentum conserving δ-function. Any nearly-collinear "1 ↔ 2" process, now including 1 + N ↔ 2 + N soft scattering with emission events, can be cast into the general form of (2.7). The only difference is that the differential splitting/joining rates γ^a_{bc} will now implicitly depend on the distribution functions for the N scatterers. All of the phase space integrations for those scatterers, plus the summation over N, will be packaged into γ^a_{bc}. The appropriate values of the splitting rates γ^a_{bc} will be determined simply by requiring that the collision term (2.7) reproduce previous results in the literature for the rates of "1 ↔ 2" processes.

In the collision term (2.7), we have written the momenta of the splitting (or joining) particles as though they were exactly collinear and as though their energy was exactly conserved. These particles actually receive O(m_eff) kicks to their momentum and energy due to the soft interactions with the other N particles participating in the near-collinear 1 + N ↔ 2 + N process. As mentioned earlier, this leads to a separation in the directions of the splitting particles by angles of order m_eff/p_primary. Treating this process as a strictly collinear 1 ↔ 2 body process, and neglecting the soft O(m_eff) momentum transfers to other particles in the system, is an acceptable approximation because of our assumption that distribution functions (and observables) are smooth functions of momenta. If this assumption were not satisfied, then we could not factorize the collision term into a product of distribution functions and an effective transition rate, as done above. Given our assumed separation of scales, the relative error introduced by this idealization is at most O(g^α), and hence irrelevant in a leading-order treatment. The differential rates γ^a_{bc} are to be understood as summed over spins and colors of all three participants. Their explicit form will be discussed in section V.

FIG. 5: Lowest-order diagrams for all 2 ↔ 2 particle scattering processes in a gauge theory with fermions. Solid lines denote fermions and wiggly lines are gauge bosons. Time may be regarded as running horizontally, either way, so a diagram such as (D) represents both q q̄ → gg and gg → q q̄.

III. 2 ↔ 2 PARTICLE MATRIX ELEMENTS

Tree-level diagrams for all 2 ↔ 2 particle processes in a QCD-like theory are shown in Figure 5. Evaluating these diagrams in vacuum (i.e., neglecting medium-dependent self-energy corrections), squaring the resulting amplitudes, and summing over spins and colors yields the matrix elements shown in Table II.¹⁷ Of course, vacuum matrix elements do not correctly describe the scattering of quasiparticles propagating through a medium. In principle, one should recalculate the diagrams of Fig. 5 including appropriate medium-dependent self-energy and vertex corrections. But with generic hard momenta, for which all Mandelstam variables are of comparable size, these corrections are O(g^{2α}) effects and hence ignorable in a leading-order treatment.¹⁸
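The power counting behind this statement is simple. Schematically (our estimate), a medium self-energy insertion on a line with virtuality t modifies the propagator by a relative amount

$$
\frac{1}{t} \;\longrightarrow\; \frac{1}{t + \Pi} \;=\; \frac{1}{t}\left[ 1 + O\!\left(\frac{\Pi}{t}\right) \right],
\qquad
\frac{\Pi}{t} \,\sim\, \frac{m_{\rm eff}^2}{p_{\rm hard}^2} \,\sim\, g^{2\alpha}
\quad\text{for } |t| \sim p_{\rm hard}^2 .
$$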
Medium-dependent effects can give O(1) corrections to matrix elements if any Mandelstam variable is O(m²_eff). Such momentum regions are phase space suppressed by at least g^{2α} relative to generic hard momenta. Consequently, momentum regions where s ∼ −t ∼ −u ∼ m²_eff (corresponding to either soft or collinear incoming particles) give parametrically suppressed contributions, and medium-dependent corrections can be ignored in these regions. This implies that medium-dependent corrections do not have to be included in terms with denominators of s².¹⁹ In Table II, terms with singly-underlined denominators indicate infrared-sensitive contributions arising from soft gluon exchange, while terms with double-underlined denominators indicate IR sensitive contributions from a soft exchanged fermion. It is these underlined terms in which medium-dependent effects must be incorporated.²⁰

²⁰ Sharp-eyed readers will notice that the s²/(tu) and u²/(st) terms appearing in the qq ↔ qq and q q̄ ↔ q q̄ matrix elements (squared) are not underlined, despite the fact that these contributions are infrared-sensitive, albeit less so than the 1/t² or 1/u² terms. The s²/(tu) and u²/(st) terms arise from interference between t channel and either u or s channel gluon exchanges. They are sufficiently infrared singular to generate logarithmically divergent rates in the individual gain and loss parts of the collision term, but this log divergence (which would be cut off by medium effects) cancels when the gain and loss rates are combined. Hence, the apparent IR sensitivity of these terms may be ignored.

For the soft gluon exchange terms one must, in effect, reevaluate the small t (or small u) region of diagrams (A), (B), or (C) with the free gluon propagator on the internal line replaced by the appropriate non-equilibrium retarded gluon propagator,

  G_Ret(Q) = [ Q² g + Π_Ret(Q) ]⁻¹ ,   (3.1)

understood as a matrix inverse on Lorentz indices. (We have chosen Feynman gauge for convenience, but this is not required.) The required retarded gluon self-energy Π^{µν}_Ret(Q) is discussed in the next section. Evaluating the propagator (3.1) requires, in general, a non-trivial matrix inversion. (The matrix inversion may be performed explicitly in the special case of isotropic distributions; see Appendix A.) Because the self-energy only matters when the exchange momentum Q is soft, one has considerable freedom in precisely how the substitution (3.1) is implemented. Different choices, all equally acceptable for a leading-order treatment, include the following:

1. Fully reevaluate the gluon exchange diagrams (A), (B), and (C), using the non-equilibrium propagator (3.1) for the internal gluon line.

2. Introduce a separation scale µ satisfying m_eff ≪ µ ≪ p_hard, and replace the free gluon propagator by the corrected propagator (3.1) only when Q² < µ².

3. Exploit the fact that soft gluon exchange between hard particles is spin-independent (to leading order) [8]. Write the IR sensitive matrix elements as the result one would have with fictitious scalar quarks, plus an IR insensitive remainder. Replace the IR sensitive part by the correct result for scalar quarks with medium corrections included.

The final choice is technically the most convenient. For the 1/t² terms, it simply amounts to writing the IR sensitive part of the matrix element (squared) in the scalar-quark form proportional to (s − u)²/t², and then replacing

  (s − u)²/t² → | (P + P′)^µ G^{Ret}_{µν}(Q) (K + K′)^ν |² .   (3.2)

To understand this, note that the square of the vacuum amplitude for t-channel gluon exchange between massless scalars is proportional to (s − u)²/t², which is precisely what the right-hand side of (3.2) reduces to when the free propagator −g_{µν}/t is used. For the 1/u² terms, apply the same procedure with P′ ↔ K′, which interchanges t and u.
For the soft fermion exchange terms, one must similarly reevaluate the small t (or u) region of diagrams (D) and (E) with the internal free fermion propagator replaced by the non-equilibrium retarded fermion propagator,

  S_Ret(Q) = [ Q̸ − Σ̸_Ret(Q) ]⁻¹ ,   (3.3)

where the required retarded fermion self-energy Σ_Ret(Q) is discussed in the next section. In the q q̄ ↔ gg matrix element, the net effect is to replace the massless fermion pole in the tu/t² term by the resummed propagator (3.3) contracted between the hard external momenta, along with the corresponding replacement with P′ ↔ K′ for the tu/u² term. For the qg ↔ qg matrix element, the analogous replacement is made in its double-underlined (soft fermion exchange) term.

IV. SOFT SCREENING AND HARD EFFECTIVE MASSES

To describe correctly the screening effects which cut off long range Coulomb interactions, we need the non-equilibrium generalization of the standard hard thermal loop (HTL) result for the retarded gauge boson self-energy Π^{µν}_Ret(Q) with soft momentum, Q = O(m_eff). The general result for the non-equilibrium case has been previously derived by Mrówczyński and Thoma [15], who obtain²¹

  Π^{µν}_Ret(Q) = (g²/d_A) Σ_s ν_s C_s ∫ d³p/(2π)³ (P^µ/|p|) [∂f_s(p)/∂P_λ] [ g_λ{}^ν − Q_λ v^ν/(v·Q + iε) ] ,   (4.1)

where ε is a positive infinitesimal.²² In this expression, the derivative ∂f(p)/∂P⁰ should be understood as zero. The sum runs over all species of excitations (i.e., g, u, ū, d, d̄, ...), d_A is the dimension of the adjoint representation, and C_s denotes the quadratic Casimir in the color representation appropriate for species s.²³ (See the caption of Table II for specializations.) The self-energy (4.1) can also be rewritten in the form²⁴

  Π^{µν}_Ret(Q) = (g²/d_A) Σ_s ν_s C_s ∫ d³p/(2π)³ [f_s(p)/|p|] [ g^{µν} − (Q^µ v^ν + v^µ Q^ν)/(v·Q + iε) + Q² v^µ v^ν/(v·Q + iε)² ] ,   (4.2)

which is manifestly symmetric in the indices µ and ν. [Here v ≡ P/|p| = (1, p̂).] Simpler expressions may be given in the special case of isotropic distributions [16], where Π_Ret(Q) has the same form as the equilibrium HTL result, but with a value of the Debye mass proportional to the integral J [Eq. (1.6)] over the distribution f(|p|). This is summarized in Appendix A. We mention in passing that the result (4.1) for Π_Ret(Q) makes the implicit assumption that distributions f(p) do not vary significantly for changes of momentum of order q [which is O(m_eff) in the case of interest]. Specifically, the derivation of (4.1) assumes that f(p+q) − f(p) can be approximated as q · ∇_p f(p).

²¹ The normalization of our distribution functions differs from Ref. [15] by a factor of 2, which appears in our expression as the spin degeneracy included in the factor ν_s. To apply (4.1) for momenta Q of order m_eff, there is an implicit requirement that the background distributions f(x, p, t) not vary significantly on time or distance scales of order 1/m_eff. This is implied by our basic assumption in this paper that distributions are smooth on the still longer scale of t_form.
²² We use a (−+++) metric.
²³ For Abelian theories, replace g²C_s by the charge (squared) of the corresponding particle.
²⁴ To obtain this, replace d³p/2|p| in expression (4.1) by d⁴P δ(P²) Θ(P⁰). Then integrate by parts, noting that the contribution where the derivative hits the delta function generates a factor of P_λ, which vanishes when combined with the bracketed expression in (4.1). Then perform the P⁰ integral to get back to d³p.

We also need the corresponding self-energy Σ_Ret(Q) for fermions with soft momentum Q, because this cuts off the small momentum-transfer behavior of fermion exchange diagrams like Fig. 2. This was also derived by Mrówczyński and Thoma [15], but their result has a factor of 4 error. The corrected expression for a fermion of flavor s is

  Σ_Ret(Q) = (g² C_F/2) ∫ d³p/(2π)³ { [2f_g(p) + f_s(p) + f_s̄(p)] / |p| } [ v̸ / (v·Q + iε) ] ,   (4.3)

where f_g, f_s, and f_s̄ are the gluon, fermion, and anti-fermion distributions respectively (as always, per helicity and color state).
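As a check of the normalization of (4.3) (our verification, using standard equilibrium integrals): with equilibrium distributions the bracketed combination gives

$$
\int\!\frac{d^3p}{(2\pi)^3}\,\frac{2 n_B(|\mathbf p|) + 2 n_F(|\mathbf p|)}{|\mathbf p|}
= \frac{1}{2\pi^2}\left( \frac{2\pi^2 T^2}{6} + \frac{2\pi^2 T^2}{12} \right) = \frac{T^2}{4} ,
\qquad\Longrightarrow\qquad
\Sigma_{\rm Ret}(Q) \,\to\, \frac{g^2 C_F T^2}{8} \int\!\frac{d\Omega_v}{4\pi}\, \frac{\slashed v}{v\cdot Q + i\varepsilon} ,
$$

which is the standard equilibrium HTL fermion self-energy, with soft fermion thermal mass m_q² = g²C_F T²/8.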
The simple gamma-matrix structure Σ̸ is a consequence of the chiral symmetry which results from the neglect of zero temperature fermion masses (relative to their medium-dependent effective mass). In addition to these soft self-energies, we will also need medium-dependent corrections to dispersion relations for hard particles that are nearly on-shell. These will enter in the formulas that determine the near-collinear "1 ↔ 2" rates. As noted in the introduction, the resulting dispersion relation corrections for hard excitations, at leading order, turn out to take the simple form

  Q² + m²_eff = 0 ,   (4.4)

with a medium-dependent mass m_eff. An efficient way to derive the values of these masses is by using the previous results for soft self-energies, which are valid for Q ≪ p_screen, in the intermediate regime m_eff ≪ Q ≪ p_screen, where both the HTL approximation and the hard dispersion relation (4.4) are valid.²⁵

Consider hard gauge bosons first. Only transverse polarizations are relevant because the longitudinal polarization decouples for hard momenta. We can use the massless dispersion relation Q² = 0 when evaluating the self-energy (4.2) for the purpose of obtaining the first correction to the dispersion relation. Working in light-cone coordinates defined by the direction of q, the second term in brackets in (4.2) is then proportional to

  Q^µ v^ν + v^µ Q^ν ,   (4.5)

and Q has only a single non-vanishing light-cone component when Q² = 0. For transverse choices of µ and ν, this term vanishes, as does the third term in brackets (which is explicitly proportional to Q²). So the effective mass m_eff,g of hard (transverse) gauge bosons comes only from the first term of (4.2), giving

  m²_eff,g = (g²/d_A) Σ_s ν_s C_s ∫ d³p/(2π)³ f_s(p)/|p| .   (4.6)

²⁵ Some readers may wonder whether there can be O(Q/p_screen) effects in the self-energy that were dropped in the HTL approximation but which affect the hard dispersion relation for Q ≳ p_screen, and which would cause m_eff to be a non-trivial function of Q/p_screen instead of a constant. One can check by explicit diagrammatic analysis of self-energies that, for Q² = 0, this does not happen for fermions, scalars, or transverse gauge bosons.

Now consider hard fermions. The effective Dirac equation is

  (Q̸ − Σ̸_Ret) ψ = 0 .   (4.7)

Multiplying on the left by another factor of (Q̸ − Σ̸_Ret) gives the condition

  (Q − Σ_Ret)² = 0 ,   (4.8)

so that, to leading non-trivial order, the on-shell dispersion relation for hard fermions is

  Q² = 2 Q · Σ_Ret + O(Σ²_Ret) .   (4.9)

From the result (4.3) for the self-energy, we immediately find that the effective mass for a hard fermion of flavor s is given by

  m²_eff,s = g² C_F ∫ d³p/(2π)³ [2f_g(p) + f_s(p) + f_s̄(p)] / |p| .   (4.10)

Note that the hard effective masses (4.6) and (4.10) are independent of the direction of the momentum q of the excitation, as well as its spin, even in the presence of general anisotropic distributions f(p). This is in contrast to soft screening, since the self-energy Π^{µν}_Ret(Q), given by (4.1), generically depends on the direction of q if distributions are not isotropic. The spin independence and isotropy of these (leading order) effective masses holds provided only that distribution functions are not themselves polarized. [This assumption underlies the HTL results (4.1) and (4.3).] This is a special feature of hot gauge theories; in a generic anisotropic medium, no symmetry argument prevents splitting of dispersion relations into different branches depending on the spin of an excitation. In other words, a generic anisotropic medium is birefringent. If the self-energies (4.1) or (4.3) had led to spin-dependent dispersion relations for hard excitations, then even if distributions were not polarized at some initial time, the subsequent evolution of excitations through the birefringent medium would generate spin asymmetries.
For such a system, it would be inconsistent to formulate an effective theory with spin independent but anisotropic distribution functions. Fortunately, hot gauge theories do not behave this way.

V. EFFECTIVE RATES FOR NEAR-COLLINEAR "1 ↔ 2" PROCESSES

The appropriate splitting/joining rates γ^a_{bc} characterizing near-collinear "1 ↔ 2" processes may be extracted from Ref. [8]. Let n̂ be a unit vector in the direction of propagation of the splitting (or merging) hard particles, so that p = p n̂, p′ = p′ n̂, and k = k n̂. Then the required color and spin-summed effective matrix elements, consistently incorporating the LPM effect at leading order, may be expressed as

  γ^q_{qg}(p n̂; p′ n̂, k n̂) = γ^{q̄}_{q̄g}(p n̂; p′ n̂, k n̂) = [(p′² + p²)/(p′² p² k³)] F^{n̂}_q(p, p′, k) ,   (5.1a)

with analogous expressions (5.1b,c) for γ^g_{qq̄} and γ^g_{gg} given in Ref. [8], where

  F^{n̂}_s(p, p′, k) ≡ ∫ d²h/(2π)² 2h · Re F^{n̂}_s(h; p′, p, k)   (5.2)

and α ≡ g²/(4π). The function F^{n̂}_s(h; p′, p, k), for fixed given values of p′, p, k and n̂, depends on a two-dimensional vector h which is perpendicular to n̂. F^{n̂}_s is the solution to the linear integral equation (5.3) derived in Ref. [8]: a two-dimensional equation in h whose inhomogeneous term is linear in h and whose kernel involves a d⁴Q integral of the soft gauge field correlator discussed below, weighted by differences of F^{n̂}_s at shifted arguments; in it, δE(h; p′, p, k) represents the energy denominator ǫ_g(k) + ǫ_s(p) − ǫ_s(p′) which appears in a p′ ↔ pk splitting process. The variable h is related to the transverse momentum; see Ref. [8] for details. We will discuss momentarily the required correlator ⟨⟨A^µ(Q)[A^ν(Q)]*⟩⟩ of the soft gauge field.

Ref. [8], from which the above formulas for γ^a_{bc} were extracted, culminated in the derivation of the near-collinear "1 ↔ 2" contribution to the total differential gluon production rate dΓ_g/d³k for hard gluons with momentum k. In order to facilitate comparison with that reference, we show in Appendix B how to express the near-collinear contribution to dΓ_g/d³k in terms of the γ^a_{bc} differential rates.

The final element we need is the mean square fluctuation in soft momentum components of the gauge field in the medium, ⟨⟨A^µ(Q)[A^ν(Q)]*⟩⟩. Formally, this is the Fourier transform of the (non-equilibrium) HTL approximation to the Wightman gauge field correlator,²⁶ excluding the momentum and color conservation delta functions,

  ⟨A^µ_a(Q) [A^ν_b(Q′)]*⟩ ≡ δ_{ab} (2π)⁴ δ⁽⁴⁾(Q − Q′) ⟨⟨A^µ(Q)[A^ν(Q)]*⟩⟩ .   (5.5)

This correlator characterizes the stochastic background fluctuations in which collinear splitting processes take place (see Ref. [8]). We are interested in the correlator for space-like 4-momenta Q, and physically (at leading order) it represents the correlation of the screened color fields carried by on-shell hard particles streaming randomly through the plasma. Hence, it may be understood as the (absolute) square of the amplitude shown in Fig. 6, integrated over the phase space of the hard particle, where the gauge propagator is to be understood as including the medium-dependent self-energy. This leads to

  ⟨⟨A^µ(Q)[A^ν(Q)]*⟩⟩ = G^{µα}_{Ret}(Q) Π^{12}_{αβ}(Q) [G^{νβ}_{Ret}(Q)]* ,   (5.6)

where G_Ret(Q) is the retarded propagator on the right side of (3.1) and

  Π^{12}_{µν}(Q) = (g²/d_A) Σ_s ν_s C_s ∫ d³k/(2π)³ (v_k)_µ (v_k)_ν 2π δ(q⁰ + |k| − |k + q|) f_s(k) [1 ± f_s(k + q)]   (5.7)

for soft momenta Q. Here, v_k ≡ (1, k̂) is the 4-velocity of a hard particle with momentum k, and g v_k gives the current of this particle up to group factors. The f's appear as initial state distributions and final-state Bose enhancement or Fermi blocking factors. For soft momenta Q, expression (5.7) may be simplified to give

  Π^{12}_{µν}(Q) ≃ (g²/d_A) Σ_s ν_s C_s ∫ d³k/(2π)³ (v_k)_µ (v_k)_ν 2π δ(v_k · Q) f_s(k) [1 ± f_s(k)] .   (5.8)

One may alternatively derive this by summing hard thermal loops into the propagator of the Schwinger-Keldysh formalism, one of whose components gives the Wightman correlator.²⁷ This is the origin of our notation Π^{12} above, which is the one-loop Wightman current-current correlator and is an off-diagonal component of the self-energy in this formalism. In the special case of isotropic distribution functions, one can simplify substantially the expression (5.8) for Π^{12}(Q).
Moreover, one can analytically reduce the d⁴Q integral appearing in the integral equation (5.3) to a two-dimensional integral over q⊥. See Appendix A for details.

²⁶ Not to be confused with time-ordered or retarded correlators of the gauge field. Once again, our basic assumption is that distribution functions are smooth on a time and distance scale of t_form associated with the duration of a near-collinear "1 ↔ 2" process. Hence, for the purpose of evaluating the non-equilibrium correlator in (5.5) for momenta of order m_eff, distribution functions may be treated as x-independent. Spacetime variation in the non-equilibrium state will, of course, smear out the momentum conserving delta function in (5.5), but only by an amount which is irrelevant for our leading order treatment.
²⁷ For a general introduction, see chapter X of Ref. [17]. See also, for example, Eqs. (A34) and (A23) of Ref. [18].

In equilibrium, the Wightman correlator is related to the retarded correlator by the fluctuation-dissipation theorem, which gives

  Π^{12}_{µν}(Q) = 2 [n(q⁰) + 1] Im Π^{Ret}_{µν}(Q) ,   (5.9)

where n(ǫ) = [e^{βǫ} − 1]⁻¹ is the usual equilibrium Bose distribution. For soft Q, the prefactor 2[n(q⁰) + 1] can be replaced by 2T/q⁰. It is instructive to compare the non-equilibrium formula (5.8) for Π^{12}(Q) with that for Im Π_Ret(Q). The latter, for soft Q, can be extracted from the earlier expression (4.1), which gives

  Im Π^{Ret}_{µν}(Q) = (π g²/d_A) Σ_s ν_s C_s ∫ d³p/(2π)³ (v_p)_µ (v_p)_ν δ(v_p · Q) [q · ∇_p f_s(p)] .   (5.10)

Earlier, we promised an explanation of the parametric formulas (1.10) and (1.11) for the formation time t_form of a near-collinear "1 ↔ 2" process and for the number N_form of soft collisions that take place in that time. We will give here a brief, superficial review, using the notation we have adopted in this paper. For a more thorough discussion of the scales set by the LPM effect, the reader should consult the literature, such as Refs. [19–21]. We could deduce the scales by discussing the qualitative behavior of solutions of the actual equations (5.2) and (5.3) which incorporate the required physics, much as we did in the context of photo-emission in Ref. [11]. Instead, however, we will give here a more physical discussion which leads to the same results.

One way to understand the basic scales is to begin by considering classical (soft) bremsstrahlung radiation of photons (rather than gluons), with wave number k_γ, emitted by a classical charged particle moving very close to the speed of light that undergoes N random small-angle collisions. (This was the original classical picture of Landau and Pomeranchuk [22].) Let τ be the mean time between those collisions and θ₁ the typical angle of deflection from each collision. The total deflection is then θ ∼ √N θ₁, which we shall assume is small. If τ is very large, there will on average be no interference between the bremsstrahlung fields created by successive collisions (for fixed k_γ). If τ is very small, the bremsstrahlung field will not be able to resolve the individual collisions, and the field will be the same as that from a single collision by the total angle θ. Since bremsstrahlung is at most logarithmically sensitive to the scattering angle θ, this smearing of N collisions into one collision will reduce the power radiated at this wavenumber by a factor of roughly N compared to what it would be if each collision could be treated independently. That is the LPM effect. The classical bremsstrahlung field from a scattering by angle θ is dominated by radiation inside a cone of angle roughly θ.
Earlier, we promised an explanation of the parametric formulas (1.10) and (1.11) for the formation time t_form of a near-collinear "1 ↔ 2" process and for the number N_form of soft collisions that take place in that time. We will give here a brief, superficial review, using the notation we have adopted in this paper. For a more thorough discussion of the scales set by the LPM effect, the reader should consult the literature, such as Refs. [19-21]. We could deduce the scales by discussing the qualitative behavior of solutions of the actual equations (5.2) and (5.3) which incorporate the required physics, much as we did in the context of photo-emission in Ref. [11]. Instead, however, we will give here a more physical discussion which leads to the same results.

One way to understand the basic scales is to begin by considering classical (soft) bremsstrahlung radiation of photons (rather than gluons), with wave number k_γ, emitted by a classical charged particle moving very close to the speed of light that undergoes N random small-angle collisions. (This was the original classical picture of Landau and Pomeranchuk [22].) Let τ be the mean time between those collisions and θ₁ the typical angle of deflection from each collision. The total deflection is then θ ∼ √N θ₁, which we shall assume is small. If τ is very large, there will on average be no interference between the bremsstrahlung fields created by successive collisions (for fixed k_γ). If τ is very small, the bremsstrahlung field will not be able to resolve the individual collisions, and the field will be the same as that from a single collision by the total angle θ. Since bremsstrahlung is at most logarithmically sensitive to the scattering angle θ, this smearing of N collisions into one collision will reduce the power radiated at this wavenumber by a factor of roughly N compared to what it would be if each collision could be treated independently. That is the LPM effect.

The classical bremsstrahlung field from a scattering by angle θ is dominated by radiation inside a cone of angle roughly θ. The first and last scatterings, at space-time points x₁ and x_N, will interfere significantly if the phases in the factors exp(i K_γ·x₁) and exp(i K_γ·x_N) are comparable. For random collisions, that will happen if K_γ·(x_N − x₁) ≪ 1, which, for small θ, gives²⁸

    k_γ N τ θ² ≲ 1 .   (5.12)

We have taken x⁰_N − x⁰₁ ≃ |x_N − x₁| ∼ N τ since the particle moves on nearly a straight-line trajectory at the speed of light. For k_γ N τ θ² ≫ 1, in contrast, the bremsstrahlung produced by the first and last collisions will be independent. The crossover criterion k_γ N τ θ² ∼ 1, with θ² ∼ N θ₁², then determines the typical number of collisions which are effectively smeared together by the LPM effect:

    N ∼ (k_γ τ θ₁²)^{−1/2} ,   (5.13)

except that N must always be at least 1, since there must be a scattering to produce classical bremsstrahlung.

A classical treatment of radiation breaks down when k_γ is no longer small compared to the energy E of the radiating charged particle. However, parametric estimates (as opposed to precise classical formulas) are still valid where the classical treatment first begins to break down. So we can still use the estimate (5.13) for N when k_γ ∼ E, provided E − k_γ is not parametrically small compared to E. In a single soft collision of a hard particle with energy E, the typical deflection angle is

    θ₁ ∼ q⊥/E ,   (5.14)

where q⊥ is the typical transverse momentum transfer. Inserting this into (5.13) and setting k_γ ∼ E gives

    N ∼ [E/(τ q⊥²)]^{1/2} .   (5.15)

This estimate for hard bremsstrahlung applies equally well to the case of gluon emission. Setting q⊥ ∼ m_eff gives the estimate (1.11) previously quoted for N_form.

²⁸ For sharp single collisions (N = 1), there are actually two angular scales: the deflection angle θ of the charged particle and the angle θ_γ that k_γ makes with the initial or final direction of the particle (whichever is smaller). There can then be logarithms arising from considering θ_γ ≪ θ in the bremsstrahlung rate, which we ignore.

When the formation time is large compared to τ, it is simply N τ, by the definitions of N and τ. But if τ ≫ E/q⊥², so that emissions from different soft collisions do not significantly interfere, then the formation time is the time scale t₁ associated with bremsstrahlung from a single isolated collision. Classically, this is the time Δx⁰ for which K_γ·Δx ∼ 1, which gives k_γ t₁ θ₁² ∼ 1 and hence

    t₁ ∼ E/q⊥²   (5.16)

for k_γ ∼ E. This same result can be found by examining how far off-shell in energy the internal hard line is in the basic bremsstrahlung processes of Fig. 3. One can nicely combine both cases in the single formula

    t_form ∼ min { E/q⊥² , √(E τ)/q⊥ } ,   (5.17)

which, upon taking q⊥ ∼ m_eff, gives the earlier quoted result (1.10).²⁹

²⁹ The important collisions are soft collisions with momentum transfers of order m_eff. Harder collisions are rarer than soft collisions. However, as noted in footnote 5, a single collision by an angle θ ∼ √N m_eff/E is no rarer than N consecutive soft collisions, each by angle m_eff/E. For large N, the same could be said of, for example, N/10 consecutive collisions, each by angle √10 m_eff/E. This multiplicity of possibilities turns out to result in logarithmic corrections to the above analysis. Throughout this paper, we have consistently ignored logarithms in parametric estimates, and we continue to do so here. We have also ignored the effect of effective thermal masses, which cause hard particles to move slightly slower than the speed of light. For q⊥ ∼ m_eff, however, this does not affect any of our parametric estimates.
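To make these scalings concrete, the following minimal numerical sketch (ours, not from the paper) evaluates the parametric estimates (5.15) and (5.17) with all O(1) prefactors set to unity, using the near-equilibrium inputs τ ∼ 1/(g²T) for the mean free time between soft scatterings and q⊥ ∼ m_eff ∼ gT:

    import math

    T = 1.0  # temperature sets the unit of energy

    for g in (0.3, 0.1):
        for E in (T, 100 * T):        # a typical excitation, and a hard primary
            m_eff = g * T             # soft screening scale
            tau = 1.0 / (g**2 * T)    # mean free time between soft scatterings
            q_perp = m_eff            # typical transverse momentum transfer
            N_form = max(1.0, math.sqrt(E / (tau * q_perp**2)))       # Eq. (5.15)
            t_form = min(E / q_perp**2, math.sqrt(E * tau) / q_perp)  # Eq. (5.17)
            print(f"g={g:4.2f}  E/T={E/T:6.1f}  N_form={N_form:7.2f}  "
                  f"t_form in units of 1/(g^2 T): {t_form * g**2 * T:7.2f}")

For E ∼ T this yields N_form ∼ O(1) and t_form ∼ 1/(g²T), the "1 ↔ 2" duration quoted later in this section; for energetic primaries with E ≫ T, both N_form and t_form·g²T grow like √(E/T).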
We specialized to k_γ ∼ E above. In this regime, the above estimates are equally applicable to photon or gluon emission. The behavior of the formation time for k_γ ≪ E is not critical to understanding the conditions for applying our effective theory to the evolution of hard primaries: the dominant energy loss mechanism for primaries is via hard gluon emission processes with k_g ∼ E, rather than soft k_g ≪ E emissions. However, it is interesting to note that for soft emission, the case of photon emission is qualitatively different from gluon emission. Eqs. (5.13) and (5.16) show that the formation time for photons is much longer for k_γ ≪ E than it is for k_γ ∼ E. This is not true for gluon emission. Because the gluon can scatter by strong interactions, it cannot maintain its coherence over as long a time scale as a photon can. Soft scatterings involving the emitted gluon can change its direction by angles of order q⊥/k_g ∼ m_eff/k_g, which increase as k_g decreases. Consequently, gluon bremsstrahlung with k_g ≪ E is actually less coherent than gluon bremsstrahlung with k_g of order E/2. For gluon emission where the LPM effect is significant, the longest formation time occurs when the energies of the two final particles are comparable.

A. Dispersion relation corrections and double counting

In section III we asserted that medium-dependent dispersion relation corrections on external lines are, for hard particles, sub-leading corrections which may be neglected. There is, however, a potential subtlety concerning whether or not the internal line in a 2 ↔ 2 process is kinematically allowed to go on-shell. If this occurs, then the 2 ↔ 2 particle collision rate will include the contribution from an on-shell 2 → 1 process followed by a subsequent 1 → 2 process. Given the presence of explicit 1 ↔ 2 particle collision terms in our effective theory, this would be an inappropriate double-counting of the underlying scattering events.

Consider, for example, the t-channel qg → qg process illustrated in Fig. 7. To incorporate a correct treatment of small-angle scattering, as discussed in section III, one must include the HTL fermion self-energy (4.3) on the internal quark line. If medium-dependent effective masses are included on the external lines, then the internal quark line can go on-shell if m_eff,g > 2 m_eff,q. This condition is not satisfied in equilibrium QCD, but it can be satisfied with non-equilibrium distributions. It can also be satisfied, in equilibrium, in certain QCD-like theories.³⁰ If this condition is satisfied, then the on-shell pole in the fermion propagator will generate a divergence in the two-body collision term C_2↔2. This divergence arises from an integration over the time difference between the creation and destruction of the virtual intermediate fermion, and reflects the fact that a calculation which just includes the HTL self-energy (4.3) is modeling that excitation as having an infinite lifetime.³¹ This divergence is, of course, unphysical, and would effectively be replaced by the transport mean free time if further interactions with the medium were properly included.³² This situation is depicted in more detail (but still qualitatively) in Fig. 8, which shows the contribution to the loss part of the collision term C_2↔2 (for hard particles) from t-channel qg ↔ qg scattering, as a function of the invariant momentum transfer.

³⁰ For example, equilibrium SU(3) gauge theories with N_f Dirac fermions have m_eff,g = (1/2 + N_f/12)^{1/2} g T and m_eff,q = g T/√3. Hence m_eff,g > 2 m_eff,q if N_f > 10.

³¹ This is because the HTL self-energies (4.1) and (4.3) have no imaginary part for timelike 4-momenta Q.

³² One natural-sounding but inadequate solution is to include the full thermal width on the intermediate line. However, this width is logarithmically IR divergent (in perturbation theory) due to sensitivity to long-wavelength non-perturbative fluctuations in the gauge field. Moreover, merely including a width without simultaneously including additional interactions with the medium incorporates the wrong physics; the width is dominated by soft scattering, but a soft scattering event does not prevent the on-shell intermediate particle from propagating a large distance before breaking up.

FIG. 8. Contribution of the t-channel process of Fig. 7 for qg ↔ qg to the loss part of the 2 ↔ 2 collision term for a hard gluon or quark, plotted as dC^{2↔2}_loss/dτ vs. τ, where τ ≡ t/√|t|. Interferences with other channels are ignored. The labels m_q and m_g are short-hand for the hard effective quark and gluon masses m_eff,q and m_eff,g, and we have assumed m_eff,g > 2 m_eff,q. The other scales indicated in the figure (1, g⁴, T, etc.) only denote parametric orders.
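As an illustration of footnote 30, this small script (ours; the mass formulas are those quoted in the footnote) locates the threshold number of flavors at which the internal quark line of Fig. 7 can go on-shell:

    import math

    def masses(Nf, g=1.0, T=1.0):
        # Hard effective (asymptotic) masses quoted in footnote 30.
        m_g = math.sqrt(0.5 + Nf / 12.0) * g * T  # gluon
        m_q = g * T / math.sqrt(3.0)              # quark
        return m_g, m_q

    for Nf in range(8, 14):
        m_g, m_q = masses(Nf)
        print(f"N_f={Nf:2d}  m_g/(2 m_q)={m_g/(2*m_q):5.3f}  "
              f"on-shell possible: {m_g > 2*m_q}")

The ratio m_eff,g/(2 m_eff,q) reaches unity exactly at N_f = 10, so the on-shell condition first holds at N_f = 11, consistent with the statement "N_f > 10" in the footnote.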
For simplicity of presentation, Fig. 8 assumes that the momenta of scatterers, screeners and primaries are all within an O(1) factor of a common scale T, and that the phase space distributions of all excitations on these scales are O(1). In other words, the system is at most an O(1) deviation from equilibrium. We have found it convenient to use τ ≡ t/√|t| rather than t, and have sketched dC^{2↔2}_loss/dτ vs. τ. If the external particles were massless, then t (and hence τ) would always be negative. The behavior between τ ∼ −T and τ ∼ −gT is dC/dτ ∼ g⁴/τ and is responsible for a leading-log contribution to the collision term. The (exaggerated) fall-off depicted for τ ≪ −T reflects the decrease of initial-state distributions for momenta large compared to T. The fall-off just above τ ∼ −gT is due to screening (that is, due to the inclusion of the HTL self-energy for the internal quark line). For τ ≲ −gT, the contributions to C^{2↔2}_loss are dominated by hard initial particles whose trajectories intersect at an angle θ₁₂ of order 1. Each particle is deflected in the collision by an angle of order |τ|/T. The contributions in this kinematic region are not sensitive, at leading order, to whether one uses massive or massless dispersion relations for the external lines. In contrast, for τ > 0, the contributions are dominated by hard initial particles that are nearly collinear, with θ₁₂ ∼ g. The peak at τ = m_eff,q represents the nearly-collinear process of Fig. 7. [Kinematics forces this process to be nearly collinear because the O(gT) masses m_eff,q and m_eff,g are small compared to the hard O(T) momenta.]

From the plot, one can see that two regions make significant contributions to the collision term. The first region is −T ≲ τ ≲ gT, which reflects genuine 2 ↔ 2 processes and makes an O(g⁴T) contribution to C_loss. The second region is |τ − m_eff,q| ≲ g⁴T, which is double-counting and incorrectly treating the 1 ↔ 2 processes that are described by the "1 ↔ 2" collision term.
Fortunately, our treatment of "1 ↔ 2" processes correctly handles off-shellness in energy as large as the inverse formation time O(g²T) (see section I A), and so already correctly accounts for the physics in this region, where the energy is off-shell by only ≲ g⁴T. Note that τ's further out on the peak at m_eff,q in Fig. 8 (for instance, |τ − m_eff,q| ∼ g²T) give a contribution to C_loss from this particular diagram that is subleading compared to genuine 2 ↔ 2 contributions and which may therefore be ignored.

To formulate a correct collision term which only includes "genuine" 2 ↔ 2 processes, one must somehow keep the contribution from the first region and eliminate the second. Although there are many ways one could accomplish this, the simplest solution is to include the HTL self-energy on internal lines (where it is needed to describe screening) but to treat external lines as massless.

Exactly the same issue can arise with s-channel processes, as illustrated in Fig. 9 for qq̄ ↔ qq̄. A plot analogous to the one discussed above, but this time showing the contribution to dC^{2↔2}_loss/d√s vs. √s for this s-channel annihilation reaction, is shown in Fig. 10. If HTL self-energies are included in the internal gluon line, then the virtual gluon can go on-shell in this qq̄ → g → qq̄ process even when the external quarks are treated as massless, since two massless particles are kinematically allowed to combine to create one massive one. For s-channel processes, however, it is not necessary to include HTL self-energies on the internal line in the first place. Unlike the case of t- and u-channel processes, they are not required to control infrared sensitivity. So one solution which avoids all partial double-counting of 1 ↔ 2 processes, while not affecting the leading-order result for the true 2 ↔ 2 contributions, is to treat all external particles as massless, and include HTL self-energies in t-channel and u-channel internal propagators but not in s-channel ones. This is the approach presented in section III. The dot-dash line hiding under the peak in Fig. 10, smoothly decreasing at small s, illustrates the result of this prescription for s-channel processes.

There are alternative possibilities which are equally valid at leading order. For example, one could include HTL self-energies multiplied by the step function Θ(Q²) on all internal lines in 2 ↔ 2 processes, so that they only affect spacelike propagators. With external particles treated as massless, this would also avoid double-counting mistakes. But attempting to "improve" the effective theory by including both medium-dependent self-energies on internal lines and dispersion corrections on external lines is simply wrong, unless the contributions from 2 ↔ 2 processes degenerating into two independent scatterings are carefully separated and subtracted. Although this could be done consistently,³³ it is needlessly complicated compared to the simple approach of treating external lines as massless and only inserting medium-dependent self-energies where they are truly required. This prescription is summarized schematically below.

³³ Exactly the same issue of 2 ↔ 2 scattering processes degenerating into two independent 1 ↔ 2 scatterings arises anytime one attempts to formulate a kinetic theory for unstable particles. Ref. [23], for example, includes a discussion of the need to subtract the degenerating part of 2 ↔ 2 scattering rates in the context of various models of GUT-scale baryogenesis.
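Schematically (our paraphrase of this prescription, with overall signs and the full tensor and spin structure suppressed), the propagators entering the 2 ↔ 2 matrix elements are

\[
  G(Q)^{-1} \sim
  \begin{cases}
    Q^2 - \Pi_{\mathrm{Ret}}(Q) \,, & t\text{- and } u\text{-channel exchange,} \\
    Q^2 \,, & s\text{-channel exchange,}
  \end{cases}
\]

while every external line is kept on the massless shell, E_p = |p|. The HTL self-energy thus appears only where it is needed to screen soft momentum transfers.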
B. Additional scattering processes

One may wonder if any additional scattering processes need to be included in a leading-order effective theory. To examine this, first note that processes whose rates are parametrically slower than hard 2 ↔ 2 or 1 ↔ 2 processes will have negligible (i.e., subleading in g) effect on the dynamics of a quasiparticle. For example, consider adding an additional radiated gluon to a hard (i.e., large momentum transfer) 2 ↔ 2 scattering process. The case of qq → qqg is illustrated in Fig. 11. In the context of high-energy collisions in vacuum, it is well known that bremsstrahlung gluons cost a factor of g² times logarithms associated with collinear and soft infrared enhancements. Soft gluon emission is unimportant for our leading-order effective theory, which only describes the dynamics of hard quasiparticles; in any case, sensitivity to small gluon momenta will be cut off in a medium by the effective mass of the gluon. Collinear logarithms will similarly be cut off by the effective masses of the quark and gluon. However, in a medium there is also a [1 + f_g] final-state statistical factor associated with the radiated gluon. This factor can be parametrically large. But, as discussed in section I C, our assumptions limit distribution functions to be small compared to O(g^{−2}) for p ≳ m_eff. The upshot is that each radiated gluon suppresses the transition rate by a factor that is parametrically small. [If we focus on bremsstrahlung of hard gluons, then condition (1.15) implies the suppression is at least g^{2α}, possibly times logs of 1/g.] Therefore, a primary quark or gluon will experience a parametrically large number of hard scatterings without gluon bremsstrahlung before it undergoes one with bremsstrahlung.

The simple fact that a process is suppressed compared to others does not automatically make it irrelevant at leading order. For instance, 2 ↔ 2 scattering does not change the total number of hard quasiparticles. Correctly including the dominant number-changing processes, even if they are parametrically slow compared to the large-angle scattering time, is necessary for a leading-order calculation of physics that depends on equilibration in the number of quasiparticles (such as bulk viscosity).³⁴ However, bremsstrahlung from hard 2 ↔ 2 scattering is not the fastest number-changing process. The most likely scattering events (as discussed in section I A) are soft scatterings with momentum transfer of order m_eff. The relative suppression for radiated bremsstrahlung gluons is the same as above (up to logarithms), and is at least g^{2α} for hard bremsstrahlung gluons. That is, an excitation will experience a parametrically large number of soft scatterings unaccompanied by hard gluon bremsstrahlung before it undergoes a soft scattering with bremsstrahlung. And soft scattering with single bremsstrahlung is the fastest number-changing process (illustrated in Fig. 3) for hard particles. Fortunately, this process has already been included in our effective kinetic theory: it is part of the N+1 → N+2 processes which have been summed up in our effective "1 ↔ 2" near-collinear transition rates.³⁵ The basic point is that a hard scattering accompanied by bremsstrahlung does not change the distributions of quarks or gluons in any way which is distinct from the much more rapid effects of hard 2 ↔ 2 scatterings together with soft scattering accompanied by gluon bremsstrahlung.
In contrast, a soft scattering accompanied by gluon bremsstrahlung cannot be neglected, both because it changes particle number and because a single scattering of this type can produce an O(1) change in the momentum of an excitation (unlike the more rapid 2 ↔ 2 soft scatterings), at a rate which can be competitive with hard 2 ↔ 2 scattering.

³⁴ This is discussed at some length, in the context of bulk viscosity for scalar theory, in Refs. [24,25].

³⁵ Recall that bremsstrahlung gluons are dominated by angles less than or of order the deflection angle in the underlying 2 → 2 scattering event. Diagrammatically, this occurs because of cancellations between different diagrams, such as those of Fig. 3. But it can also be seen by reviewing classical formulas for the intensity of bremsstrahlung radiation. Since the momentum transfer of the most prevalent collisions is of order m_eff, the deflection angle is of order m_eff/p_primary for a primary excitation. In local equilibrium settings, this is of order g.

To make the above discussion more concrete, let us briefly specialize to typical excitations in near-equilibrium systems. In that case, the rate of hard scattering with hard gluon bremsstrahlung is O(g⁶T), which is O(g²) suppressed relative to the O(g⁴T) rate of hard 2 ↔ 2 processes. The rate of soft scattering with hard bremsstrahlung is O(g⁴T), which is O(g²) suppressed relative to the O(g²T) rate of straight 2 ↔ 2 soft scattering.

As a further example of the suppression of higher-order processes, consider soft 2 ↔ 2 scattering with hard double bremsstrahlung, as depicted in Fig. 12, which we might call a "1 ↔ 3" splitting. As already discussed, this would be suppressed by g^{2α} compared to single bremsstrahlung. It is therefore irrelevant at leading order, since a parametrically large number of "1 ↔ 2" single bremsstrahlung events will occur for every one of these double bremsstrahlung events. [Near equilibrium, the rate for this "1 ↔ 3" double bremsstrahlung is g⁶T, compared to the g⁴T rate of "1 ↔ 2" processes.]

Now consider adding one or more additional scatterings to the double bremsstrahlung process, to generate a diagram like the one shown in Fig. 13. Adding additional soft scatterings can increase the cross-section, because the internal line running from x to y in the figure can go on-shell.³⁶ This part of the amplitude, however, really represents two consecutive "1 ↔ 2" processes and so is already accounted for. The process of Fig. 13 differs significantly from two consecutive collisions only when the time between x and y is comparable to the duration (1/g²T near equilibrium) of the individual "1 ↔ 2" processes. In momentum space, that corresponds to the kinematic region where that propagator is just as off-shell in energy as the intermediate quark line in the double bremsstrahlung, single-soft-scattering case of Fig. 12 considered previously. In other words, there is no additional on-shell enhancement except when the process degenerates into separate scattering events, which are already included in the effective kinetic theory.

³⁶ For comparison, the internal quark lines in the double bremsstrahlung process of Fig. 12 have virtualities P² ∼ m²_eff, since the angle between each hard gluon and the emitting quark line is O(m_eff/p_primary). (See the previous footnote.) Hence these internal lines are off-shell in energy by an amount of order P²/E ∼ m²_eff/p_primary (or g²T in the near-equilibrium case). These internal lines are prevented from being more on-shell by the medium-dependent effective masses of particles.
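Collecting the near-equilibrium rates quoted in this subsection (our summary; all entries are parametric, and logarithms are ignored throughout):

\[
  \begin{array}{ll}
    \text{soft } 2 \leftrightarrow 2 \text{ scattering} & \sim g^2 T \\
    \text{hard } 2 \leftrightarrow 2 \text{ scattering} & \sim g^4 T \\
    \text{soft scattering with hard bremsstrahlung (effective } 1 \leftrightarrow 2\text{)} & \sim g^4 T \\
    \text{hard scattering with hard bremsstrahlung} & \sim g^6 T \\
    \text{double bremsstrahlung (} 1 \leftrightarrow 3 \text{)} & \sim g^6 T
  \end{array}
\]

The effective "1 ↔ 2" processes are thus the fastest number-changing reactions, parametrically as fast as hard 2 ↔ 2 scattering.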
In summary, multiple gluon bremsstrahlung processes are either indistinguishable from a sequence of "1 ↔ 2" and 2 ↔ 2 processes, or else are parametrically slower and do not accomplish any relevant relaxation not already provided by a sequence of faster processes. After examining these, and other possibilities, we are unaware of any processes which can affect reasonable observables at leading order in g, beyond those which have already been included in our effective theory.³⁷

³⁷ This assertion does deserve a few caveats. Since QCD exactly conserves the net fermion number in each flavor, but weak interactions do not, weak interaction collision terms cannot be neglected if one is interested in the evolution of flavor asymmetries on sufficiently long time scales. Similarly, in hot electroweak theory, explicit baryon production/destruction terms representing the effects of non-perturbative baryon number changing processes can be relevant on time scales large compared to the mean free times of the processes discussed in this paper. See Ref. [2] for more discussion of these points.

C. Soft gauge field instabilities

In section III, we noted that in processes involving soft gluon exchange, one must include the medium-dependent retarded self-energy in the intermediate gluon propagator in order to incorporate the screening (and Landau damping) of long-range gauge field interactions. For situations involving substantial departures from equilibrium, there is an important question which has not yet been addressed: does the medium-dependent self-energy actually cut off long-range interactions? In other words, are the resulting 2 ↔ 2 non-equilibrium scattering rates well-defined?

The relevant part of the phase space integral for soft gluon exchange processes involves an integral, shown in Eq. (6.1), over the spacelike momentum transfer of the product of advanced and retarded non-equilibrium gluon propagators. This is then contracted with factors which, by gauge invariance, are necessarily transverse to Q. If χ_a(Q) and λ_a(Q) denote the eigenvectors and associated eigenvalues of Π_Ret(Q), then the potentially dangerous part of this integral is the piece, displayed in Eq. (6.2), involving projections onto the same transverse eigenvector χ_a(Q) from both propagators. The integrand will be singular if the gluon propagator has poles at real spacelike momenta, which can potentially happen if an eigenvalue λ_a(Q) is real and negative for some domain of spacelike momenta. One may show that a spacelike pole in the gluon propagator creates a non-integrable singularity in the integral (6.2) and generates a logarithmic divergence in the soft collision rate. The presence of such a spacelike pole in the non-equilibrium retarded gluon propagator would imply that the corresponding modes of the soft gauge field have exponentially growing behavior in time. To our knowledge, the first discussion of such instabilities in the context of QCD plasmas was by Mrówczyński in Ref. [13].

In equilibrium, no such instability is possible. More generally, such instabilities do not appear if distribution functions are isotropic, but otherwise arbitrarily far from equilibrium. This is a consequence of the fact, discussed in Appendix A, that if distribution functions are rotationally invariant then the non-equilibrium HTL self-energies turn out to be proportional to their equilibrium form.
For anisotropic but parity invariant distributions, it turns out that spacelike poles are generically present. To see this, first note that the HTL self-energy (4.1) does not depend on the magnitude of Q, but only on its (four-dimensional) direction. And, as may be seen from Eq. (5.10), for any parity invariant distribution the imaginary part of Π_Ret(Q) vanishes identically when q⁰ = 0, implying that the eigenvalues of the zero-frequency self-energy are purely real. Within the q⁰ = 0 surface, if there are directions in which Re λ_a(Q) is negative, then there will be singularities in the integral (6.2). For generic anisotropic but parity invariant distributions, there are such directions. One may show that the angular average of the trace of the HTL spatial self-energy, Π^i_i(Q), vanishes at zero frequency, which means that the angular average of the sum of the two transverse eigenvalues of the spatial zero-frequency self-energy vanishes. Hence, if there is any direction in which an eigenvalue of the static spatial HTL self-energy is positive, then there must be some direction in which an eigenvalue is negative. The only way instabilities can be avoided is if the zero-frequency spatial gluon HTL self-energy vanishes identically, as it does for isotropic distributions.

If one perturbs a parity invariant distribution by adding a parity non-invariant component, then a continuity argument shows that instabilities will still generically be present for sufficiently small deformations away from the parity invariant case. [In the presence of an arbitrarily small parity non-invariant perturbation, Im λ_a(Q) must vanish on some surface which is a small deformation of the q⁰ = 0 plane. Within this surface, there must still be directions in which Re λ_a(Q) is negative, since there are such directions in the absence of the deformation.] Whether these spacelike poles of the gluon propagator persist for completely general non-parity-invariant, anisotropic distributions is not yet clear to us.

For a given set of distribution functions, if spacelike poles are present, then the characteristic wave vector (and growth rate) of the associated soft gauge field instabilities will be at most of order the non-equilibrium effective mass m_eff, since this is the scale which characterizes the size of medium-dependent self-energy corrections. For distribution functions which are parametrically close to isotropy, the wave vector and growth rate of soft instabilities will be parametrically smaller than m_eff. If any gauge field modes with wave vectors of order m_eff are unstable, then the growth of such instabilities would be expected to lead to spatial (and temporal) inhomogeneities in distribution functions on length scales of order 1/m_eff. Consequently, the growth of such instabilities should lead to a violation of the spacetime smoothness condition which underlies our effective kinetic theory. The physics which cuts off the growth of these instabilities, and removes the divergence in the soft scattering rate, can only come from including the effects of spacetime inhomogeneities inside the evaluation of the effective scattering rate. This means one can no longer use the derivative expansion which underlies the effective kinetic theory. As far as our effective kinetic theory is concerned, the net result is that its domain of validity is smaller than anticipated. We initially required that anisotropies in distribution functions not be parametrically large.
But it appears that O(1) anisotropies in the excitations responsible for screening will lead to violations of the assumed spacetime smoothness condition on a time scale that is shorter than the minimal time scale t_form for which the effective theory was designed. However, parametrically small anisotropies are still acceptable. An O(g) anisotropy in the distribution of "screeners" can only generate instabilities whose wave vectors are parametrically small compared to m_eff. In a leading-order treatment, one may simply excise this "ultrasoft" momentum region from the phase space integral (6.2). The resulting ambiguity in the effective collision rates will be subleading in g, and our effective kinetic theory should remain valid for the intended class of observables: those primarily sensitive to the dynamics of hard excitations.

Clearly, it would be interesting to study the effects of soft gauge field instabilities in non-equilibrium systems and try to understand their influence on physical observables which probe the relevant soft or very-soft dynamics. This is a topic for future work.

D. Effective kinetic theory beyond leading order?

Typical effective theories (such as heavy quark theory, or non-relativistic QED) can be systematically improved order-by-order in powers of the ratio of scales which underlies the effective theory. Having constructed a leading-order effective kinetic theory for the dynamics of a hot gauge theory, it is natural to ask whether one can formulate a beyond-leading-order kinetic theory which will correctly incorporate relative corrections suppressed by one or more powers of g. This is an interesting open question. Consider, for simplicity, the case of systems which differ from equilibrium by at most O(1), so that all relevant hard momenta are O(T). Any attempt to construct an effective theory of hot dynamics beyond leading order must handle numerous different sources of subleading corrections. These include:

1. Kinematic mass corrections of order m²_eff/p²_hard ∼ g². These are everywhere: in the convective derivative of the Boltzmann equation, in the overall kinematics of the collision terms, inside the effective 2 ↔ 2 scattering amplitudes, etc. Consistently including such corrections should be feasible, but will force one to separate and subtract the degenerating parts of scattering rates, as discussed in section VI A.

2. Contributions from higher-order tree processes, such as bremsstrahlung from hard scattering. Numerous such additional processes appear at O(g²) and would need to be included in the collision terms, again with appropriate care to eliminate phase space regions where an intermediate line goes on-shell and the process separates into multiple scattering events.

3. Loop corrections to 2 ↔ 2 effective scattering amplitudes. For hard scattering, these will be order g² effects, but for soft scattering, the relevant loop expansion parameter is g, not g². Also, the HTL approximations to the self-energies required in soft exchange processes receive O(g) corrections because of kinematic approximations made in the HTL results.

4. Subleading corrections to effective near-collinear transition rates, and a proper treatment of the soft emission region. We believe these enter at O(g). Evaluating such corrections would require a next-to-leading-order treatment of LPM suppression. This is unknown territory.

5. Contributions from soft (p ∼ gT) on-shell excitations.
The size of such contributions depends on the sensitivity of observables of interest to soft momenta. For observables like fermion current densities or the traceless part of the stress tensor (whose behavior determines diffusion constants and shear viscosity), soft contributions are suppressed by O(g⁴) or more. Incorporating soft contributions requires formulating a kinetic theory which correctly describes both hard (ultrarelativistic) and soft (non-relativistic) excitations.

6. Contributions from non-perturbative gauge field dynamics on the g²T (ultrasoft) scale. The importance of very small angle scattering via ultrasoft gauge boson exchange is suppressed, relative to soft exchange, by g²; so we expect ultrasoft physics to enter as g² corrections to our effective kinetic theory. This means that non-perturbative inputs will be necessary to formulate a kinetic theory which correctly describes O(g²) corrections. We have no idea how this could be done in practice.

7. Corrections due to the uncertainty in energy of excitations. The relative size of such corrections is controlled by the inverse of an excitation's energy times its mean free time between scatterings. For soft scatterings of hard excitations, this is O(g²). Correctly incorporating such quantum corrections to kinetic theory is an interesting open problem.

Consistently incorporating O(g) corrections may well be feasible, but extending the effective kinetic theory to include O(g²) effects involves major conceptual challenges as well as technical difficulty. The parametric sizes of these corrections are collected below.
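For orientation, here is a tabulation (ours) of the correction sources just listed, with the parametric sizes quoted above for systems within O(1) of equilibrium:

\[
  \begin{array}{lc}
    \text{1. kinematic mass corrections} & O(g^2) \\
    \text{2. higher-order tree processes} & O(g^2) \\
    \text{3. loop corrections: hard exchange / soft exchange} & O(g^2) \ / \ O(g) \\
    \text{4. NLO near-collinear transition rates} & O(g) \\
    \text{5. soft on-shell excitations} & O(g^4) \text{ or smaller (for the observables named above)} \\
    \text{6. ultrasoft non-perturbative dynamics} & O(g^2) \\
    \text{7. energy uncertainty of excitations} & O(g^2)
  \end{array}
\]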
VII. CONCLUSION

We have argued that quasiparticle dynamics in relativistic plasmas associated with hot, weakly-coupled gauge theories (such as QCD at asymptotically high temperature T) can be described by an effective kinetic theory, valid on sufficiently large time and distance scales. This effective theory is adequate for performing leading-order evaluations of observables (such as transport coefficients and energy loss rates) which are dominantly sensitive to the dynamics of typical ultrarelativistic excitations. In other words, our effective theory neglects effects which generate relative corrections suppressed by powers of the gauge coupling g, but correctly includes all orders in 1/log g⁻¹. To construct such a leading-order effective theory, it was necessary to include in the collision term of the kinetic theory both 2 ↔ 2 particle scattering processes as well as effective "1 ↔ 2" collinear splitting and merging processes, which represent the net effect of nearly collinear bremsstrahlung and pair production/annihilation processes taking place in the presence of fluctuations in the background gauge field.

Our effective kinetic theory is applicable not only to near-equilibrium systems (relevant for the calculation of transport coefficients), but also to highly non-equilibrium situations, provided the distribution functions satisfy the conditions discussed in section I C [as amended in section VI C]. In particular, these conditions require that there be a clear separation between the Debye screening scale and the momenta of typical excitations of interest, and that the excitations responsible for screening be close to isotropic. These conditions can be satisfied in asymptotically high temperature QCD, where the running coupling g(T) is truly small. They may also be satisfied at intermediate stages of collisions between arbitrarily large nuclei at asymptotically high energies [3]. What, if any, utility this effective theory has for understanding real heavy ion collisions at accessible energies is not yet clear. However, we believe that understanding dynamics in weakly coupled asymptotic regimes is a necessary and useful prerequisite to understanding dynamics in more strongly coupled regimes.

APPENDIX A

… and sum to the metric, P^{μν} + Q^{μν} + K^μ K^ν/K² = η^{μν}. The transverse and longitudinal …

APPENDIX B: RELATIONSHIP OF dΓ_g/d³k TO γ^a_{bc}

The leading-order equilibrium differential rate of production of hard gluons, as defined in Ref. [8], corresponds to the gain part of the gluon collision terms (2.2) and (2.7), evaluated in equilibrium and multiplied by ν_g/(2π)³. Explicitly, the near-collinear LPM-suppressed part of the production rate for hard gluons with momentum k = k n̂ is, at leading order,

    dΓ^LPM_g/d³k = [ν_g/(2π)³] { (1/2) ∫₀^∞ dp dp′ δ(p′+p−k) γ^g_{gg}(k n̂; p′ n̂, p n̂) n_b(p′) n_b(p)
        + ∫₀^∞ dp dp′ δ(p′−p−k) γ^g_{gg}(p′ n̂; p n̂, k n̂) n_b(p′) [1+n_b(p)]
        + N_f ∫₀^∞ dp dp′ δ(p′+p−k) γ^g_{qq̄}(k n̂; p′ n̂, p n̂) n_f(p′) n_f(p)
        + 2 N_f ∫₀^∞ dp dp′ δ(p′−p−k) γ^q_{qg}(p′ n̂; p n̂, k n̂) n_f(p′) [1−n_f(p)] } .   (B1)

The 1/2 in the first term is an initial-state symmetry factor, the 2 multiplying the final term reflects the identical contributions from q → qg and q̄ → q̄g, and N_f is the number of Dirac fermion flavors. n_b(ω) and n_f(ω) are equilibrium Bose and Fermi distribution functions, respectively.

Ref. [8] expresses results more compactly by using crossing symmetries. For a more direct comparison with that paper, expression (B1) can be rewritten as

    dΓ^LPM_g/d³k = [ν_g/(2π)³] { ∫_{−∞}^{∞} dp dp′ δ(p′−p−k) γ^g_{gg}(p′ n̂; p n̂, k n̂) n_b(p′) [1+n_b(p)]
        + N_f ∫_{−∞}^{∞} dp dp′ δ(p′−p−k) γ^q_{qg}(p′ n̂; p n̂, k n̂) n_f(p′) [1−n_f(p)] } .   (B2)

However, the authors of Ref. [8] should be profoundly chastised for not pointing out that this differential gluon production rate is, in fact, an infrared divergent quantity. The problem arises from the p → 0 region of the g ↔ gg term in (B2). This portion of the integral represents processes in which a hard gluon with momentum p′ nearly equal to k experiences a soft scattering, with emission or absorption of a soft gluon, to yield a hard gluon with momentum k. But physical quantities can only depend on this production rate minus the corresponding rate at which gluons are scattered out of the mode k, and the infrared sensitivity cancels in the difference of these rates. In other words, although the production rate dΓ^LPM_g/d³k is not actually well-defined, the complete collision terms (2.7) built from the same near-collinear transition amplitudes are infrared safe.
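To see the origin of this p → 0 sensitivity explicitly (a two-line observation of ours, not a computation from Ref. [8]), note the small-argument behavior of the Bose factor appearing in the g ↔ gg term of (B2):

\[
  1 + n_b(p) = 1 + \frac{1}{e^{p/T} - 1}
  \;\longrightarrow\; \frac{T}{p} + \frac{1}{2} + \mathcal{O}(p/T)
  \qquad (p \to 0) \,,
\]

so the statistical weight for emission or absorption of a soft gluon grows like T/p. The p-integration is therefore infrared sensitive unless this gain term is combined with the matching loss term, as described above.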
The Butterflies of Barro Colorado Island, Panama: Local Extinction since the 1930s Few data are available about the regional or local extinction of tropical butterfly species. When confirmed, local extinction was often due to the loss of host-plant species. We used published lists and recent monitoring programs to evaluate changes in butterfly composition on Barro Colorado Island (BCI, Panama) between an old (1923–1943) and a recent (1993–2013) period. Although 601 butterfly species have been recorded from BCI during the 1923–2013 period, we estimate that 390 species are currently breeding on the island, including 34 cryptic species, currently only known by their DNA Barcode Index Number. Twenty-three butterfly species that were considered abundant during the old period could not be collected during the recent period, despite a much higher sampling effort in recent times. We consider these species locally extinct from BCI and they conservatively represent 6% of the estimated local pool of resident species. Extinct species represent distant phylogenetic branches and several families. The butterfly traits most likely to influence the probability of extinction were host growth form, wing size and host specificity, independently of the phylogenetic relationships among butterfly species. On BCI, most likely candidates for extinction were small hesperiids feeding on herbs (35% of extinct species). However, contrary to our working hypothesis, extinction of these species on BCI cannot be attributed to loss of host plants. In most cases these host plants remain extant, but they probably subsist at lower or more fragmented densities. Coupled with low dispersal power, this reduced availability of host plants has probably caused the local extinction of some butterfly species. Many more bird than butterfly species have been lost from BCI recently, confirming that small preserves may be far more effective at conserving invertebrates than vertebrates and, therefore, should not necessarily be neglected from a conservation viewpoint. Introduction different for invertebrates. There is little doubt that habitat modification and loss greatly affects local butterfly richness in tropical rainforests [6]. For example, butterflies that use trees as larval hosts are more likely to be absent in logged forests, whereas butterflies using grasses and lianas as larval hosts are likely to be more abundant in these forests [35]. Extinction risks in butterflies appear to be correlated with larval utilization of certain plant growth types, adult habitat specialization, larger body size, and narrow geographic range [6,10]. Thus, larval host plants represent a crucial factor in assessing extinction risks in butterflies [11]. The aims of this study are fourfold: (1) to estimate total butterfly species richness on the island; (2) to provide baseline data for assessing long-term changes of butterflies on BCI in the future; (3) to assess the likelihood of local butterfly extinctions on BCI since the 1930's; and (4) to identify possible species traits (accounting for butterfly phylogeny) likely to influence the probability of extinction. Our working hypothesis is that likely butterfly extinction on the island may be related to the loss or severe reduction of larval host-plant species [11]. To achieve (1) and (2) we established a revised checklist of the butterflies of BCI, including Hesperiidae and Lycaenidae, and employing a 'DNA barcode' approach [36] (see methods). 
To estimate (3) we compared the older and recent species compilations (see methods) and to perform (4) we compiled the ecological information associated with each butterfly species recorded from BCI. Study site Barro Colorado Island (9.15°N, 79.85°W, 120-160m asl) in Panama is a biological reserve. It receives an annual average rainfall of 2631 mm, and has an annual average daily maximum air temperature of 28.5°C. The 50 ha CTFS plot is located in the center of the island. A detailed description of the setting and of the CTFS plot may be found elsewhere [37,38]. Circa 1880, about 45% of what is now the island was covered with old growth forest, whereas the rest represented shifting agriculture, with a probable mosaic of corn, upland rice and secondary forests that were about 20 years old [39][40][41] (S.J. Wright, pers. comm). Around 1910, the Chagres River was dammed to create the Panama Canal. Cerro Barro Colorado, cut off from the mainland by the rising water, became a 1,542 ha island (M. Solano, STRI GIS laboratory, pers. comm.). Circa 1920Circa -1923.7% of the island was covered with forests [40] but agricultural clearings still existed and gradually regenerated into secondary forest after BCI became a protected biological reserve in 1923 [40,41]. By 1930, forest ("primeval" and secondary forests) occupied 99.2% of the island's area, the rest representing various clearings, including laboratory buildings and a small agricultural area used to grow plantains, yucca and fruits, which was maintained until the 1940s and then grew back to forest [40] (S.J. Wright, pers. comm). In the 1990's small areas were cleared for additional laboratory buildings, including the former small agricultural area. Currently, the island is 100% forested, bar a few man-made clearings concentrated in the laboratory and housing area and one lighthouse clearing. Hence, recent changes in the BCI vegetation have been relatively few [41], but may nevertheless have affected its butterfly communities. As Enders [40] indicated: "although insignificant in area, these clearings are important as they are occupied by both plant and animal forms that do not occur in other areas." records were not distributed evenly across the years but rather emphasized active collecting in the 1930's and recent monitoring in the 2000's (S1 Fig). Our analyses aim at comparing the two richest periods with records, the 1923-1943 period and the 1993-2013 period, hereafter termed old and recent periods, respectively. As far as possible, we consider data that span several years for defining these periods, because insect populations may fluctuate markedly from year to year in Panama [30,51] or even become temporarily extinct [24]. Other studies documenting insect extinctions also considered 20 years with no recorded sightings as an adequate period after which a species can be considered as extinct [52]. A few isolated records were also extracted from specialized publications and those are listed in S1 Appendix. The highest number of records was obtained from the compilation of Huntington [12] for the old period, whereas the CTFS monitoring [8] yielded most records for the recent period (Table 1). Most (95%) of the older records were obtained from years 1931-1933 (S1 Fig). The old published lists [12,[43][44][45] listed, for each species, independent records by date and sex, by compiling observations obtained from different collectors during different periods of the year. 
It is impossible to provide an estimate of sampling effort for the old period but it is rather low as compared to the recent period in terms of number of records (Table 1). The lists of Bell (1931Bell ( , 1937 [43,44] focus on Hesperiidae but they do not over-represent this group when comparing data to the recent period, as sampling effort is higher and unbiased towards specific groups for the latter. There is no reason to believe that there was under-representation of particular taxonomic groups within the old period, with the exception of the night-flying Hedylidae, which were not recorded during that period, and are therefore not included in the present study. In a few cases (13 species, 4% of the number of species recorded during the old period), species were characterized as being "abundant" or "frequent" without mention of the number of individuals actually collected or observed. These records were scored as 10 individuals. For the recent period the Srygley and CTFS monitoring (Table 1) were rather complementary, in terms of surveying different habitats. The former was performed by walking trails and searching gaps, including the "laboratory" gap. Surveying included both the dry and wet season and overall sampling effort amounted to 237 person-hours of observation. The latter used standardized Pollard walks to calculate indices of butterfly species abundance along 10 linear transects of 500m that were repeatedly sampled over a given time interval (30 min.) [53]. These transects were all located within and near the CTFS plot, in the shady understory of the forest. Each transect was surveyed as three replicates in each of four surveys encompassing dry and wet seasons. In total 30 transect-replicates were performed during a survey and 120 during a year (details in [8]; total 345 person-hours of observation or 345km of transects). CTFS and Srygley records also included opportunistic netting and collecting with traps baited with fruits on BCI. Voucher specimens were deposited into the collections of the CTFS-ForestGEO Arthropod Initiative at STRI. Species identification Butterfly identifications dating from the old period were made by professional taxonomists using morphological data [12,43,44]. In the recent period, many of our identifications relied both on morphological and molecular data. For the monitoring projects (Srygley, CTFS: Table 1), reference collections were built before the onset of monitoring and used later to identify, whenever possible, butterflies in flight. However, an appreciable proportion of specimens sighted was also collected for verifying identifications. For the largest dataset, CTFS, 2,461 specimens out of 7,159 (34%) were pinned specimens. This represents about 1,000 specimens more than the total number of observations recorded during the old period (Table 1). These pinned specimens were first identified morphologically using [16,[54][55][56], more specific publications when needed, and expert opinions (see Acknowledgments). Second, 1,328 CTFS pinned specimens were extracted one leg each, which were then processed at the Biodiversity Institute of Ontario (University of Guelph) using methods in [57,58] to obtain DNA Cytochrome c oxidase subunit I (COI, 'DNA barcode') sequences. We obtained 1,248 sequences out of these specimens, representing 17% of the total CTFS records. Molecular data were used to confirm identifications based on morphology and to examine the possibility that morphological uniformity might conceal cryptic species. 
All DNA sequences are available publicly on-line in the BOLD database (http://www.boldsystems.org/, project BCIBT) and in GenBank (http:// www.ncbi.nlm.nih.gov/genbank/, GenBank accession numbers: KP848543-KP849461). All species delineated by molecular data (where COI divergence > 2% between species: [59]) are unambiguously referred to by their Barcode Index Number (BIN), which can be used as interim taxonomic nomenclature if needed [59]. Since it was not possible to examine the older specimens collected on BCI, deposited in various collections, we also searched the BOLD database to associate BINs to butterfly species not collected by CTFS during the recent period, and to be sure that they were distinct from BINs of recently collected specimens. As indicated earlier, we used Lamas [42] to update the taxonomy of older records. Following Braby et al. [60], we refrained from using subspecies, since they are often inconsistently defined and they frequently fail to reflect distinct evolutionary units according to population genetic structure. Our only exception to this rule was when we encountered distinct BINs for subspecies and in this case only, we list them as separate entities, awaiting further taxonomic analyses. Morphospecies not yet formally described but with distinct BINs are termed "cryptic" species, even if in some cases they can be easily distinguished morphologically from other species. Interim names for cryptic species follow BOLD recommendations. Cryptic species were ignored in comparisons of old vs. recent periods because (a) they were not recognized during the old period; and (b) in most cases their abundance was lower than relevant sister species within the recent period. Phylogenetic relationships Following extensive searches of the BOLD BCI Butterfly Data Base, the BOLD public records and GenBank we found barcodes from 451 of the 601 butterfly species recorded in the study for inclusion in our phylogenetic analysis. Given the scale of the analysis (both in terms of taxon number and taxonomic scale) and the fact that by definition within BIN variation rarely exceeds between BIN variation we decided to choose one exemplar per BIN. Our primary criterion was the length of the sequence read, subsequent decisions on inclusion were made arbitrarily. All data were downloaded from BOLD (including sequences mined from GenBank) either directly or by using the 'read.BOLD' function in the R package 'Spider' [61]. For species to be included in further analyses (see below) for which no barcodes existed (mostly 'extinct' species) we inserted an additional congeneric species as a replacement (S1 Appendix). This decision was made based on the taxonomic scale of the analysis and the reduction in statistical power caused by the omission of 11 taxa. To minimize the genetic distance between the original species and the replacement we chose congeners from the same region, making the assumption that these would be more closely related than, for example, Asian congeners. Using BEAST v2.1.0 [62] we estimated both topological relationships and branch lengths in millions of years. We found that setting strong node age and rate priors reduced the computational time needed for the BEAST 2.0 analyses to converge after burnin and allowed better exploration of the model parameters. Importantly it led to increased Effective Sample Size (ESS) for most parameters. 
Our phylogeny was constrained to match all of the inter-familial relationships that received high support (posterior probability support of one in all cases) by Heikkilä et al. [63]. Furthermore the relationships among subfamilies of Nymphalidae were constrained following Wahlberg and Wheat [64]. We used a secondary calibration point to estimate the node ages of our phylogeny, namely an exponential prior on the node age of the subfamily Nymphalinae with an offset of 60 MY and a mean of 15. This reflected the range of age estimates for the subfamily given by Wahlberg [65]. Furthermore, we used a relaxed lognormal clock with a ucld. mean of 0.0177 substitutions per million years (M parameter in real space) and log-normal distribution with an S parameter of 0.1 (in log-normal space). This gave a median rate of 0.0177 substitutions per million years with a 5% quantile of 0.0149 and a 95% quantile of 0.0208. This prior is based on the insect COI substitution rate taken from Papadopoulou et al. (2010) [66], but with wider bounds. Whilst a TIM2+I+G model was selected by jModelTest v2.1.6 [67] we selected a Hasegawa, Kishino and Yano (HKY) substitution model with a Gamma shape and a proportion of invariant sites (HKY+I+G). This was because the low cytosine (0.08) and guanine (0.01) base frequencies made any substitution rates involving these bases difficult to estimate, leading to poor mixing in parameters related to substitution rates and incomplete convergence, even over hundreds of millions of generations. We ran two MCMC chains of 80,000,000 generations which were sampled every 4,000 generations. These chains were resampled every 16,000 generations to yield a total of 10,000 trees which were used in combination for estimating the topology and node ages after removing a 'burnin' of 10% for each chain. We assessed the convergence of each statistic using Tracer v1.6 [68] to ensure that all Effective Sample Sizes (ESSs) were all over 200. We summarized the trees and median node ages using TreeAnnotator v2.1.2 [69]. Statistical methods We assigned each butterfly species recorded on BCI during the period 1923-2013 to one of the categories of "abundance status" listed in Table 2. These categories are used as a convenience to characterize our overall dataset and not all of them may have a biological significance. Most of these categories indicate the uncertainty of the status of species but the following categories are of special interest in the context of this study: -Category 7: these species were well represented throughout 1923-2013 and can probably be considered as resident on BCI. -Category 8: these species may include new colonizers/invasive species to the island or species with a recent high increase in population size. However, because sampling effort was much larger in the recent period than in the older one (as judged by the number of individuals recorded, Table 1), sound interpretation of these data is difficult. -Category 9: these species may be considered as locally extinct from BCI, or at least with extremely low populations. Despite being collected with frequency during the old period (sum of individuals 10) and a considerably higher sampling effort during the recent period, they could not be located anymore. Note that all species included in Category 9 have a significant Fisher test (p < 0.0001) when comparing their species abundances in old vs. recent periods to that of all individuals recorded during these periods. 
-Category 4: although these species were not collected during the recent period, their status is unclear because they were not abundant during the old period. However, adding the number of species in Categories 9 and 4 provides us with a maximum number of species that may be extinct on BCI. In order to anchor any estimate of butterfly extinction on BCI, we estimated the species richness of butterflies on the island by three separate methods. First, we estimated the total number of species likely to be present in the shady understory of BCI forests by randomizing Table 2. Categories of abundance status used to characterize each butterfly species recorded on BCI during the period 1923-2013, and denominations used in the context of this study. Ind. = individuals. Category Description No. of species 1 Species currently only recognized with molecular data (BIN). These are denominated cryptic species. [70]. Second, we calculated similarly the ICE with the number of individuals and species collected/observed with all data available for the recent period . To appreciate the theoretical taxonomic knowledge of the BCI butterfly fauna, we plotted the year of species description against the cumulative number of BCI species described. For cryptic species uniquely known by their BIN, we used the year in which the species was first collected as the year of description. Last, we fitted a logistic power regression to the cumulative number of individuals sequenced against the cumulative number of cryptic species discovered. We used Nonmetric multidimensional scaling (NDMS) to compare the faunal composition of years on record for which at least 300 butterfly individuals were recorded (years 1932, 1968, 1988, 2003 and 2008 to 2013; matrix of 592 species x 10 years; Bray-Curtis distance). We used WinKyst 1.0 of the CANOCO package for these calculations [71]. We further performed multiple regressions with the scores of the years on Axes 1 and 2 of the NDMS ordination as dependent variables and variables Year and number of individuals collected (as proxy for sampling effort) as independent variables, in order to test the influence of time. We tested for phylogenetic signal with regards to extinction status across the whole 451 taxon phylogeny, for all 151 members of the family Hesperiidae (12/23 extinct species were hesperiids), and for all 161 species of Nymphalidae (7/23 extinct species were nymphalids). Not only are these the two most species rich families in our data set with the highest numbers of extinct species, but they also diverge with respect to host use. Many members of the family Hesperiidae are often herb or palm feeders, whilst members of Nymphalidae feed across a wider range of plants [72]. We predict that extinction will be more clumped in hesperiids, that are less capable of utilizing alternative hosts and likely have more conserved patterns of host use than nymphalids. We calculated the D statistic for phylogenetic signal in a binary trait [73]. The value of the D statistic is based on the sum of changes between sister clades across the phylogeny. Highly clumped traits tend to have lower D values, closer to 0. In contrast more labile traits have higher values, with a D value of 1 representing a pattern close to phylogenetic randomness. 
We also compared the scaled value of the observed statistic to values simulated under a Brownian model of phylogenetic structure and under a model with no phylogenetic structure (each with 10,000 permutations), using the R package 'Caper' [74]. One can argue that a Brownian model of evolution is inappropriate for evaluating the phylogenetic signal of a non-evolved trait (albeit one correlated with evolved traits). Therefore we used a complementary significance-based approach to provide further support for these results, by testing for phylogenetic signal according to the mean phylogenetic distance (MPD) between extinct taxa. We used standardized effect sizes of MPD generated under null models of tip-label randomization (999 runs), as implemented in the R package 'Picante' [75] (a minimal version of this calculation is sketched below).

We tested for correlation between ecological and biogeographic traits of butterfly species and local extinction on BCI. Selection of these traits was guided by previous studies that considered the effect of anthropogenic disturbance on butterfly assemblages [2,6,10,11,76]. For this analysis, we considered only butterfly species that were common (> 10 individuals observed) during either the old or recent periods, or both (i.e., species in Categories 7 to 9). As the variables under investigation (see below) are likely to be phylogenetically conserved, we performed binary logistic regression in a phylogenetic context using the Phylogenetic Generalized Linear Models (PGLMs) of Ives and Garland [77], as implemented in the R package 'phylolm' [78]. By this approach we were able to assess the influence that each variable had on the probability of extinction independently of the phylogenetic relationships among butterfly species. The response variable was the probability of extinction, coded as either 0 (extinct species in Category 9) or 1 (common and settler species in Categories 7 and 8). The explanatory variables were host specificity (ordered categorical), host growth form (ordered categorical), geographic range (ordered categorical), color category (ordered categorical) and wing size (continuous; all variables are further described in S1 Text and S2 Fig). We used the "Logistic_MPLE" (Maximized Penalised Likelihood) method with a btol (bound on the linear predictor) of 1,000. Prior to analysis we tested for correlation among the explanatory variables by calculating the correlation coefficients between each pair of variables using Spearman's rank correlation; none exceeded 0.57 (S1 Table). Furthermore, we also calculated the Generalized Variance Inflation Factors (VIFs), as implemented by M. Helmus (unpublished code available from GitHub as 'AIC_func.r'), for our model without interaction terms; as none exceeded 1.6, we considered the inclusion of all explanatory variables to be valid. For our analysis we included all two-way interactions apart from Size:Range, Growth Form:Color Category and Range:Color Category, due to convergence problems caused by the inclusion of these terms (none of which were significant predictors when included in a separate model). Because no automated AIC-based model simplification procedure is currently available for 'phyloglm' models, we carried out backward stepwise model simplification based on the p-values of the model terms, removing non-significant interaction terms first and, at each step, the term with the largest non-significant p-value. All models were then compared using the function 'aictab.AICphyloglm' written by M. Helmus (unpublished code available from GitHub as 'AIC_func.r').
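For concreteness, the SES(MPD) calculation can be sketched outside R as follows. This is a minimal illustration with placeholder inputs (a random distance matrix standing in for the patristic distances, and the first 23 tips flagged as extinct); it is not the 'Picante' implementation itself.

```python
import numpy as np

def mpd(D, mask):
    """Mean pairwise phylogenetic distance among the taxa where mask is True."""
    sub = D[np.ix_(mask, mask)]
    iu = np.triu_indices_from(sub, k=1)
    return sub[iu].mean()

def ses_mpd(D, extinct, n_null=999, seed=0):
    """Standardized effect size of MPD under tip-label randomization."""
    rng = np.random.default_rng(seed)
    obs = mpd(D, extinct)
    nulls = np.empty(n_null)
    for i in range(n_null):
        nulls[i] = mpd(D, rng.permutation(extinct))   # shuffle tip labels
    return (obs - nulls.mean()) / nulls.std(ddof=1)

# Placeholder inputs: 451 taxa, 23 of them flagged as extinct.
rng = np.random.default_rng(1)
pts = rng.random((451, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # stand-in distances
extinct = np.zeros(451, dtype=bool); extinct[:23] = True
print(f"SES(MPD) = {ses_mpd(D, extinct):.2f}")  # strongly negative => clustering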
Because the flora of BCI is well known [79] and has been censused regularly [80], we were able to assess whether the host plants of extinct butterfly species may still be present on BCI, using recent plant records [81,82] as well as expert botanist opinion (S. Aguilar, pers. comm.).

Faunal composition and local species richness

The most species-rich butterfly families on BCI during the period 1923-2013 were Hesperiidae (33% of the total number of species), Nymphalidae (31%), Lycaenidae (15%), Riodinidae (14%), Pieridae (4%) and Papilionidae (3%; full species list in S1 Appendix). The most species-rich subfamilies were Pyrginae (including Eudaminae, to be consistent with Lamas [42]), Theclinae, Hesperiinae and Riodininae, each contributing more than 80 species, whereas the other butterfly subfamilies each contributed less than 40 species (S3 Fig). Most species (70%) in our list had valid BINs. Out of the total number of species in the list, 5% were cryptic species, 21% were species with sequences originating from elsewhere than BCI, and 45% were species with sequences originating from BCI (S3 Fig, S1 Appendix). For the old period, 64% of species had BINs, whereas this proportion reached 90% for species collected during the recent period (S1 Appendix).

Huntington [12] listed 267 butterfly species for BCI. Our full list for the 1923-2013 period includes 601 species. However, only 373 species (including cryptic species) were collected during the recent period (1993-2013), and the Incidence Coverage-based Estimator suggests that at least 514 ± 26.6 species (ICE ± SD) could have been collected on the island during recent times (Fig 1A). In the shady understory of BCI forests, using the CTFS transects, we collected 268 butterfly species during 2008-2013 (inset of Fig 1A). The ICE calculated with these data suggests that at least 390 ± 9.19 (SD) species may have been present in these forests during this period. Twenty-seven cryptic species were discovered after sequencing 1,228 individuals. The best-fit model between the cumulative number of individuals and that of cryptic species suggests that a total of 34 cryptic species remain to be discovered on BCI (parameter a of the logistic power regression, Fig 1B). That number represents about 9% of the total number of species collected during the recent period. Further, the theoretical taxonomic knowledge of BCI butterflies (S4 Fig) suggests that at the time of Linnaeus [83] only 4% of the species ever recorded on BCI had been formally described, whereas this proportion had already reached 91% at the time of publication of Huntington's list [12].

Comparison of old vs. recent periods

The NMDS analysis clearly separated recent monitoring years (2008-2013) from older years, including 1932, along Axis 1 of the ordination (Fig 2). Multiple regressions indicated that only the variable Year itself significantly explained the scores of years on Axis 1 (F = 32.02, p < 0.001, R² = 0.775, n = 10), whereas sampling effort (no. of individuals recorded) explained, to a lesser extent, the formation of Axis 2 (F = 5.61, p = 0.045, R² = 0.339, n = 10). The proportion of individuals recorded per family also varied across years, with, for example, a higher proportion of Pieridae (especially Itaballia spp.) and Riodinidae (Detritivora spp.) and a lower proportion of Hesperiidae being recorded during more recent years, as compared with 1932 (Fig 2).
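An ordination of this kind can be reproduced in outline with standard tools. The sketch below is a minimal stand-in for the WinKyst/CANOCO analysis: it builds a Bray-Curtis dissimilarity matrix from a placeholder year-by-species abundance matrix (random counts, not the real data) and runs a two-axis nonmetric MDS.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
abund = rng.poisson(1.0, size=(10, 592))           # placeholder: 10 years x 592 species

d = squareform(pdist(abund, metric="braycurtis"))  # Bray-Curtis dissimilarities

nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
scores = nmds.fit_transform(d)                     # year scores on Axes 1 and 2
print(scores.shape)                                # (10, 2)
```

The two columns of `scores` would then serve as the dependent variables in the multiple regressions described above.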
These shifts in family proportions were confirmed by a Chi-square test comparing the number of individuals recorded in each family for the old vs. recent periods (Chi-square = 797.3, p < 0.0001, d.f. = 5). The Hesperiidae decreased from 36% to 10% of records, while Pieridae and Riodinidae rose from 6% to 20% and from 12% to 24%, respectively. In contrast, the number of species per family remained similar between the old and recent periods (Chi-square = 5.3, p = 0.374, d.f. = 5). There was also a negative correlation between the number of individuals per species recorded in the years 1932 and 2013 (r_s = -0.301, p < 0.001, n = 327) and, similarly, between the number of individuals per species recorded during the old and recent periods (r_s = -0.252, p < 0.001, n = 529).

Locally extinct butterfly species

Cryptic and recently described species represented 5% and 4%, respectively, of the 601 species collected during the period 1923-2013 (Table 2). Common and settler species represented 12% and 3% of the total, while locally extinct species represented only 4% (23 species). The vast majority of species (73%) had an unclear status of abundance according to our classification. Hence, among the species that were relatively abundant during 1923-2013 (Categories 7, 8 and 9 in Table 2), locally extinct species represented a much higher proportion, 20%. Thus, our analyses contrasted these three categories and, particularly, differences between common and extinct species. The distributions of butterfly families, host specificity, host growth form, geographic distribution, wing color and wing size across the different categories of abundance status (as defined in Table 2) are shown in Fig 3.

The simplest model with no non-significant terms was selected using differences in AICc scores. In this model, growth form (z = 3.083, p = 0.002) and wing size (z = 2.391, p = 0.017) had a significant effect on the probability of extinction, with smaller butterflies feeding on herbaceous plants more likely to go extinct. Host specificity was also retained as a significant term (z = -2.556, p = 0.011). The significant interaction terms Growth Form:Size (z = -2.154, p = 0.031) and Host Specificity:Range (z = 2.150, p = 0.032) were also retained in the model. The p-values derived for each coefficient are Wald-type p-values conditional on the phylogenetic correlation parameter (alpha), which was approximately 0.0132 for each model.

The 23 species listed as extinct are further discussed, in terms of possible misidentifications and loss of host-plant species, in S2 Appendix. This list includes 12 Hesperiidae, 7 Nymphalidae, 3 Riodinidae and 1 Lycaenidae. In 13 cases their host plants included herbs, in 6 cases palms, shrubs or trees, in two cases lianas, and in two cases the hosts were unknown. The most common case was a hesperiid feeding on herbs (8 cases). In contrast, Category 7 (common species) included 38 Nymphalidae, 11 Hesperiidae, 8 Pieridae, 6 Riodinidae, 4 Lycaenidae and 3 Papilionidae. For 19 of these 70 species, the host was either a tree or a shrub. The most common case in Category 7 was a nymphalid feeding on a liana. For 17 locally extinct species, we provide evidence that their host plant(s) are still extant on BCI and that extinction is unlikely to result from a decrease in host plant(s) abundance. However, local extinction of host plants may have resulted in the extinction of four butterfly species on BCI (S2 Appendix).
These include Corticea noctis, feeding on sugarcane; Staphylus mazans and S. musculus, feeding on Amaranthaceae and Chenopodiaceae (but these two species are likely to be misidentified, see S2 Appendix); and Ithomia iphianassa, feeding on Capsicum and Witheringia. Sugarcane and Capsicum were cultivated on BCI during the old period.

Discussion

Size of the butterfly species pool on BCI

Some 601 butterfly species have been recorded during 1923-2013 on the 1,542 ha Barro Colorado Island. This number is plausible given that in Costa Rica a complex mosaic of 120,000 ha of dry, cloud and rain forest over 0-2,000 m elevation hosts more than 978 butterfly species [84], and it suggests that about 40% of all butterfly species estimated for Panama might occur on BCI ([85]: 1,550 species). Unlike Janzen et al. [84], we could not systematically sequence each butterfly specimen collected during the recent period. Thus, our total estimate of 34 cryptic species on BCI (9%, against 16% for Janzen et al. [84]) probably represents an underestimate. These cryptic species were not included in the old vs. recent period comparisons. They could potentially complicate these comparisons because of possible misidentifications (S2 Appendix). The total number of species collected on BCI may also be inflated by misidentifications, but we have tried, whenever possible, to support our identifications with DNA barcoding.

During the recent period, 373 species were collected on BCI, and the ICE estimator suggests that at least 514 species could actually be present there. However, this does not imply that all of these species are resident (i.e., breeding) on the island. Extended flights over water near BCI are a likely occurrence for many butterflies [27]. These may originate from the nearest forests and open areas close to BCI and cross the water channel, which is 0.5-3.5 km wide depending on the location. Good dispersers may not necessarily be ovipositing on the island, while poor dispersers (smaller species) may not be able to cross the water channel. The dynamics of colonization and extinction for BCI butterflies would warrant further study. The steep rise in the accumulation of butterfly species in CTFS transects performed in the shady understory suggests a large species pool and, possibly, dispersal into the forest of species that usually prefer to fly in the canopy or open habitats. Because BCI is currently ca. 100% covered by forest, CTFS transects are probably representative for estimating the number of resident (breeding) species on BCI, which we estimated to be 390 species. We also note that 36% of the 70 common species (Category 7) have been reared from host plants on BCI and thus are more likely to be breeding on the island (datasets Aiello and Coley, Table 1).

Challenges in ascertaining local extinction rates

When Huntington published his checklist [12], taxonomic knowledge of BCI butterflies was good, with 91% of today's fauna already known. It is therefore sound to compare his checklist with the butterfly collections assembled after 1943, excluding cryptic species, which represent 5% of the species ever collected on the island. Taxonomic knowledge of BCI butterflies mirrored advances in Neotropical lepidopterology, with early descriptions (Linnaeus, Fabricius), followed by the monumental works of Bates [86] and Godman and Salvin [87], and the recent surge of cryptic species discovered by molecular techniques [84].
Many species in our classification had an unclear status of abundance (Categories 3 to 6; 63% of the total number of species in the list), because of the long tail of species-abundance distributions typical of tropical forest habitats [88]. This presents a common challenge to studies of tropical insect conservation. We need to discuss the cases where there is reasonable evidence for local extinction, i.e., species that were common during the old period but have not been collected recently despite a much higher sampling effort. Though it may yield extremely conservative estimates of local extinction, we must ignore species considered rare during the old period (Category 4, 132 species), because it is very difficult to ascertain whether they may be extinct on BCI today.

Misidentifications of extinct species cannot be discounted, but they should concern at most 9 of the 23 species listed in our extinct category. Eleven of the 23 extinct species have BINs (48% of species in Category 9; S2 Appendix), and despite high sampling efforts these BINs have not been sequenced from specimens collected recently on BCI. Still, our extinct species category includes many taxonomically challenging taxa for which we cannot exclude misidentification (11 out of 23 species, 48%). Ten of the 12 hesperiids listed as extinct were identified by Ernest L. Bell, who was one of the foremost authorities on New World hesperiids. Very few of his 200+ species and subspecies have fallen into synonymy [89], making it unlikely that he misidentified the hesperiids in our extinct species list. However, there are possible misidentifications, such as the two Staphylus spp., which do not occur in Panama (A.D. Warren, pers. obs.). Huntington's specimens [12] were identified by himself, assisted by Bell, Curran and Watson. Sheldon's [45] specimens were identified by Arthur Hall. The highest probability of misidentification concerns three extinct species uniquely recorded by Sheldon [45]: Pythonides herennius, Ithomia iphianassa and Emesis cerea, but this cannot be confirmed without vouchers. Finally, misidentifications are also possible among recent specimens, which could belong to species listed as extinct. This would be most likely for the four indistinct species without BINs: Corticea noctis, Pareuptychia ocirrhoe and the two Staphylus mentioned above.

Local butterfly extinction and relationships with host plants

What proportion of the probable pool of ca. 390 resident and extant species can we consider as locally extinct on BCI? The short answer is 23 spp. / (390 + 23 spp.), or 6%, if we include all extinct species in Category 9. More conservatively, we could include only the 12 species in Category 9 for which the probability of misidentification is very low, decreasing the proportion of extinct species to 3%. A more radical approach would consider all 132 species in Category 4 as being extinct or in danger of extinction, raising the proportion of extinct species to 38% (these proportions are recomputed in the short check below). Therefore, extinction rates may range between 3% and 38% of resident species, with a most probable estimate of 6%, as our data and analyses suggest. Moreover, many of the species locally extinct on BCI have been collected recently in the "Canal Area" adjacent to BCI (S2 Appendix). Phylogenetic signals of extinction were of intermediate strength where present, and occurred on distant phylogenetic branches within several families rather than in distinct clades.
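For transparency, the extinction proportions quoted above follow directly from the counts stated in the text; a one-off check in Python:

```python
resident = 390                                 # ICE estimate of resident species
print(f"{12 / (resident + 12):.1%}")           # Category 9, secure IDs only -> 3.0%
print(f"{23 / (resident + 23):.1%}")           # all of Category 9 -> 5.6% (~6%)
print(f"{(132 + 23) / (resident + 23):.1%}")   # Categories 4 and 9 -> 37.5% (~38%)
```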
The most speciose family, the Hesperiidae, showed the largest phylogenetic signal, as each subfamily had a clade with a higher extinction risk. In contrast, the second most speciose family, Nymphalidae, showed no phylogenetic signal with respect to extinction risk (if anything, borderline overdispersion was detected). These results may give some insight into how host use plays a role in species extinctions (in addition to the results of our logistic regression). Many members of the family Hesperiidae feed on herbs and palms, whilst members of Nymphalidae have a much wider range of host plants [72]. The more clustered extinctions in Hesperiidae may reflect fine-scale host conservatism within the family, whilst it is possible that nymphalid species are pre-adapted to host plant loss.

The traits most likely to influence the probability of extinction were host plant growth form, butterfly wing size and host specificity (specialist vs. generalist species), independent of the phylogenetic relationships among butterfly species. Size of geographical range and wing color pattern (dull vs. brightly colored species, as a proxy for bias in species detection) had no significant influence on the probability of extinction. Our most likely candidates for extinction were small hesperiids feeding on herbs (35% of extinct species). However, contrary to our working hypothesis, the extinction of these species cannot be attributed to the loss of host plants on BCI. In most cases those host plants remain extant on BCI, although their recent abundances have not been ascertained. The most drastic and recent changes in BCI vegetation have been the closure of the forest and the loss of open habitats. In 1929, herbs represented 48% of plant species on BCI [39]. Recent figures are not available because CTFS monitors trees and lianas on the island, not herbs [38]. However, many open habitats have been lost or fragmented, and many herb species must now subsist at lower and more fragmented densities than before. Many tropical butterfly species have been shown to have short dispersal distances (< 200 m; [90]). The decline in herbaceous hosts on BCI may have reduced the fitness of smaller butterfly species (poor dispersers) feeding on herbs, possibly leading to local extinction. The significant interaction between host growth form and butterfly size in the phylogenetic generalized linear model describing the probability of extinction lends credence to this interpretation. In temperate areas, some butterfly species occupy wide areas as "transient" colonies/populations that move around within that general range as habitats shift [91]. That scenario may also apply to some of our locally extinct species. Other factors, perhaps related to global climate change [92], may have pushed some BCI species towards extinction, although this is difficult to assess without good baseline data and ongoing monitoring programs.

Conclusions and Implications

A small number of butterfly species (6% of the resident pool) have probably become locally extinct on BCI. The populations of these species probably declined as a result of the loss and fragmentation of the habitats where their herb hosts were growing. A remedy to this situation, if one wants to re-establish these butterfly species on BCI, would be to manage the forest and locally allow the herbaceous vegetation to grow back. This is common practice to conserve butterflies of dry meadows in Europe [93].
Small amounts of disturbance can be beneficial for conserving a high proportion of biodiversity [94]. However, even if host plants are present, local conditions may not be suitable for butterflies to complete their life cycles on them, as is probably true for most of the extinct species reported in this study. Knowledge of species' life histories appears paramount to conserving them, even for tropical invertebrates. Karr [34] reported that 50-60 species of birds have been lost from BCI since its isolation from the mainland, as a result of several factors. In comparison, far fewer butterfly species have been lost from BCI in recent times. Butterflies are known to be able to maintain viable populations in tiny habitat fragments of a few to a hundred hectares [9]. Our data confirm that small preserves may be far more effective for conserving invertebrates than vertebrates and, therefore, should not be neglected from a conservation viewpoint.

Butterflies are popular indicators for monitoring forest disturbance, mainly because of logistical and visibility advantages over other insect taxa [95]. Changes in the presence and absence of species over the years were difficult to demonstrate in our study, as indicated by the high numbers of species in Categories 3 to 6. Changes in butterfly composition over the years on BCI were more clearly evidenced by the abundance of a few species limited to the forest interior (Y. Basset et al., unpubl. data; Itaballia spp.; Fig 2). Thus, for long-term monitoring of butterflies in tropical rainforests, analyses of population changes of particular species may be more informative than synecological analyses of whole assemblages based on presence-absence data. Lastly, irrespective of their conversion into plantations and secondary logged forests [96], tropical rainforests are going to lose butterfly and insect species in the relatively short term, through either natural succession or the effects of global climate change. In this regard, the 155 butterfly species listed in our Categories 4 and 9 probably represent potential candidates for extinction, but there is great uncertainty about their abundance status. To refine recent extinction rates of insects in tropical rainforests, we imperatively need to invest more resources in detailed long-term insect monitoring with adequate frequency.

Supporting Information

S1 Appendix. List of butterfly species (Hesperiidae & Papilionoidea) collected or observed on Barro Colorado Island, for the period 1923-2013. The number of specimens is summed for the old and recent (1993-2013) periods, as well as for the whole study period (1923-2013, Abundance), and detailed for each year on record. When available, Barcode Index Numbers (BINs) are indicated for each species. Indices of host specificity, geographic range and abundance status are also listed for each species, as well as congeneric replacements for extinct species for which barcodes were unavailable (see text). A few isolated records were also extracted from the specialised literature and are listed in the last column.
Design and Simulation Analysis for Integrated Vehicle Chassis-Network Control System Based on CAN Network

Because the systems used in vehicle chassis control serve different functions, the hierarchical control strategy leads to many kinds of network topology. Following the hierarchical control principle, this research puts forward an integrated chassis control strategy based on a supervision mechanism. The purpose is to consider how the integrated control architecture affects the control performance of the system after the intervention of the CAN network. Based on the principles of hierarchical control and fuzzy control, a fuzzy controller is designed to monitor and coordinate ESP, AFS, and ARS, and the IVC system is constructed from the upper supervisory controller and three subcontrol systems on the Simulink platform. The network topology of IVC is proposed, and the IVC communication matrix based on CAN network communication is designed. With the common sensors and the subcontrollers as independent CAN network nodes, the effects of the network-induced delay and the packet loss rate on the control performance of the system are studied by simulation. The results show that this simulation method can be used for designing the communication network of a vehicle.

Introduction

From the current development of vehicle chassis control systems, the trend toward integration and networking is very obvious [1]. The architectures of the system control and of the network have different degrees of influence on the stability of chassis control. Because the systems used in vehicle chassis control serve different functions, the hierarchical control strategy also leads to many kinds of network topology and many distributions of the system's computing tasks. In the 1980s, researchers began to decompose the complex chassis control problem into a number of subcontrol systems and then use a coordination mechanism to manage the dynamic relationships between the subsystems so as to meet the control requirements. The integrated control architecture of the chassis [2-9] therefore became a focus of research and discussion.

As far as integrated control strategies of the vehicle chassis are concerned, numerous studies have shown that hierarchical control can effectively reduce operational conflicts between different functional subsystems and quickly and effectively bring the vehicle to its best performance. Several studies [2-4] divide chassis control into longitudinal, lateral, and vertical subcontrol systems, realizing the integrated optimization control of the chassis through a hierarchical control strategy. Li et al. put forward an integrated chassis control structure based on the combination of a main loop and a servo loop and discussed the problems of directional force generation and force distribution in the chassis [6].
Chang and Gordon divided the chassis control system into three layers to achieve active collision avoidance control [8]. Using a system architecture in which the independent control units of the chassis are integrated under upper-level coordinated control [10] can effectively adjust the collaborative work of the control units, avoid conflicts between the controllers, and bring the vehicle to its best running state. Through the analysis of complex working conditions, a supervision mechanism can be used to coordinate the multiple control systems of the vehicle chassis, which achieves a very good control effect for the system integration [11].

For these reasons, this paper first puts forward an integrated chassis control strategy based on a supervision mechanism, according to the hierarchical control principle. After verifying the validity of this control strategy, the purpose of the study is to consider how the integrated control architecture affects the control performance of the system once the CAN network intervenes, and some exploratory simulation research is carried out. To facilitate the discussion, the integrated vehicle chassis control system based on network communication is abbreviated IVC-NCS, namely, Integrated Vehicle Chassis-Network Control System.

Dynamic Model of the Vehicle

At present, two vehicle coordinate systems are mainly used internationally [12]: the SAE vehicle coordinate system issued by the Society of Automotive Engineers and the ISO vehicle coordinate system issued by the International Organization for Standardization. In this paper, the SAE vehicle coordinate system is used for the modeling, calculation, and analysis of vehicle dynamics. Under the usual simplifying assumptions, the nonlinear vehicle dynamics model has eight degrees of freedom.

There are many tire models for calculating the complex nonlinear forces between the road surface and the wheel. The models most commonly used in engineering are the Magic Formula proposed by Pacejka [13,14] and the unified tire model for all working conditions proposed by Guo Konghui [15]. This paper uses the Dugoff tire model [16], which is often used in computer simulation; it is an analytical model with few, easily obtained parameters.

Architecture of IVC-NCS Based on Supervision Mechanism

Figure 1 shows the architecture of IVC-NCS based on the supervision mechanism. The three subcontrol systems are ESP, AFS, and ARS. Each subsystem can act according to the calculation of its local state variables. Based on the global state of the vehicle, the upper supervisory controller judges the vehicle stability and assigns a functional weight to each subcontrol system. The action of the actuators is determined by the calculation results of each subcontroller together with its control weight.

ESP Subcontrol System
ESP takes handling stability as the control target under critical conditions of the whole vehicle. By controlling the braking intensity of the four wheels, it achieves electronic control of vehicle active safety. Yaw rate tracking is the control target, and a braking force is applied at the appropriate wheel to correct the unstable state of the vehicle. The system adopts a sliding mode control strategy, and the tracking error of the yaw rate is defined as the sliding mode variable

s = ω − ω_idl,   (1)

where ω is the actual yaw rate and ω_idl is the ideal yaw rate. The condition for reaching the sliding surface is defined as

ṡ = −k s − ε sat(s/Φ),   (2)

where k and ε are positive constants: k reflects the response speed of the yaw tracking controller, ε reflects the convergence rate toward the sliding surface of the system, s is the tracking error of the yaw rate, and Φ is the thickness of the boundary layer. The sliding mode controller satisfies the stability condition in the sense of Lyapunov.

Ignoring the roll of the vehicle and considering formula (2), when the ESP control system brakes a single wheel, the additional yaw moment ΔM_z generated by the longitudinal driving or braking forces can be calculated (formula (3)). In this expression, I_z is the moment of inertia of the vehicle body about the z-axis, l_f and l_r are the distances from the center of mass to the front and rear axles, F_xfl, F_xfr, F_xrl and F_xrr are the longitudinal ground forces on the left-front, right-front, left-rear and right-rear tires, and ω_idl, k, ε, s and Φ are as defined above.

In order to improve the unstable state in extreme conditions, braking force is applied to the inner rear wheel when the vehicle understeers, or to the outer front wheel when the vehicle oversteers; this quickly and effectively improves vehicle stability. The additional yaw moment calculated by formula (3) is therefore converted to the equivalent braking force applied to the wheel where braking is most effective.

AFS Subcontrol System

In the steering system of the vehicle chassis, a relatively independent subcontrol system, AFS, is added to adjust the front wheel angle and obtain the optimum performance of IVC-NCS. The system adopts the same sliding mode control strategy: the tracking error of the yaw rate is again defined as the sliding mode variable s = ω − ω_idl, with the reaching condition of formula (2), where the positive constants k and ε again set the response speed of the yaw tracking controller and the convergence rate toward the sliding surface, and Φ is the thickness of the boundary layer. The resulting control law for the front wheel steering angle depends on the lateral vehicle speed v, the actual yaw rate ω, the ideal yaw rate ω_idl, and the parameters k, ε and Φ defined above.
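To make the reaching law concrete, here is a minimal discrete-time sketch of the yaw-moment computation and its closed-loop effect. It assumes a pure-inertia yaw model and illustrative parameter values; it is not the paper's eight-degree-of-freedom simulation.

```python
import numpy as np

def sat(x):
    """Boundary-layer saturation used instead of sign() to suppress chattering."""
    return np.clip(x, -1.0, 1.0)

def smc_yaw_moment(omega, omega_idl, domega_idl, k, eps, phi, Iz):
    """Additional yaw moment from the reaching law s_dot = -k*s - eps*sat(s/phi)."""
    s = omega - omega_idl                            # sliding variable (tracking error)
    return Iz * (domega_idl - k * s - eps * sat(s / phi))

# Illustrative closed-loop run: track a step in the ideal yaw rate.
Iz, k, eps, phi, dt = 2500.0, 8.0, 2.0, 0.05, 1e-3   # invented values
omega, omega_idl = 0.0, 0.25                         # rad/s
for _ in range(400):
    M = smc_yaw_moment(omega, omega_idl, 0.0, k, eps, phi, Iz)
    omega += (M / Iz) * dt                           # pure-inertia yaw dynamics
print(f"omega after 0.4 s: {omega:.3f} rad/s (target {omega_idl})")
```

The saturation term is the standard boundary-layer replacement for a discontinuous sign function; it trades a small steady-state band of width Φ for chattering-free actuation.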
In the control law above, the coefficients come from the two-degree-of-freedom vehicle model: b_21 = 2 l_f C_f / I_z, where l_f is the distance from the center of mass to the front axle, C_f is the cornering stiffness of the front tires, and I_z is the moment of inertia of the vehicle body about the z-axis; a_21 = 2 (l_r C_r − l_f C_f) / (I_z u), where l_r is the distance from the center of mass to the rear axle, C_r is the cornering stiffness of the rear tires, and u is the longitudinal speed; and a_22 = −2 (l_f² C_f + l_r² C_r) / (I_z u).

ARS Subcontrol System

Active four-wheel steering technology can improve the handling stability of the vehicle at high speed and its maneuvering flexibility at low speed. The ideal yaw rate calculated from the two-degree-of-freedom vehicle model is the tracked target, so ARS takes the rear wheel steering angle as the controlled variable. The system adopts the same sliding mode control strategy, with the sliding variable s = ω − ω_idl of formula (1) and the reaching condition of formula (2), where k and ε are positive constants. In order to restrain the high-frequency chattering caused by frequent switching on the sliding surface, Φ is taken as the thickness of the boundary layer; k reflects the response speed of the yaw tracking controller, and ε reflects the rate at which the system reaches the sliding surface.

Upper Supervisory Controller Design

The control idea of the supervisory controller is as follows: judge the steady state of the vehicle according to the stability factors, distribute the control-function weights of the three subcontrollers, and coordinate the output of each subcontroller. First, the stability factors of the front and rear wheels are defined [17] as the ratios of the sideslip angles to threshold values: SF_f = |α_f| / α_1 and SF_r = |α_r| / α_2, where SF_f is the possibility that the front wheels come into the slipping state and α_f is the sideslip angle at the middle of the left and right wheels on the front axle, and SF_r is the possibility that the rear wheels come into the slipping state and α_r is the sideslip angle at the middle of the left and right wheels on the rear axle. The thresholds α_1 and α_2 can be obtained by analyzing the relationship between the phase plane and the steering stability of the tire [18].

SF_f and SF_r express the possibility of the corresponding wheels beginning to slide. The larger the value, the greater the sideslip possibility of the corresponding wheels, that is, the smaller the control margin they provide; conversely, the smaller the value, the greater the effective control authority of the corresponding wheels. Repeated simulation tests showed that when SF_f and SF_r are less than 0.7, active steering control of the front and rear wheels can meet the requirements of vehicle stability; when SF_f or SF_r is greater than 1.3, ESP is more effective at correcting oversteer or understeer and keeps the vehicle stable quickly; and when SF_f and SF_r are in the range from 0.7 to 1.3, the wheels with the smaller stability factor provide the greater role in vehicle stability control. On this basis, the fuzzy logic controller is designed as follows.
The controller takes the stability factors of the front and rear wheels, SF_f and SF_r, as inputs. Their membership functions share the range [0, 2], with the fuzzy subset {S, MS, MB, B}, as shown in Figure 2(a). The outputs of the controller are the control weights of the three subcontrollers, whose range is [0, 1]. The membership functions of the AFS and ARS weights are the same, with the fuzzy subset {D, M, E}, as shown in Figure 2(b). The membership function of the ESP subcontroller weight is shown in Figure 2(c), with the fuzzy subset {S, MS, MB, B}. In these labels, S means small, M means medium, and B means big. Considering the computational cost and real-time requirements of a practical application, all membership functions are chosen to be easy to evaluate in code, such as triangular or trapezoidal functions. Table 1 shows the inference rules of the fuzzy controller of IVC.

Network Topology Design of IVC-NCS

According to the system control strategy of IVC, combined with the control requirements of vehicle stability, the following points are considered as the basis for the design.

Actual limitations of the vehicle space layout: because the CAN protocol and the corresponding international standards limit the length of the branches connecting the nodes to the communication trunk, the physical placement of the network nodes is one of the major considerations in the network topology. ARSC and AFSC, for example, are divided into two control units that control their systems separately, which makes it easier to connect the sensors and actuators.

Load capacity constraint of network communication: for IVC-NCS, if all sensors, controllers, and actuators exist as independent network nodes and the network works at the 250 kbps rate regulated for high-speed vehicle networks by SAE, then even a purely theoretical calculation of the CAN communication capability shows that the load capacity can hardly meet the control requirements (a rough estimate of this load is sketched below). If the communication speed is increased to 500 kbps, the anti-interference ability of the nodes becomes poor, so it is difficult to realize high-speed communication in a harsh electromagnetic environment.

Real-time requirements of the subsystems: the three subsystems of IVC-NCS are relatively independent closed-loop control systems. The ESP subsystem places high real-time demands on the wheel speed signals and requires the actuators to react quickly to control commands.

The sensors needed by several systems are designed as independent network nodes. Within each subcontrol system, the controllers, sensors, and actuators keep the traditional point-to-point connections; the objective is to obtain satisfactory real-time performance and reliability.

Based on the above analysis, the network in Figure 3 is designed as the IVC-NCS structure. The CAN network is taken as the communication medium between the controller nodes, and each subsystem is connected internally with the traditional point-to-point method. Considering that the ESP system has an obvious effect on vehicle stability in extreme conditions, the supervision and control tasks of the system and the ESP control calculations are assigned to one node.
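The load-capacity concern can be made concrete with a back-of-the-envelope bus-load estimate. The sketch below uses the worst-case length of a standard CAN data frame; the message set and its periods are hypothetical stand-ins for the Table 2 communication matrix, chosen only to illustrate the calculation.

```python
def frame_bits(n_bytes):
    """Worst-case bits in a standard (11-bit ID) CAN data frame with n data bytes,
    including stuff bits and the 3-bit interframe space."""
    return 8 * n_bytes + 47 + (34 + 8 * n_bytes - 1) // 4

BITRATE = 250_000  # bps, the rate used in the simulations

# Hypothetical message set as (data bytes, period in seconds).
messages = [(8, 0.005)] * 6 + [(8, 0.010)] * 4   # placeholder for Table 2

load = sum(frame_bits(n) / T for n, T in messages) / BITRATE
print(f"bus load without interference: {load:.0%}")   # ~86% for these periods

# A single high-priority interference message every 4 ms pushes the load toward 1.
load_irq = load + frame_bits(8) / 0.004 / BITRATE
print(f"bus load with interference:    {load_irq:.0%}")
```

With message periods of this order, ten 8-byte messages already consume most of the 250 kbps bandwidth, and one additional 4 ms high-priority message drives the load toward 1, matching the qualitative situation exercised in the simulations below.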
The sensor signals are the basis on which the controllers judge the state of the vehicle and issue control instructions. When the network communication load suddenly increases, the probability of losing low-priority sensor signals increases significantly. Therefore, in order to guarantee the real-time transmission of the sensor signals, the message priority of the sensor nodes is set higher, to avoid message loss within a control cycle, which would lead to control instability. Table 2 shows the communication matrix of IVC-NCS. Messages Msg7 and Msg9, the status messages of the actuators, help the controller nodes follow the operating status of the system; because they do not participate in the control calculations, their priority is low and their transmission period is relatively long.

Simulation and Result Analysis

The state of the vehicle is calculated according to the nonlinear vehicle model with eight degrees of freedom, and the Simulink platform is used for the simulation. Before assessing the performance of IVC-NCS, the IVC system is simulated and tested to verify the effectiveness of the controller.

Effectiveness Verification of IVC System Control

In order to verify the effectiveness of the IVC system, sine and step curves of the steering wheel angle with a maximum value of 5 degrees (about 0.087 rad) are input to simulate the tracking response of the vehicle under different yaw rate demands. According to the transmission ratio of the steering system, the corresponding input curve of the front wheel steering angle is shown in Figure 4(a). The vehicle travels on a good road with an adhesion coefficient of 0.85, and the initial speed is 25 m/s. Figures 4(b) and 4(c) are the response curves of the vehicle yaw rate for the two steering angle inputs. It can be seen that the yaw rate of the controlled vehicle quickly and effectively tracks the ideal value, calculated from the two-degree-of-freedom model, when compared with the system without control (a minimal sketch of such a reference generator follows below). For the sine input, the vehicle executes a nonstandard single-lane-change test; owing to the corrective changes of the front wheel angle, the yaw rate settles at zero after the apparent slip, as shown in Figure 4(b). Under the step input of the steering wheel angle in Figure 4(c), the yaw rate of the uncontrolled vehicle cannot track the ideal value and tends to diverge, so the vehicle cannot achieve stable circular motion and rolls over because of instability. The yaw rate of the vehicle with the controllers tracks the ideal value well. The simulation results show that the IVC system can effectively improve vehicle stability in critical conditions, which verifies the effectiveness of the designed control system.

Simulation Analysis of IVC-NCS Based on CAN

In order to investigate the performance change of the designed IVC system after the CAN network becomes involved in the control, the stability of the vehicle was investigated using the same step input of the steering wheel. The initial speed is 25 m/s, and the road adhesion coefficient is 0.85. Considering the practical application of high-speed CAN, the communication rate is set to 250 kbps. The nodes send only data frames. If the interfering nodes do not send any message, the network load is about 84% at maximum filling. When the interfering nodes send high-priority interference messages with a 4 ms period, the network load is close to 1 but remains below the network bandwidth, which ensures that the system communication does not lose frames.
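The ideal yaw rate reference used as the tracking target can be sketched from the standard steady-state bicycle-model gain, with a friction-limited cap that is common in stability control. All parameter values below, including the steering ratio of about 17, are invented for illustration and are not the paper's.

```python
def ideal_yaw_rate(u, delta_f, m=1500.0, L=2.6, lf=1.2, lr=1.4,
                   Cf=80_000.0, Cr=80_000.0, mu=0.85, g=9.81):
    """Steady-state yaw-rate reference from the 2-DOF bicycle model,
    capped by the lateral friction limit mu*g/u."""
    K_us = (m / L) * (lr / Cf - lf / Cr)       # understeer gradient
    omega = u * delta_f / (L + K_us * u**2)    # steady-state bicycle gain
    cap = mu * g / u                           # friction-limited ceiling
    return max(-cap, min(cap, omega))

# 5 deg steering wheel (~0.087 rad) through an assumed ratio of ~17, at 25 m/s.
print(f"{ideal_yaw_rate(u=25.0, delta_f=0.087 / 17):.4f} rad/s")
```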
Under these assumptions and simulation conditions, Figure 5 shows the comparison of yaw rate tracking between the CAN network communication and the point-to-point connection. Compared with the point-to-point connection mode, the IVC system with the CAN network connection can still quickly and effectively track the ideal value under good network conditions, without changing the steady state of the control system. In the magnified part of the figure it can be clearly seen that the involvement of the network in the control system makes the yaw rate fluctuate with a small amplitude; the control overshoot increases from 3.1% for the point-to-point connection to 6% for the CAN network connection.

In order to investigate the influence of different network states on the control performance of the system, yaw rate tracking simulations were carried out for different network loads and packet loss rates. Figure 6 shows the yaw rate response of IVC-NCS at different packet loss rates; in these simulations, the interference nodes do not send messages. It can be seen that when the packet loss rate is lower than 20%, only the dynamic behavior of the system deteriorates: at packet loss rates of 5% and 20%, the corresponding overshoots of the system are about 9% and 12.5%, and within 0.3 s after the step input of the front wheel angle ends, the vehicle yaw rate stabilizes and tracks the ideal value. When the packet loss rate is less than 40%, the yaw rate of the vehicle can still finally be stabilized at the ideal value. When the packet loss rate is more than 40%, the yaw rate fluctuates markedly while tracking the ideal value; at 50%, the overshoot of the yaw rate increases rapidly to about 42% and the vehicle begins to sideslip. When the packet loss rate reaches 60%, the yaw rate tracking lags severely behind, and stable circular motion cannot be achieved. The analysis shows that when the packet loss rate is low, message transmission keeps a high success rate: the control nodes obtain the sensor information in time, so the controllers act quickly and the effect on the control performance of the system is small. With increasing packet loss rate, the control instructions can no longer be generated and executed in time, which makes the control cycle longer; the actuator states cannot be corrected in time, and the actuator inputs become too large or too small, which causes the control to fail.

Figure 7 shows the case in which the interference nodes send messages of the highest priority with a 4 ms period, so that the network load is close to 1. The long dashes are the response curve of the yaw rate of the CAN network without interference, when the network load is about 84%. The short dashed lines, dash-dotted lines, and bold dashed lines are the response curves of the yaw rate at different packet loss rates when the load is full.
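The qualitative effect of packet loss can be reproduced with a toy model: a sampled yaw-rate loop in which the control message is dropped with probability p each period and the actuator holds its last command. This is a schematic stand-in for the full co-simulation, with an invented first-order plant and gain; only the trend (degradation with increasing loss) is meaningful.

```python
import numpy as np

def rms_error(p_loss, steps=800, dt=0.004, k=20.0, target=0.25, seed=0):
    """First-order yaw-rate loop: each 4 ms period the control message is
    dropped with probability p_loss; the actuator then holds its last value."""
    rng = np.random.default_rng(seed)
    omega, u_held, err2 = 0.0, 0.0, 0.0
    for _ in range(steps):
        if rng.random() > p_loss:            # message delivered this period
            u_held = -k * (omega - target)   # fresh proportional command
        omega += u_held * dt                 # plant: d(omega)/dt = u
        err2 += (omega - target) ** 2
    return np.sqrt(err2 / steps)

for p in (0.0, 0.05, 0.2, 0.5, 0.6):
    print(f"loss {p:>4.0%}: RMS tracking error = {rms_error(p):.4f} rad/s")
```

The effective loop bandwidth scales roughly with k(1 − p), so the tracking error grows steadily with the loss rate, consistent with the degradation observed in the full simulation.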
Provided the communication requirements of the control system are met, the network-induced delay of the system is largest when the network load is close to 1. The calculations show that when the network load increases from 84% to nearly 100%, the overshoot increases from 6% to 7%; when the network load is 1 and the packet loss rate is 30%, the overshoot of the yaw rate is 15.7%. Therefore, although the network load increases, as long as the network can still meet the communication requirements of the control system, the network intervention has only a small effect on the quality of the dynamic control and does not change the steady-state characteristics of the system; the vehicle achieves stable circular motion within 0.3 s of the step input of the front wheel angle.

When the communication network is fully loaded and the packet loss rate is 50%, the vehicle cannot complete the intended circular motion; the yaw rate diverges and the vehicle goes out of control. The simulation results show that when the network bandwidth meets the needs of the control system, the effect of the network-induced delay on the control system is very small and negligible, while network packet loss affects the performance of the control system seriously: when the packet loss rate reaches 50%, the control performance deteriorates significantly.

Stability and Coordination Analysis

From the development of vehicle chassis control systems, the trend toward integration and networking is very obvious. The forms of the system control architecture and of the network architecture have different effects on the stability control of the chassis. In this paper, the design of the control system fully takes the stability of the chassis control performance into account.

Because ABS is the basis for the realization of ESP, and the latter needs independent control of the braking intensity of the four wheels, ABS is designed in an independent four-channel mode. As a controller associated with safety and real-time operation, the ABS actuators and controller usually adopt a directly connected manner, in order to reduce the information switching delay and ensure the safety and stability of the vehicle.

The control target of the ESP system is to stabilize the vehicle in extreme conditions by controlling the braking intensity of the four wheels, thereby achieving active safety. To improve the unstable state of the vehicle in extreme conditions, applying the braking force to the inner rear wheel during understeer or to the outer front wheel during oversteer can quickly and effectively improve stability. Taking into account that the ESP system has an obvious effect on vehicle stability in extreme conditions, this study assigns the supervision and control tasks and the ESP control calculations to one node.

For the performance of a networked control system, communication real-time performance is the most important factor affecting the control performance; it can be expressed and measured by the network delay. The existence of network delay reduces the control performance of the system and can even destabilize an otherwise stable control system.
Especially in extreme conditions, the vehicle state changes rapidly. When a large number of control instructions are lost, the correction required between the old and the new control instructions grows with the number of elapsed cycles, which increases the action range of the actuator. Therefore, excessive packet loss is extremely unfavorable for stability control. When the packet loss rate is below a certain value, only the dynamic characteristics of the system deteriorate and the stability of the system is unchanged; when the packet loss rate reaches the critical value, the control stability of the system approaches the critical state.

In addition, the simulation experiments show that the intervention of the CAN network did not significantly affect the stability of vehicle braking. When the CAN communication environment is good, the network-induced delay has little influence on the performance of the controller, which indicates that the ABS controller built in this research is strongly robust on a uniform road surface.

Using upper-level coordinated control over multiple independent control units of the vehicle chassis can effectively adjust the collaborative work of the control units, avoid conflicts between the controllers, and bring the vehicle to its optimal running state. The supervision mechanism designed here follows the hierarchical control principle and combines it with fuzzy control logic to supervise and coordinate ESP, AFS, and ARS: according to the stability factors, the upper supervisory controller judges the steady state of the vehicle, redistributes the control weights of the three subsystems, and coordinates the output of each subcontroller. The sensors needed by several systems are designed as independent network nodes, while within each subcontrol system the controllers, sensors, and actuators keep the traditional point-to-point connections, so as to obtain satisfactory real-time performance and coordination.

Conclusions

In this paper, the vehicle chassis control system is taken as the application of the CAN network, with the focus on how the network affects the control system. The ABS, ASC, and IVC systems are simulated. The main research contents and conclusions are as follows.

According to sliding mode control theory, the ESP and AFS subcontrollers are designed to track the ideal yaw rate. Based on the principles of hierarchical control and fuzzy control, a fuzzy controller is designed to monitor and coordinate ESP, AFS, and ARS, and the IVC system is constructed from the upper supervisory controller and the three subcontrol systems on the Simulink platform. Compared with the point-to-point connection, the system simulation of IVC-NCS shows that the integrated control system performs well.
According to the IVC based on the supervision mechanism, and combining the functions of each subsystem, the network topology of IVC is proposed and the IVC communication matrix based on CAN network communication is designed. With the common sensors and the subcontrollers as independent CAN network nodes, the effects of the network-induced delay and the packet loss rate on the control performance of the system are studied by simulation. The simulation results show that, as long as the network loses no frames, even a network traffic load close to 1 produces only a very small change in the dynamic quality of the system under network intervention. Network packet loss, in contrast, has a significant impact on the control performance of the system: when the packet loss rate is below 30%, only the dynamic performance of the system deteriorates and the stability of the system does not change; when the packet loss rate reaches 50%, the control stability of the system approaches the critical state and the vehicle becomes unstable.

Figure 2: (a) Membership functions of the stability factors SF_f and SF_r. (b) Control weights of AFS and ARS. (c) Control weight of ESP.
Figure 3: Network topology of IVC-NCS.
Figure 4: (a) Input curve of front wheel angle. (b) Response curve of yaw rate for the sine steering wheel angle input. (c) Response curve of yaw rate for the step steering wheel angle input.
Figure 5: Response curve of IVC yaw rate with the CAN network connection.
Figure 6: Response curves of yaw rate at different packet loss rates.
Figure 7: Response curves of yaw rate at different packet loss rates under full network load.
Table 1: Rules of the fuzzy controller of IVC.
Table 2: Communication matrix of IVC-NCS.
Hydrogen Bonding in Liquid Ammonia

The nature of hydrogen bonding in condensed ammonia phases, liquid and crystalline ammonia, has been a topic of much investigation. Here, we use quantum molecular dynamics simulations to investigate hydrogen bond structure and lifetimes in two ammonia phases: liquid ammonia and crystalline ammonia-I. Unlike liquid water, which has two covalently bonded hydrogens and two hydrogen bonds per oxygen atom, each nitrogen atom in liquid ammonia is found to have only one hydrogen bond, at 2.24 Å. The computed lifetime of the hydrogen bond is t ≅ 0.1 ps. In contrast to crystalline water-ice, we find that hydrogen bonding is practically nonexistent in crystalline ammonia-I.

Ammonia (NH3) is intermediate in character between the other two isoelectronic hydrides: water (H2O), which forms strongly hydrogen-bonded tetrahedral structures, and methane (CH4), which forms close-packed structures at low temperatures. These three materials contain the four fundamental elements, O, N, C, and H, that are the building blocks of amino acids. 1 Ammonia (NH3) forms a weakly hydrogen-bonded liquid. 2 It plays a critical role in biochemistry, especially in the structures and functions of proteins. 3 Water and ammonia are major components of the interiors of the giant icy planets and their satellites. Ammonia is a potentially important source of nitrogen in the solar system and plays a pivotal role in planetary chemistry. 4 Ammonia in its different forms, green and blue ammonia, is also expected to play an important role in the production of clean energy and in solutions to climate change.

The concept of hydrogen bonding has played an important role in the understanding of the structure of ice and of liquid water, as well as of other condensed systems. 5 However, the nature of the hydrogen bond and its lifetime in liquid ammonia have remained an enigma. 6,7 Within the concept of associated liquids, which are characterized by a fluctuating hydrogen bond network, H2O is pictured as a three-dimensional distorted tetrahedral network with a 1.8 Å hydrogen bond, while HF is thought to form one-dimensional chain-like structures with the shortest hydrogen bond, at 1.6 Å. In contrast, liquid ammonia possesses one of the weakest hydrogen bonds in nature. It is, of course, possible that NH3 simply behaves differently in the condensed phase, where environment-dependent many-body interactions are important. According to Pimentel and McClellan's criteria for hydrogen bonding, 8 a hydrogen bond is said to exist when (a) there is evidence of a bond and (b) this bond involves a hydrogen atom already covalently bonded to another atom; by these criteria, the condensed-phase evidence for NH3 hydrogen bonding is actually much less convincing than that available for HF and H2O.

Interest in studying the microscopic structure of liquid NH3 is based on the widely held belief that ammonia is, together with HF and H2O, one of the simplest hydrogen-bonded fluids. In fact, the situation is somewhat intriguing, because some macroscopic properties of ammonia indicate the presence of a hydrogen bond network in the liquid, while others behave like those of simple, nonassociated liquids. For instance, ammonia shows approximately a 10% increase in relative volume upon melting, whereas the change has the opposite sign for H2O: ice floats on water! A characteristic property of hydrogen-bonded fluids is that the range of temperature over which the liquid state exists is larger than in simple fluids.
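This liquid-range argument can be quantified with the ratio of the critical temperature to the triple-point temperature, computed below from standard literature values; the comparison with argon, a simple nonassociated fluid, is our addition for illustration.

```python
# Critical and triple-point temperatures in K (standard literature values).
phases = {"NH3": (405.4, 195.4), "H2O": (647.1, 273.16), "Ar": (150.7, 83.8)}
for name, (Tc, T3) in phases.items():
    print(f"{name}: Tc/T3 = {Tc / T3:.2f}")
# NH3 (~2.07) and H2O (~2.37) exceed the simple fluid Ar (~1.80), consistent
# with an extended liquid range for hydrogen-bonded fluids.
```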
The ratio, T c /T 3 , between the critical temperature, T c , and the triple-point temperature, T 3 , provides one measure of this liquid range. The local average structure of pure liquid ammonia has been studied by X-ray spectroscopy, 9 X-ray diffraction, 10,11 and neutron diffraction techniques. 12 Ricci et al. have performed neutron diffraction experiments with isotopic H/D substitution on liquid ammonia at T = 213 K and T = 273 K, corresponding to densities of 2.53 × 10 −2 molecules/Å 3 and 2.26 × 10 −2 molecules/Å 3 , respectively. 13 Unlike in H 2 O, where there is a clear hydrogen bonding peak in g OH (r) at ∼1.8 Å, no evidence of a clear peak in g NH (r) for hydrogen bonding was observed in liquid NH 3 . Ricci et al., based on their neutron experiment, concluded the following: "The present study of the microscopic structure of liquid ammonia has shown that the spatial arrangement of nitrogen atoms (NN correlations) indicates that no H-bonded network exists in the liquid at either of the thermodynamic states investigated." Theoretical studies of liquid ammonia by molecular dynamics (MD) simulation are numerous, 6,14,15 and several empirical interaction potential models have been developed. 16 DFT-based quantum MD allows for the investigation of systems without empirical interaction potentials. 17 In this scheme, the forces on the nuclei are computed from the electronic structure "on the fly" within the adiabatic approximation. DFT-based MD simulations of crystalline ammonia-I were carried out by Fortes et al. 18 Diraison et al. investigated liquid ammonia using Car−Parrinello MD 14 and concluded, "The probability distribution function for a NH 3 molecule to donate or to accept an HB is very similar. More precisely, for both values of the radial cutoff, about 50% of the molecules are found to accept 1 HB and donate 1 HB, to yield a total of 2 HB per molecule." On the basis of DFT-based MD simulations, Boese et al. 6 conclude that "Contrary to earlier conceptions the spatial arrangement of nitrogen atoms showed that no extended hydrogen bonded network exists in liquid ammonia. Nevertheless, some degree of hydrogen bonding was inferred from the temperature dependence of the N−H and H−H radial distribution functions. However, the hydrogen bond interaction in liquid ammonia proved to be much weaker than that in water and no clear hydrogen bond peak was observed in either N−H or H−H correlations, unlike the case of water." The question then arises, "Does ammonia hydrogen-bond?", as was asked in a 1987 Science paper by the distinguished Harvard theoretical chemist William Klemperer and his collaborators. 19 They concluded, "If NH 3 is to be classified as a hydrogen-bond donor, it must be considered a very poor donor, forming weaker, longer, and less linear hydrogen bonds than even HCCH, CF 3 H, and H 2 S." Given the discrepancies in the molecular-level understanding of the structure and complexity of the hydrogen bond network in liquid NH 3 , it is important to investigate the nature and structure of the hydrogen bond in liquid ammonia and to determine its lifetime. Figure 1 shows our computed g(r) for N−N, N−H, and H−H correlations compared with neutron diffraction results, 19 demonstrating good agreement in peak positions and widths. Before proceeding further, we emphasize an important point regarding g N−H (r).
There are two clear peaks in g O−H (r) in water, the first at 0.95 Å with a coordination of 2, reflecting the covalently bonded H atoms in H 2 O, and a second peak at 1.75 Å, also with a coordination of 2, reflecting the hydrogen bond in liquid water. 20,21 In liquid ammonia, there is no such clear peak for hydrogen-bonded N···H in g N−H (r). Our goal is to address this unresolved matter and establish the nature of hydrogen bonding in solid and liquid ammonia. Following Pimentel and McClellan's criteria, we determine the existence of a hydrogen bond between an H and N atom by considering the electron charge density overlap. 22 We examine the nature of the H-bond in crystalline NH 3 on the basis of charge density overlap rather than the simple criteria of bond distance and coordination numbers. It is believed that weak hydrogen bonding between neighboring ammonia molecules results in a pseudo-close-packed arrangement in the crystalline phase. The cubic unit cell of ammonia-I contains four orientationally ordered ammonia molecules on symmetry sites C 3v . The dipole moments of the ammonia molecules are directed toward the crystallographic [111] directions. From the crystalline geometry it appears that each molecule both accepts and donates three hydrogen bonds, each of which deviates significantly from the almost perfectly linear hydrogen bonds seen in water−ice. The crystal structure of ammonia has been interpreted as hydrogen bonded, yet the N−H···N bond angle is not 180° but only 159.3°. 19 This is a serious problem, since with three hydrogen atoms on each subunit it is difficult to conceive of any reasonable crystal structure without some hydrogen atoms pointed in the general direction of a nitrogen atom. Furthermore, the distribution of angles has a full width at half-maximum of nearly 40°. Thus, it is not obvious that the crystal structure indicates that NH 3 is an effective hydrogen-bond donor. However, the traditional view has been that the condensed-phase interactions of NH 3 are dominated by hydrogen bonding. To understand H-bonds in NH 3 , it is important to first understand the structure and coordination around NH 3 molecules in the crystalline and liquid phases. Figure 2 shows the partial pair correlations for N−H pairs in crystalline NH 3 (Figure 2a) and in the liquid (Figure 2b), along with the coordination around N atoms. There are two important distances in the crystalline NH 3 g(r) that affect the local structure of NH 3 molecules. The first peak in g N−H (r) corresponds to the covalent N−H bond at 1 Å, which gives a coordination number of 3. The second peak at 2.4 Å corresponds to the distance between N and the nearest H atoms belonging to neighboring NH 3 molecules. The coordination jumps from 3 to 6 at this distance, indicating that three other NH 3 molecules are equidistant from the central N atom in the first coordination shell at 2.4 Å, as shown in Figure 2f. In the liquid phase, there is no significant change in the covalent bonding, and the coordination and local structure seen in g N−H (r) up to 1.5 Å are largely intact. However, upon melting, crystalline NH 3 undergoes a 10% volume expansion, which results in a reorganization of the second shell (coordination of 3 in the crystal) beyond the covalently bonded hydrogen, resulting in a disordered structure as seen in Figure 2b. It is this reorganization of the second shell, which introduces structure even below 2.4 Å, that makes it difficult to classify the nature of the H-bond in NH 3 .
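To make the link between g(r) peaks and the quoted coordination numbers concrete, the sketch below integrates a model pair correlation function; the Gaussian peak shape, grid, and peak parameters are illustrative stand-ins rather than the paper's data, and only the liquid density is taken from the text.

```python
# A minimal sketch: the running coordination number
# n(r) = 4*pi*rho * integral_0^r g(r') r'^2 dr'
# from a model g_NH(r). The Gaussian peak below is a stand-in for the
# covalent N-H peak at ~1.0 A with coordination 3 described in the text.
import numpy as np

rho = 2.53e-2                      # molecules/A^3, liquid NH3 at 213 K (from the text)
r = np.linspace(0.01, 2.0, 2000)   # radial grid in Angstrom

r0, sigma = 1.0, 0.05              # peak position and width (illustrative)
amp = 3.0 / (4*np.pi*rho*r0**2 * sigma*np.sqrt(2*np.pi))   # normalizes the peak to 3 atoms
g = amp * np.exp(-(r - r0)**2 / (2*sigma**2))

# cumulative trapezoidal integration gives the running coordination number
integrand = 4*np.pi*rho * g * r**2
n = np.concatenate(([0.0], np.cumsum(0.5*(integrand[1:] + integrand[:-1]) * np.diff(r))))

print(f"n(r = 1.5 A) ~ {n[np.searchsorted(r, 1.5)]:.2f}")  # ~3, the covalent N-H coordination
```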
This is in contrast to the case of other liquids like H 2 O, where, due to the anomalous but small volume contraction upon melting, the structure of the hydrogen bond is preserved, and the same coordination and symmetry are maintained. [Figure 2 caption, panels h−k: We define this as the threshold for the existence of a hydrogen bond in NH 3 . The vast majority of molecular configurations (h, i) in QMD trajectories in liquid NH 3 are characterized by a hydrogen-bonded N···H contact at distances below 1.9 Å. A few configurations (j, k) are found in QMD configurations where there is a single H atom and two N atoms from two other NH 3 molecules at distances ranging from 1.9 to 2.4 Å. In these cases, the charge density overlap between the H and the two neighboring N exceeds our threshold of 0.012 electrons/Å 3 , and these configurations are considered to have a bifurcated hydrogen bond.] To investigate how the three intermolecular hydrogens belonging to the second peak in the crystalline g N−H (r) are reorganized into the disordered peak in Figure 2b, we have plotted their pair distributions separately in Figure 2c−e. It is easy to notice that the nearest intermolecular hydrogen atom, H1, approaches closer than 2.4 Å, while the farthest of the three intermolecular H atoms goes beyond 2.8 Å. The second-nearest H atom, H2, remains approximately at 2.4 Å, the same distance as in the crystal. To identify whether these molecular configurations and intermolecular distances correspond to the existence of hydrogen bonds, we compute and plot electron charge density isosurfaces for crystalline and liquid configurations. The computed charge density in crystalline NH 3 in Figure 2g shows that the charge density overlap in the intermolecular region is less than 0.012 electron Å −3 , which is 1/32 of 0.384 electron Å −3 , the charge density value at the center of the N−H covalent bond. We define this value of charge density, which corresponds to binding energies 1000 times weaker than that of a covalent bond, as the threshold for the existence of an H-bond. Using this definition, we notice that H-bonding in crystalline NH 3 is practically nonexistent. In the liquid phase, the second shell reorganization brings the nearest intermolecular hydrogen closer to the N atom, at distances down to 1.8 Å, while simultaneously moving the second- and third-nearest intermolecular hydrogen atoms further away. These liquid configurations demonstrate a strong (>0.012 electron Å −3 ) charge density overlap between the nearest-neighbor N−H pair and negligible overlap between the second- and third-nearest neighbor N−H pairs. Therefore, the vast majority of liquid ammonia configurations contain only one hydrogen bond for each NH 3 molecule. Figure 2d shows that, due to the reorganization of the second shell, the second-nearest N−H distances are on average approximately similar to those in the crystal; however, the finite spread in the distribution leads to the presence of some second-nearest N−H pairs at distances as low as 1.9 Å, which is comparable to the first-nearest N−H distances in these configurations. Figure 2j and Figure 2k show computed charge density isosurfaces for configurations with comparable first- and second-nearest intermolecular N−H distances.
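The charge-density criterion above reduces to a simple classification rule; the following toy sketch shows how it would be applied. The overlap values are hypothetical, since the actual QMD densities are not reproduced here; only the two reference densities come from the text.

```python
# Toy illustration of the charge-density criterion: an N...H contact counts
# as a hydrogen bond when the intermolecular charge-density overlap exceeds
# 1/32 of the covalent N-H bond-center density.
COVALENT_RHO = 0.384            # electron/A^3 at the N-H covalent bond center (from the text)
THRESHOLD = COVALENT_RHO / 32   # = 0.012 electron/A^3

pair_overlaps = {               # hypothetical N...H contacts: distance (A) -> overlap density
    1.8: 0.020,                 # nearest intermolecular H in the liquid
    2.4: 0.009,                 # second shell, crystal-like distance
    2.8: 0.003,                 # farthest of the three intermolecular H atoms
}

for dist, overlap in pair_overlaps.items():
    bonded = overlap > THRESHOLD
    print(f"N...H at {dist} A: overlap {overlap:.3f} e/A^3 -> "
          f"{'H-bond' if bonded else 'no H-bond'}")
```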
In both of these configurations, the charge overlap in the intermolecular region between both pairs exceeds the threshold of 0.012 electrons Å −3 and reveals the existence of transient bifurcated hydrogen bonds, implying that a central NH 3 molecule is simultaneously H-bonded to its two nearest neighboring NH 3 molecules. Beyond this unique structure of the H-bond network, several aspects of hydrogen bond dynamics in these systems have also been investigated. 23 The first inelastic neutron scattering experiments on liquid and solid ammonia were carried out in 1974 by Thaper et al. 24 Due to limitations of neutron flux and limited resolution, many features in the density of states were not resolved. Another effort to measure the density of states of solid ammonia at 30, 50, 90, and 140 K and liquid ammonia at 210 K was made by Carpenter et al. 12 at the Intense Pulsed Neutron Source at Argonne National Lab. They were able to resolve some features in the density of states; however, the background in the data is quite large. Klein and co-workers have used quantum molecular dynamics within the Car−Parrinello scheme to model the structural dynamics of singlet and triplet bipolarons in NH 3 and to identify a novel leapfrog mechanism for bipolaronic diffusion. 21 Quasi-elastic X-ray scattering experiments have been carried out on liquid ammonia to determine diffusion constants and estimate relaxation times. Inelastic X-ray scattering experiments on high-pressure ammonia liquids in the THz frequency regime revealed that the structural relaxation dynamics of liquid NH 3 is independent of temperature in the range of 220−298 K, in contrast to what is observed for liquid HF and H 2 O systems, indicating a marked difference in the connectivity of the H-bond network in NH 3 . 10 We characterize dynamics in liquid ammonia in our QMD simulations by computing H-bond lifetimes using the population time correlation function C HB , based on a geometric definition of the hydrogen bond for liquid ammonia,

C_HB(t) = (1 / N_HB(t = 0)) Σ_{i<j}^{N} h_ij(0) h_ij(t).

Here, N is the number of atoms, h ij (t) is unity if two ammonia molecules are hydrogen-bonded at time t and zero otherwise, and N HB (t = 0) is the number of hydrogen bonds at t = 0. Figure 3 shows the C HB function for QMD at two temperatures, T = 213 K and T = 233 K. Two ammonia molecules are assumed to be hydrogen bonded if the intermolecular N−H distance is less than 2.4 Å. There is no direct method to experimentally determine the H-bond lifetime. 25 For example, vibrational relaxation times of 0.74 ps have been reported for water, 26 whereas observed rotational relaxation times range from 0.6 ps 27 to 2.1 ps. 28 We have examined the rotational relaxation time in liquid ammonia using the characteristic orientational vectors in an NH 3 molecule. The orientational correlation function C α , α ∈ {1, 2, 3}, is defined as

C_α(t) = ⟨e_α(t) · e_α(0)⟩.

Here e 1 is the unit vector pointing in the direction of the molecular dipole moment, based on the atomic geometry and empirical charges assigned to each atom position; e 2 is the unit vector pointing from N to H, i.e., along the N−H covalent bond, in an NH 3 molecule; similarly, e 3 is the unit vector between two H atoms. The relaxation time is obtained by an exponential fit, C α (t) = exp(−t/τ α ), where τ α is the relaxation time for the α-th orientational vector. The top of Table 1 summarizes the hydrogen bond lifetimes obtained from the exponential fits shown in Figure 3, and the bottom half of Table 1 summarizes the obtained rotational relaxation times. Figure 4a shows the velocity autocorrelation function for deuterated ammonia at 213 K.
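As an illustration of how C_HB(t) and the lifetime fit could be evaluated in practice, here is a minimal Python sketch; the distance trajectory, timestep, and fit window are invented for demonstration and are not the paper's QMD data.

```python
# A minimal sketch: population correlation C_HB(t) = sum_ij h_ij(0) h_ij(t) / N_HB(0),
# with h_ij = 1 when the N...H distance is below the 2.4 A geometric cutoff.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pairs, cutoff = 2000, 50, 2.4
# Hypothetical distance trajectory (A); real input would come from QMD frames.
dists = 2.2 + 0.4 * np.cumsum(rng.normal(0, 0.05, (n_frames, n_pairs)), axis=0).clip(-1, 1)

h = (dists < cutoff).astype(float)        # h_ij(t) indicator, frames x pairs
n_hb0 = h[0].sum()                        # N_HB at t = 0

max_lag = 500
c_hb = np.array([(h[0] * h[t]).sum() / n_hb0 for t in range(max_lag)])

# lifetime from an exponential fit C_HB(t) = exp(-t/tau) on the early decay
dt = 0.002                                # ps per frame (assumed timestep)
t = np.arange(max_lag) * dt
mask = c_hb > 0.05
tau = -1.0 / np.polyfit(t[mask], np.log(c_hb[mask]), 1)[0]
print(f"H-bond lifetime tau ~ {tau:.2f} ps")
```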
This is defined as

Z(t) = ⟨v_i(t) · v_i(0)⟩ / ⟨v_i(0) · v_i(0)⟩,

where v i (t) denotes the velocity of the ith atom at time t and the brackets denote averages over ensembles and atoms. The current−current correlation function for deuterated ammonia is shown in Figure 4b. It is defined as

C_JJ(t) = ⟨J(t) · J(0)⟩,

where the charge current is given by J(t) = ∑ i Z i e v i (t). The vibrational density of states is determined by the Fourier transform of the corresponding velocity autocorrelation function. Figure 4c shows the vibrational density of states for deuterated ammonia at 213 K. The frequency-dependent ionic conductivity can be calculated from the Fourier transform of the current−current correlation function,

σ(ω) = (1 / 3 V k_B T) ∫_0^∞ dt e^{iωt} ⟨J(t) · J(0)⟩,

where V is the volume of the system and k B is the Boltzmann constant. Figure 4d shows the normalized frequency-dependent ionic conductivities for deuterated ammonia at 213 K. Peak positions from IR experimental data are shown in black, and computed values are in red. 29 Vibrational modes from the total vibrational density of states that obey dipole selection rules are also visible in the computed IR spectrum in Figure 4d. We have used DFT-SCAN quantum molecular dynamics simulations to investigate the nature of hydrogen bonding in crystalline and liquid ammonia. In contrast to the case of water, with two stable hydrogen bonds per oxygen atom of each water molecule, liquid ammonia shows a weaker hydrogen bonding network, with only one hydrogen bond per nitrogen atom of each molecule. Hydrogen bonding is found to be practically nonexistent in crystalline ammonia, which, although denser than the liquid phase, has longer intermolecular bonding distances.
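The chain from velocities to the vibrational density of states can be sketched in a few lines; the synthetic velocity data, timestep, and windowing choices below are assumptions for illustration, not the simulation's actual inputs.

```python
# A minimal sketch: vibrational density of states as the Fourier transform of
# the normalized velocity autocorrelation function Z(t) = <v(t).v(0)>/<v(0).v(0)>.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_atoms, dt = 2048, 16, 1.0e-3    # dt in ps (assumed timestep)

t = np.arange(n_frames) * dt
# hypothetical velocities: a ~10 THz oscillation plus thermal noise
v = (np.cos(2*np.pi*10.0*t)[:, None, None] * rng.normal(size=(1, n_atoms, 3))
     + 0.3 * rng.normal(size=(n_frames, n_atoms, 3)))

max_lag = 512
# average v(t0).v(t0+lag) over time origins and atoms for each lag
z = np.array([(v[:n_frames - lag or None] * v[lag:]).sum(axis=2).mean()
              for lag in range(max_lag)])
z /= z[0]                                   # normalize so that Z(0) = 1

# one-sided Fourier transform with a Hann window to suppress truncation ripple
window = np.hanning(2*max_lag)[max_lag:]
vdos = np.abs(np.fft.rfft(z * window))
freq = np.fft.rfftfreq(max_lag, d=dt)       # THz, since dt is in ps

print(f"dominant mode at ~{freq[vdos.argmax()]:.1f} THz")
```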
THE EFFECTS OF USING LIQUID ORGANIC NPK FERTILIZER FOR RICE PLANT GROWTH AND PRODUCTION (Oriza sativa L.)

ABSTRACT

The nutrients normally needed by plants cannot be separated from the three main nutrients, i.e., nitrogen, phosphorus, and potassium (NPK). NPK compound fertilizer is an inorganic fertilizer. The advantages of using NPK compound fertilizers are: 1) The nutrient content is the same as a single fertilizer; 2) It can be used in place of a single fertilizer; 3) The use of compound fertilizer is simple; and 4) Transportation and storage of compound fertilizer saves time, space, and cost (Susanto & Amirta, 2020). Liquid organic fertilizer (LOF) is a solution derived from the decomposition of organic materials, such as plant matter, animal manure, ash, and water (Mangera & Ekowati, 2022; Raden, Fathillah, Fadli, & Suyadi, 2017). Sources of raw materials for organic fertilizer are available in abundance and are usually in the form of waste from households, restaurants, markets, agricultural markets, livestock, and other organic waste (Devianti, Yusmanizar, Syakur, Munawar, & Yunus, 2021). Liquid organic fertilizers contain low levels of macronutrients but contain sufficient micronutrients, which are indispensable for plant growth and development. Fertilization requires care to ensure that plants receive the required dose and concentration. The fertilizer used should not be lower than, or exceed, the required dose, as this could hinder plant growth and development (Bindraban, Dimkpa, Nagarajan, Roy, & Rabbinge, 2015). The advantages of LOF are that it is a fast way to overcome nutrient deficiencies, it does not cause nutrient leaching, and it is able to provide nutrients quickly (Phibunwatthanawong & Riddech, 2019). Therefore, this research was conducted on the effects of NPK and LOF on the growth and production of rice plants, with the aim of determining the appropriate concentration of NPK for optimal growth and yield of rice plants. Research Study and Time This research was conducted in Mertajati Indah Village, Sausu Subdistrict, Parigi Mautong Regency, Central Sulawesi, Indonesia, from August to December 2019. The items used were a tractor, hoe, meter, scale, measuring cup, and blender. The materials used were Cigeulis rice seeds, Phonska NPK fertilizer (15:15:15), and liquid organic fertilizer (LOF). Research Design The study used a two-factor randomized block design (RBD). The first factor was the dose of NPK, which consisted of three levels, i.e., NPK 200 kg ha-1, equivalent to 120 g plot-1 (P1); NPK 400 kg ha-1, equivalent to 240 g plot-1 (P2); and NPK 600 kg ha-1, equivalent to 360 g plot-1 (P3). The second factor was the concentration of LOF, which also consisted of three levels, i.e., without LOF (K0), 2.5% LOF (K1), and 5.0% LOF (K2). Therefore, there were nine treatments in total, and each treatment combination was replicated three times in blocks. Research Implementation Tillage was carried out twice and harrowing was carried out once; then plots of 300 cm x 200 cm were made. Preparation of the liquid organic fertilizer was carried out by combining the ingredients of the formulation (papaya leaves, moss, rice washing water, granulated sugar, brown sugar, Yakult, and water). Next, the papaya leaves and moss were blended, and the granulated sugar and brown sugar were heated until they became liquid. The solutions were mixed until evenly distributed, then slowly filtered. The LOF was incubated for a week, after which the liquid organic fertilizer was ready to be used.
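As a quick arithmetic check of the stated dose equivalences (assuming the 300 cm x 200 cm = 6 m2 plots described above):

```python
# Converting a per-hectare NPK dose to grams per plot for the stated plot size.
PLOT_AREA_M2 = 3.0 * 2.0          # 300 cm x 200 cm = 6 m^2
M2_PER_HA = 10_000

for dose_kg_ha in (200, 400, 600):
    grams_per_plot = dose_kg_ha * 1000 / M2_PER_HA * PLOT_AREA_M2
    print(f"NPK {dose_kg_ha} kg/ha -> {grams_per_plot:.0f} g/plot")
# -> 120, 240 and 360 g/plot, matching the P1-P3 levels above
```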
Planting was performed using a direct seeding system (called Tabela) with the Jajar Legowo 2:1 cropping pattern, a spacing of 20 cm x 20 cm, and a population of 283,000 clumps per hectare. The application of NPK fertilizer was carried out in two doses, with 50% applied at 21 days after planting (DAP) and 50% at 42 DAP. The LOF was applied twice at the concentration specified by the treatment. The first application was at 35 DAP with a dose of 200 L ha-1 (equivalent to 120 mL plot-1), and the second application was given at 49 DAP with a dose of 300 L ha-1 (equivalent to 180 mL plot-1). To determine the effects of the treatment, observations were recorded for plant height, number of tillers, age of flowering, number of panicles, and dry grain yield. Experimental Soil Characteristics The results revealed the initial physical properties of the experimental soil; it had a sandy loam texture with a distribution of 53.04% sand, 31.71% silt, and 15.25% clay, and a bulk density of 1.44 g cm-3. In terms of chemical properties, it had a slightly acidic pH level, a moderate organic carbon (C-organic) content (2.15%), a moderate nitrogen (N) content (0.24%), a moderate C/N ratio (11.94), moderate P2O5 (9.43 ppm), a moderate potassium (K) content (0.33 meq/100 g), a moderate cation exchange capacity (CEC) (23.03 meq/100 g), and moderate base saturation (49.1%). The C-organic content and CEC, which were classified as moderate, indicate that the experimental soil had moderate levels of organic matter. The pH value of the soil describes the level of soil acidity, which greatly affects the activity of micro-organisms in the soil and the uptake of plant nutrients. Parameters of Rice Plant Height The analysis of variance (ANOVA) results show that the NPK and LOF administration had an effect on plant height, while the interaction between the two treatments had no effect; a sketch of this two-factor analysis is given after the next paragraph. The median test results (see Table 1) show that fertilization using 600 kg ha-1 of NPK resulted in taller plants, which was different from the 200 kg ha-1 NPK dose, but not different from the 400 kg ha-1 NPK dose. This is presumably because the administration of 600 kg ha-1 of NPK fulfilled the plants' nutrient needs. Fertilizer application can increase plant growth because it increases the availability of N, P, and K. Table 1 also shows that the administration of 5.0% LOF produced taller plants, which was different from that without LOF, but not different from that with a LOF concentration of 2.5%. This is apparently because the administration of 5.0% LOF fulfilled the plants' nutrient needs. The increase in plant height is influenced by the LOF concentration applied, as well as by a dense population, which can cause competition for sunlight. Number of Tillers The ANOVA results revealed that the administration of LOF had an effect on the number of tillers, while the NPK dose and the interaction between the two treatments had no effect (see Table 2). The median test results shown in Table 2 indicate that the administration of LOF with a concentration of 5.0% produced more tillers, which was different from that without LOF, but not different from that with a LOF concentration of 2.5%. This is presumably because the administration of 5.0% LOF fulfilled the plants' nutrient needs. The application of LOF tends to affect the number of tillers formed, based on the adequacy of light intensity.
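For readers who want to reproduce this kind of analysis, a minimal sketch of the two-factor RBD ANOVA follows; the plant-height numbers are simulated placeholders, not the study's data, and the model formula is one standard way to encode the NPK and LOF main effects, their interaction, and blocks.

```python
# A minimal sketch of the two-factor randomized block design analysis:
# NPK dose x LOF concentration with three blocks, via statsmodels OLS ANOVA.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
rows = []
for npk in ("P1", "P2", "P3"):                # 200, 400, 600 kg/ha
    for lof in ("K0", "K1", "K2"):            # 0, 2.5, 5.0 % LOF
        for block in (1, 2, 3):
            # placeholder response with additive NPK and LOF effects plus noise
            height = (90 + 5*("P1", "P2", "P3").index(npk)
                         + 4*("K0", "K1", "K2").index(lof) + rng.normal(0, 2))
            rows.append(dict(npk=npk, lof=lof, block=str(block), height=height))
df = pd.DataFrame(rows)

model = ols("height ~ C(npk) + C(lof) + C(npk):C(lof) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))        # main effects, interaction, block
```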
Meanwhile, the administration of the 2.5% LOF did not significantly increase the number of tillers because the amount of nutrients was insufficient, so growth was stunted. The amount of nutrients required by the plant is closely related to the needs of the plant to grow optimally. If the amount of nutrients required is not available, then growth will be stunted. If the amount of available nutrients is higher than what is required by the plants, this can be described as a condition of luxury consumption. Flowering Age The ANOVA results show that the LOF administration had an effect on the flowering age, while the NPK dose and the interaction between the two treatments had no effect. The flowering age results are presented in Table 2, and they show that the administration of LOF with a concentration of 5.0% resulted in faster flowering of plants, which was different from that without LOF, but not different from that with a LOF concentration of 2.5%. This is presumably because the application of LOF can accelerate the flowering age of rice plants. In addition, the direct seed planting system generated good results for flowering age, with a time difference of 3-10 days. The direct seed planting system can accelerate the flowering and harvesting ages by nine days compared to conventional planting patterns (Bahua & Gubali, 2020; Sahardi, Nappu, Idaryani, Nurlaila, & Syam, 2021). The direct seed planting system showed better prospects and can increase the production of harvested dry grain, eliminate the negative impacts of the nursery on grain quality, and shorten the life of the plant (Bahua & Gubali, 2020). Therefore, the direct seed planting system increases the opportunity to enhance cropping intensity, which improves productivity and farmers' income, both through increased production per unit area and through production cost savings, such as labor and fertilizer costs, and opportunities for optimizing the use of land resources. Panicles The ANOVA results show that the NPK dose and LOF concentration had an effect on the number of panicles per clump, while the interaction between the two treatments had no effect. The average numbers of panicles per clump are presented in Table 3. The median test results in Table 3 show that the administration of 400 kg ha-1 of NPK produced a higher number of panicles per clump, which was different from 200 kg ha-1 of NPK, but not different from 600 kg ha-1 of NPK. This is presumably because the administration of 400 kg ha-1 of NPK fulfilled the plants' nutrient needs, especially N and P, because these nutrients play an important role in the formation of tillers. Table 3 also shows that the administration of 5.0% LOF resulted in a higher number of panicles per clump in contrast to the other treatments, with the higher LOF concentration producing more panicles. This is presumably because the administration of 5.0% LOF fulfilled the plants' nutrient needs. This is in agreement with Sutardi, Gunawan, Winarti, and Cahyaningrum (2021), who stated that the provision of LOF in each treatment has an effect on the number of panicles. Panicle length. The ANOVA results show that the LOF concentration had an effect on the panicle length, while the NPK dose and the interaction between the two treatments had no effect. The average panicle lengths are presented in Table 3.
The median test results show that the administration of LOF with a concentration of 5.0% resulted in longer panicles, which was different from that without LOF, but not different from that with a LOF concentration of 2.5%. In addition to the concentration of LOF, panicle length is more likely to be influenced by genetic and environmental factors. Spacing is one way to create environmental conditions and nutrient availability that are even for each individual plant. Available nutrients in sufficient quantities allow plants to grow and produce maximally. A low availability of nutrients in the production phase causes the inhibition of several plant metabolic processes, which decreases plant yield, inhibits flower formation, affects the panicle length, and decreases the number of seeds (Wei et al., 2017). As a single factor, the LOF treatment had a significant effect. Number of grains per panicle. The ANOVA results show that the NPK and LOF administration had an effect on the number of grains per panicle, while the interaction between the two treatments had no effect. The average number of grains per panicle is presented in Table 3. The median test results show that the administration of 600 kg ha-1 of NPK resulted in more grains per panicle, which was different from that of 200 kg ha-1 of NPK, but not different from that of 400 kg ha-1 of NPK. This is presumably because the administration of 600 kg ha-1 of NPK fulfilled the plants' nutrient needs. Nitrogen plays an important role as a constituent of the proteins that will be used by plants, including for increasing the number of panicles (Ju, Liu, & Sun, 2021; Zhou et al., 2017). Panicle length is strongly influenced by the panicle initiation period, which is a critical period for the plant. A lack of nutrients and water during the initiation period can cause panicle formation to be poor, and this affects the ovules that will form. The number of grains per panicle is determined in the reproductive phase. Table 3 also shows that the administration of LOF with a concentration of 2.5% resulted in more grains per panicle, which was different from that without LOF, but not different from that with a LOF concentration of 5.0%. This is presumably because the administration of 2.5% LOF met the plants' nutrient needs. Percentage of empty grain. The ANOVA results showed that LOF administration had an effect on the percentage of empty grain, while the NPK dose and the interaction between the two treatments had no effect. The average percentage of empty grain is presented in Table 3. The median test results show that the administration of LOF decreased the percentage of empty grain, with a higher concentration of LOF resulting in a lower percentage of empty grain. The administration of LOF with a concentration of 5.0% resulted in a lower percentage of empty grain, which was different from that without LOF, but not different from that with a LOF concentration of 2.5%. This is presumably because the administration of 2.5% LOF fulfilled the plants' nutrient needs. Empty grain was determined by the number of tillers that grew before reaching the primordial phase. K deficiency can cause a high amount of empty grain and incomplete grain filling, and plant growth is closely related to the balance of required nutrients (Susanto & Sirappa, 2015). Weight of 1000 grain seeds.
The ANOVA results showed that the administration of LOF had an effect on the weight of 1000 grain seeds, while the NPK dose and the interaction between the two treatments had no effect. The average weight of 1000 grain seeds is presented in Table 3. The median test results show that the administration of LOF with a concentration of 5.0% resulted in a higher pithy grain weight, which was different from that without LOF, but not different from that of the LOF with the 2.5% concentration. The higher LOF concentration produced a higher pithy grain weight. The pith of the grain is largely determined by the availability of nutrients and the plant's physiological processes. Dry Grain Yield The ANOVA results show that the administration of NPK and LOF had an effect on the dry grain yield, while the interaction between the two treatments had no effect. The average dry grain yield is presented in Table 4. The median test results (see Table 4) show that the administration of 600 kg ha-1 of NPK resulted in heavier dry grain yields, which was different from that of 200 kg ha-1, but not different from that of 400 kg ha-1. This is presumably because the administration of 400 kg ha-1 of NPK already fulfilled the nutrient requirements of the plants. The application of NPK increased the dry grain weight because the NPK contents can meet the P and K nutrient needs of the plant, resulting in optimal grain production. Table 4 also shows that the administration of LOF with a concentration of 5.0% resulted in heavier dry grain yields, which was different from the other treatments. This is presumably because the administration of LOF with a concentration of 5.0% fulfilled the plants' nutrient needs. The application of LOF stimulated plant growth, rooting, and fruiting, and it reduced flower and fruit loss, so crop yield increased (Masniawati, Suhadiyah, Tambaru, & Sulastri, 2017). The growth and yield of plants are strongly influenced by many factors, i.e., genetic or inherited traits, such as plant age, plant morphology, yield, capacity to store food reserves, and resistance to disease, among others (Ata-Ul-Karim et al., 2022; Liu, Zhou, Li, & Xin, 2017; Oladosu et al., 2014). Meanwhile, external factors are environmental, such as climate, soil, and biotic factors (Peng et al., 2004; Song et al., 2022). Differences in growth and yield are affected by one or more of these factors. Differences in genetic makeup (genotype) are one of the factors causing diversity in plant appearance, and differences in genotype will always occur, even if the plant material used is derived from the same plant species. CONCLUSION The effects of LOF administration were the same for each dose of NPK, and increasing the NPK dose requires an increase in the LOF concentration. The application of 400 kg ha-1 of NPK fertilizer resulted in better growth and yield, indicated by taller plants, more panicles per clump (19.53 panicles per clump), a higher number of grains per panicle (136.63 grains per panicle), and higher dry grain production (7.69 t ha-1). The application of 2.5% LOF achieved better growth and yield, characterized by taller plants, a higher number of tillers, faster flowering, more panicles per clump (19.72 panicles per clump), a higher number of grains per panicle, a lower percentage of empty grain, increased pithy grain weight (30.26 g per 1000 grains), and higher yields (7.79 t ha-1). Funding: This study received no specific financial support. Competing Interests: The authors declare that they have no competing interests.
Authors' Contributions: All authors contributed equally to the conception and design of the study. Acknowledgement: All authors thank the Faculty of Agriculture, Tadulako University, for providing the best facilities for this research. Views and opinions expressed in this study are those of the authors; the Asian Journal of Agriculture and Rural Development shall not be responsible or answerable for any loss, damage, or liability, etc., caused in relation to or arising out of the use of the content.
Teleneurology based management of infantile spasms during COVID-19 pandemic: A consensus report by the South Asia Allied West syndrome research group

Highlights • A multipronged teleneurology-based approach to the management of infantile spasms is needed. • Reduction of treatment lag and early initiation of standard therapy are crucial. • Efforts should be made to improve the sensitivity and specificity of diagnosis. • Parents need constant motivation for monitoring therapeutic response, adverse effects, and infections. Introduction The coronavirus disease-2019 (COVID-19) pandemic has significantly impacted the customary delivery of healthcare. Although children mostly remain asymptomatic, infants are particularly susceptible [1]. Negative consequences of nationwide lockdowns, travel restrictions, and fear among patient families have affected the care of children with epilepsies such as infantile spasms (IS; including West syndrome) [2]. Young age, comorbidities, the need for hormonal therapy, and frequent healthcare visits are problems specific to children with IS [3]. Although the exact numbers are not known, the burden of IS and the treatment gap in developing countries are expected to be high, considering the rampant causes of acquired brain injury, such as hypoxic-ischemic brain injury, infections, etc., and relatively underdeveloped health infrastructure. Their management challenges are also distinct, e.g., a preponderance of structural etiology, significant lead-time-to-treatment, limited access to pediatric neurologists and specific investigations including electroencephalography (EEG), and problems with the availability and licensing of first-line drugs [adrenocorticotrophic hormone (ACTH)] [4,5]. Moreover, the probable escalation of treatment lag (a significant predictor of outcome) due to prevalent travel restrictions is expected to adversely affect the outcome in children with IS [6]. Hence, the management protocols of the developed countries may not be entirely applicable to the developing nations. Teleneurology is a well-established tool for epilepsy management, especially when face-to-face consultations are difficult (e.g., a pandemic involving difficult access to care) [7]. In developing countries, the use of teleneurology is even more desirable during the COVID-19 pandemic to decentralize patient care to community health services, promote healthcare access, and reduce treatment lag and cost of care [8]. However, considering the adverse effects associated with first-line therapeutic options for IS, reduced facilities, and a lower level of parental understanding in developing countries, a higher degree of vigilance is required during teleconsultations for IS. Therefore, a need for a simplified protocol for the management of IS via teleneurology exists. Methods The South Asia Allied West Syndrome Research Group developed an algorithm for teleneurology-based care of children with IS in developing countries. The initial research group had evaluated the management practices for West syndrome in South Asia and subsequently developed a viewpoint statement on the management concerns during the COVID-19 pandemic [4,9]. The current study group initially searched PubMed and EMBASE using the search terms "Infantile spasms OR West syndrome AND Teleneurology AND/OR COVID-19" until June 30, 2020, and later updated the literature review until September 20, 2020. The recommendations and guidelines developed by other societies/research groups were also searched.
Since the literature search did not reveal any specific recommendations or original research on teleneurology specific to IS during the COVID-19 pandemic, a simplified algorithmic teleneurology-based approach is being proposed, keeping the general recommendations by the Child Neurology Society (CNS) and our group as the basis [3,9]. These recommendations and the concerns raised by the group members were discussed through multiple correspondences. The initial draft was formulated and revised on Google Docs by the authors based on the available evidence, expertise, practicality in their countries, and consensus. The concerns raised by group members about the algorithm were discussed and addressed through multiple emails, and modifications were made to the algorithm based on suggestions. Telemedicine tools Despite the lack of organized telehealth facilities, there is easy access to smartphones in developing countries [7]. Tools with video modes of communication, such as video calls on a chat platform, Skype, video conferencing solutions, etc., may be the preferred options [7,9]. Unlike the USA, the South Asian and many other developing countries do not follow Health Insurance Portability and Accountability Act (HIPAA) standards. However, country-specific recommendations for telemedicine need to be practiced. Patient confidentiality is of prime importance. WhatsApp, an end-to-end encrypted chat platform, may be useful in many developing countries since it ensures that the patient data is not available to any private company. Hence, such tools may be used for interaction, sharing of patient videos, and providing prescriptions. Some legal safeguards which may be helpful in the context of IS include proper documentation and recording of the consultation after parental consent, running a checklist of signs (including critical events) suggesting a need for an in-person consult before the beginning of the teleconsult, maintaining a record of prescriptions, etc. Specific safeguards in each country also need to be considered [8,10]. First evaluation At the first evaluation, video-teleconsultation or in-person consultation is preferred, depending on travel restrictions and COVID-19 transmission in the region. Teleconsultation with the managing pediatrician or pediatric neurologist should be in liaison with a local health provider involved in the patient's care. Each new consult should begin with general information (relevant history including clinical, family, past medical, perinatal, developmental, and treatment history, and a focused physical/neurological examination). Detailed information on spasms, e.g., age at onset, clustering, type, relation with the sleep-wake cycle, and burden, should be sought. Assessment of home videos of habitual events is strongly encouraged. Efforts should be made to determine the etiology [history of prior brain insult, e.g., neonatal asphyxia, hypoglycemia, infections, or trauma; the presence of neurocutaneous abnormalities like ash-leaf spots (tuberous sclerosis complex), etc.]. Comorbidities should be looked for carefully. Besides a pretreatment screen to rule out any infections including tuberculosis, baseline parameters such as weight and blood pressure (BP) should be recorded. Baseline urine and blood sugar, electrolytes, liver enzymes, complete blood count, etc. should be recorded if possible.
Electroencephalogram In agreement with the CNS recommendations, an outpatient EEG comprising at least one sleep-wake cycle is advisable for confirmation of diagnosis, but treatment initiation should not be delayed if EEG is not feasible [3]. EEG should preferably be done at a place with expertise in hypsarrhythmia reporting. If conducted regionally, the technician and the reporting person should be well informed about the expectations from the report. Application of objective scoring for hypsarrhythmia may be considered based on feasibility and the availability of expertise. Various available scores are the Burden of Amplitudes and Epileptiform Discharges (BASED) score, the score by Kramer and colleagues, the score by Jeavons and Bower, etc. [11][12][13]. The BASED score appears to be a simple, feasible, and reliable tool with favorable interrater agreement [11]. EEG confirmation of hypsarrhythmia would be useful for the initial diagnosis and the subsequent decision-making. However, it is important to note that hypsarrhythmia is not always present and is not essential to diagnose IS [14]. Availability of EEG and Magnetic Resonance Imaging (MRI) facilities is also a major limitation in many developing countries. Creation of this infrastructure will require funds and may not be possible acutely during this hour of crisis. Hence, gradual but persistent efforts towards creating a framework for the widespread availability of these investigations may be helpful. Efforts to improve sensitivity and specificity of diagnosis and reduce treatment lag In addition to objective EEG scoring, awareness regarding the semiology, and the development and validation of an objective clinical scoring system based on history and event videos, may be beneficial. Also, easy access to this objective score in a smartphone application may ease the diagnostic process for local health providers and pediatricians. Furthermore, training of local health care providers and pediatricians through webinars on the diagnosis and management of IS will also be advantageous. Establishing a linkage facility for pediatricians to reconfirm their observations and management decisions with pediatric neurologists may also be helpful. Besides, creating awareness among parents through mass media (screening children with neonatal brain injury during visits for immunization) will probably reduce the treatment lag. Initial treatment advice The initial choice should be one of the standard first-line medications: ACTH, prednisolone, or vigabatrin. Contraindications for high-dose hormonal therapy include acute infections, a history of clinical infection caused by herpes or cytomegalovirus, and congestive heart failure. In accord with the CNS recommendations, high-dose oral prednisolone may be the preferred initial therapy during the COVID-19 pandemic [3]. Preferably, syrups/suspensions should be prescribed, and printed advice regarding the dispensing and administration of the liquid formulation using a syringe, the possible adverse effects of medications, and danger signs should be provided. Caregivers should be advised to record the daily burden of spasms, changes in seizure types, and adverse effects of antiseizure medications (e.g., irritability, sleep disturbances, any new-onset symptoms of infections like fever, cough, respiratory distress, etc.). There is a risk of hypertension with hormonal therapy, requiring BP monitoring [3,15]. Availability of an appropriate-sized BP cuff for infants is an issue. Further, BP monitoring at home is challenging; hence it may be done at a nearby healthcare facility.
Urine sugar may be monitored at home. Besides, repeated emphasis should be given to general precautions such as the use of masks, social distancing, and hand hygiene. Follow-up The first follow-up should be scheduled one week after treatment initiation (or earlier depending on clinical need) for assessing compliance, tolerability, and response to therapy. The next teleconsultation should be at two weeks of therapy. An EEG is desirable to demonstrate resolution of hypsarrhythmia at this point; however, feasibility and access may be issues. Depending on the therapeutic response, the initial therapy may be continued, switched, or supplemented with another first-line drug. Subsequent follow-ups should be scheduled once every two weeks for assessment of therapeutic response, drug compliance, adverse effects, and the need for modification of therapy. All the available first-line options need to be exhausted before going to second-line therapies. It is important to reiterate that teleconsultation should be switched to in-person consultation in the event of diagnostic or management uncertainty. Conclusion Despite the innumerable pros of teleneurology, there are some limitations, such as the lack of detailed examination, limited access to investigations, problems with infrastructure and internet facilities, confidentiality, and the legal implications involved. However, it seems to be a simple and convenient tool during the COVID-19 pandemic to provide optimal care while minimizing hospital visits. Although formal evaluation of the effectiveness of this approach is evolving, the advantages of early diagnosis and reduction in treatment lag are presumed to outweigh these limitations. Future studies should be done to validate this approach and algorithm in developing countries. Ethical publication statement We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines. Study funding None. Authors' contribution Priyanka Madaan and Jitendra Kumar Sahu contributed to the planning of the study, the literature search, participation as experts, the preparation of the initial draft of the manuscript, and its revision for intellectual content.
Biventricular strain by speckle tracking echocardiography in COVID-19: findings and possible prognostic implications

The COVID-19 infection adversely affects the cardiovascular system. Transthoracic echocardiography has demonstrated diagnostic, prognostic and therapeutic utility. We report biventricular myocardial strain in COVID-19. Methods: Biventricular strain measurements were performed for 12 patients. Patients who were discharged were compared with those who needed intubation and/or died. Results: Seven patients were discharged and five died or needed intubation. Right ventricular strain parameters were decreased in patients with poor outcomes compared with those discharged. Left ventricular strain was decreased in both groups but was not statistically significant. Conclusion: Right ventricular strain was decreased in patients with poor outcomes and left ventricular strain was decreased regardless of outcome. Right ventricular strain measurements may be important for risk stratification and prognosis. Further studies are needed to confirm these findings. The COVID-19 infection, which occurs as a result of infection with the novel coronavirus SARS-CoV-2, has firmly established itself as a pandemic that has permeated every aspect of our lives. SARS-CoV-2 is a single-strand RNA virus that gains entry into cells by binding ACE2 [1]. It affects multiple organ systems, including the cardiovascular system. Elevated cardiac biomarkers (e.g., troponins and BNP), heart failure, arrhythmias, myocarditis and acute coronary syndromes have been described in the literature, with the underlying mechanisms thought to be, among others, direct viral injury, increased cardiac stress secondary to hypoxemia, systemic inflammation and a prothrombotic state [2,3]. Various diagnostic and management strategies addressing all the affected organ systems have been proposed and investigated. The foundation of diagnostic cardiac imaging remains transthoracic echocardiography (TTE), a portable and useful imaging modality that is recommended for patients with COVID-19 when it may provide relevant information impacting clinical management [4]. Both right ventricular (RV) and left ventricular (LV) systolic function and diastolic function can be evaluated, with RV dysfunction seen most commonly in patients with clinical deterioration [5,6]. Between 32% and 55% of patients have been reported to have normal transthoracic echocardiograms [5,7]. In critically ill patients, TTE can quickly ascertain hemodynamic status and impact a patient's therapeutic course [8,9]. Despite these findings that can be obtained, thought leaders have recommended screening the appropriateness of studies, suggested workflow adjustments and encouraged focused examinations in order to minimize exposure time [10]. Myocardial strain measurement by speckle-tracking echocardiography, which can measure LV global longitudinal strain (LVGLS), RV free wall strain (RVFWS) and RV global strain (RVGS), plays a diagnostic and prognostic clinical role in several cardiac diseases and provides objective quantification of biventricular myocardial deformation and dynamics [11][12][13]. A recent study reported the prognostic capability of RVFWS in a large Chinese cohort affected by COVID-19, but examination of the effect on both ventricles was beyond the scope of the investigation [14].
Given the unfortunate volume of patients seen in our institution in New York City, an epicenter of COVID-19, we sought to determine whether the combination of LVGLS, RVFWS and RVGS is affected by COVID-19 infection and is associated with adverse outcomes such as death or the need for intubation and mechanical ventilation. Methods The study was approved by the Institutional Review Board. TTEs were performed based on American Society of Echocardiography (ASE) guidelines, with tailoring of image acquisition according to the specific test indication [4,15]. All TTEs were reviewed by two expert echocardiographers and only patients with optimal visualization of both ventricles were included in the study. Apical three-chamber, two-chamber and four-chamber views were used for LVGLS and an RV-focused four-chamber view was used for RVFWS and RVGS. If all four of these views were obtained with optimal visualization, these patients were identified as candidates for biventricular strain measurement. Measurements were performed offline using QLab 13.0 (Philips, Best, the Netherlands). Two patients had their TTE performed before ultimately being intubated, while two had their studies after clinical decompensation requiring intubation. All patients but one were on anticoagulation therapy and no patients had documented pulmonary embolism (either through clinical suspicion or computed tomography angiography). Demographic data were collected, as well as pertinent laboratory parameters, including white blood cell count and differential. Biochemical markers such as CRP, D-dimer, troponin and BNP were also collected [16]. Categorical variables were presented as percentages (%); continuous variables as mean ± standard deviation (SD) for normally distributed variables and median (interquartile range [IQR]) for others. Student's t-test, Kruskal-Wallis and chi-square tests were performed to examine differences between the groups. Statistical analysis was performed using STATA 14.0 MP (StataCorp LP, TX, USA). Results From 103 clinically appropriate TTEs performed on hospitalized COVID-19 patients, 12 (12%) were of adequate quality for biventricular speckle tracking echocardiography (STE) analysis and were included in this single-center, retrospective study. We compared the prevalence of demographic, laboratory, biochemical and inflammatory markers and echocardiographic parameters between patients who required intubation and/or died and those who survived to discharge (Table 1). Four patients died while intubated from hypoxic respiratory failure, most likely secondary to acute respiratory distress syndrome, and another died suddenly prior to intubation due to arrhythmia. RV and LV dysfunction were reported in 41.7% and 58.3% of patients, respectively. Both mean RVGS and RVFWS were significantly decreased in the patients who had poor outcomes compared with those who did not; the mean RVGS and RVFWS were -10.2 ± 3.7% versus -20.3 ± 6.1% (p = 0.007) and -9.8 ± 3.8% versus -21.5 ± 6.5% (p = 0.007), respectively. The mean LVGLS was severely low, but similar, in both groups at -11.9 ± 4.6% and -11.95 ± 4.5%. BNP levels were available for 11 of the 12 patients at the time of hospital presentation; while those who had poorer clinical outcomes had a numerically higher median BNP level compared with patients who survived to discharge, the difference was not statistically significant (553.1 pg/ml [IQR: 3226.1] vs 20.5 pg/ml [IQR: 2103.0], respectively; p = 0.8).
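A minimal sketch of the comparisons described above, using SciPy with toy numbers in place of the study data (the values, group sizes, and contingency counts below are illustrative only):

```python
# t-test for normally distributed variables, Kruskal-Wallis for skewed ones
# (e.g., BNP), and chi-square for categorical frequencies.
import numpy as np
from scipy import stats

discharged = np.array([-20.3, -22.1, -18.7, -25.4, -14.9, -19.8, -21.0])  # RVGS %, illustrative
poor_outcome = np.array([-10.2, -7.5, -12.9, -13.6, -6.8])

t, p_t = stats.ttest_ind(discharged, poor_outcome)
print(f"t-test on RVGS: p = {p_t:.3f}")

bnp_discharged = [20.5, 15.0, 2100.0, 35.2, 10.1, 88.0, 12.3]             # skewed, illustrative
bnp_poor = [553.1, 3200.0, 40.0, 1500.0, 95.0]
h, p_kw = stats.kruskal(bnp_discharged, bnp_poor)
print(f"Kruskal-Wallis on BNP: p = {p_kw:.3f}")

# hypothetical 2x2 contingency table: RV dysfunction counts by outcome group
table = np.array([[1, 6], [4, 1]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square on RV dysfunction: p = {p_chi:.3f}")
```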
There was no correlation between BNP and RVFWS (R 2 = 0.03; p = 0.6) or RVGS (R 2 = 0.2; p = 0.2). No significant difference was observed in the pulmonary artery systolic pressure measured by TTE. Discussion To our knowledge, this is the first study in the literature evaluating biventricular mechanics by simultaneous LV and RV strain imaging in COVID-19 patients. We report firstly that, while we were able to measure biventricular mechanics in only 12% of all TTEs performed in our hospital for COVID-19 patients, both RVGS and RVFWS were significantly decreased in patients with poor outcomes; and secondly that LVGLS was severely decreased in all patients regardless of their outcome (either survival to discharge or death) and/or requirement for endotracheal intubation. The mean LVGLS was -11.93 ± 4.2% in all studied patients, which has not been described before in the literature. In contrast, the mean LVGLS in normal patients using the Philips software platform was reported to be 18.8%, with a lower limit (two SDs) of 15.2% [17]. While LVGLS was decreased in all patients, RVGS and RVFWS were only significantly decreased in patients with adverse outcomes. The reduced RV strain measurements in the patients with poor outcomes are likely multifactorial in etiology; they may be due to more advanced baseline pulmonary disease, worse hypoxemia, pulmonary vasoconstriction and afterload mismatch. However, there was no significant difference in the pulmonary artery systolic pressures in the two groups. In a recent study, Li et al. measured only RVFWS and showed that it was a powerful predictor of mortality. They described that patients in the lowest tertile of RVGS (-10.3% to -20.5%) had a higher percentage of acute respiratory distress syndrome (ARDS; 52.5%) and death (32.5%) [14]. We also found that RVGS, in addition to RVFWS, was severely decreased in patients who needed intubation and/or died. Because the LVGLS in all of our studied patients was severely reduced, one would have expected that RVGS, which includes the ventricular septum, would have been decreased in both groups as well; however, this was not the case. This is an interesting observation and is hypothesis generating: is it possible that the response to COVID-19 is different in the two ventricles and that a higher inflammatory burden is required for the RV mechanics to be affected and eventually result in poor outcomes? Further study is required to better elucidate this. This study has a number of significant limitations, most notably the small sample size. Additionally, this was a single-center study and a high percentage of cases had inadequate imaging for STE analysis. Thus, the findings are more hypothesis generating. TTEs were performed using a limited protocol for safety reasons, primarily to limit the duration of sonographer exposure to patients with COVID-19 infection. This is in line with the recommendations of major cardiovascular societies [17]. Therefore only 12% of studies had suitable imaging for biventricular STE analysis. Factors such as endotracheal intubation, high positive end-expiratory pressures with mechanical ventilation, patient positioning and body habitus limited the image quality for strain analysis. For example, the mean body mass index in our cohort was 28.2 ± 6.5, while the mean in the study by Li et al. was 23.7 ± 3.0 [14].
These factors can result in a significant selection bias; patients with acceptable echocardiographic windows may have had a lower body mass index or may have been able to position themselves better because they were less symptomatic and had a less severe disease process secondary to COVID-19.

Conclusion
In patients with COVID-19 infection, LV global longitudinal strain, RV global strain and free wall strain were altered. This became more apparent when dividing our cohort into patients who survived to discharge and those who had adverse outcomes. In the acute setting, off-line measurements of RVGS and RVFWS may be important tools for risk stratification and prognosis of COVID-19 patients. Moving forward, and especially in the setting of a second wave of COVID-19 infection, routine measurement of these parameters may be useful in risk stratifying patients and guiding clinical management. In addition, the effect of investigational therapies such as convalescent plasma, remdesivir, tocilizumab and others on these parameters will be of great interest. With increased availability of personal protective equipment to protect our echocardiography staff, we believe that more optimal imaging can be achieved for biventricular strain analysis in COVID-19 patients. Further studies with larger patient sample sizes will be needed to confirm and expand on our findings.

Executive summary
• COVID-19 is a viral infection with widespread effects, particularly for the cardiovascular system.
• Myocardial strain imaging by speckle tracking echocardiography has prognostic implications in a number of conditions.
• Biventricular myocardial strain has not been previously evaluated in COVID-19 infection.
• This will require further study with larger sample sizes.

Financial & competing interests disclosure
High-throughput screening alternative to crystal violet biofilm assay combining fluorescence quantification and imaging
The crystal violet assay is widely used for biofilm quantitation despite its toxicity and variability. Here, we instead combine fluorescence labelling with the Cytation 5 multi-mode plate reader to enable simultaneous acquisition of both quantitative and imaging biofilm data. This high-throughput method produces more robust data and provides information about morphology and spatial species organization within the biofilm.
Biofilms are microbial communities commonly composed of mixed bacterial species where frequent inter- and intraspecies interactions occur (Costerton et al., 2003; Røder et al., 2016; Tan et al., 2017). The use of biofilms as model systems for investigating such interactions creates a need for suitable tools that enable high-throughput screening of the adhesive capabilities of the contributing species and their synergistic effects. Even though multiple methods have been developed for studies of such interactions, there are still many limitations regarding reproducibility and resolution (Azeredo et al., 2017). Among these methods, crystal violet (CV) staining of biofilms in microplate wells and pegs (Christensen et al., 1985; Ceri et al., 1999; Stepanović et al., 2000) is one of the most extensively used platforms for high-throughput quantification of biofilm biomass (Djordjevic et al., 2002; Extremina et al., 2011; Merritt et al., 2011; Røder et al., 2015; Doll et al., 2016). Crystal violet binds negatively charged molecules and thus stains both bacteria and the surrounding biofilm matrix. However, the use of CV as a quantitative method has many limitations, including i) toxicity (Merck, 2017); ii) unspecific binding to negatively charged molecules; and iii) low reproducibility (Peeters et al., 2008; Kragh et al., 2019) due to uneven dye extraction or differential removal of biofilm biomass throughout the washing steps. Other methods involve dyes staining specific biofilm components such as nucleic acids, or chromosomal tagging with fluorescent proteins for strains compatible with genetic manipulation. These approaches have previously been used for imaging and quantifying microbial biofilms (Lawrence et al., 1998; Klausen et al., 2003; Peeters et al., 2008; Larrosa et al., 2012; Tolker-Nielsen and Sternberg, 2014; Sanchez-Vizuete et al., 2015; Stiefel et al., 2015), but are so far not applicable for high-throughput screenings. In this study, we combine quantitative and imaging features in a single method to facilitate high-throughput biofilm screening for, e.g., genetic mutants, growth conditions or species combinations in microbial communities. We assessed biofilm adhesion and variation by the conventional CV assay compared to i) fluorescent staining with SYTO 9 and ii) gfp-tagged strains using the Cytation 5 instrument, a multi-mode plate reader integrating imaging and quantitative capabilities.
We were interested in comparing CV staining and fluorescence (FL) for biofilm biomass quantification of isogenic strains (mutant screening) but also of more complex biofilms such as multispecies biofilms. For the former, we chose Pseudomonas putida KT2442 (wt), which undergoes rapid biofilm dispersal in response to nutritional stress, and its derivative mutant MRB1, which is resistant to dispersal due to a mutation in the lapG protease gene (Gjermansen et al., 2005; López-Sánchez et al., 2013). Additionally, we tested a four-species community (SPMX) composed of Stenotrophomonas rhizophila, Paenibacillus amylolyticus, Microbacterium oxydans and Xanthomonas retroflexus, where only the latter is capable of forming abundant biofilm in monoculture in microplate wells. These strains have shown a strong synergy in mixed vs. single cultures and have been extensively used as a model for investigating interactions in multispecies biofilms (Liu et al., 2021; Ren et al., 2014, 2015; Hansen et al., 2016; Herschend et al., 2017). Biofilm formation in microtiter plate wells was quantified using a modified method (O'Toole and Kolter, 1998). Three replicate cultures of each strain were grown overnight (16 to 20 h) in 5 ml LB at 30 °C. Fluorescence was measured using both endpoint and well area scan measurements (Fig. 3A, 5 × 5 points; well diameter 6604 μm; probe diameter 2000 μm), expressed in relative fluorescence units (RFU). For biofilm biomass calculations, average values, standard deviations and coefficients of variation were used. For statistical analysis, Welch's t-test was generally applied, unless otherwise stated, with P-values <0.05 considered significant. Šidák's correction was applied in multiple comparisons (MC), and Pearson's correlation coefficients (r) and corresponding P-values were calculated for comparing endpoint vs. area scanning methods (supplementary). All statistical analyses were done using GraphPad Prism 6.01 (GraphPad Software, Inc.). Fluorescence images were acquired with the Cytation 5 Gen5 software Image Prime 3.10 using manual mode with LED intensity 10, integration time 5 ms and camera gain 22.9, with a 20× PL FL objective (Olympus) and GFP 469,525 filter cube (P/N 1225101). Three replicate wells were recorded, two images per well, and random pictures are shown. We compared quantification of biofilm biomass between SYTO 9 staining and CV for X. retroflexus monospecies (X) vs. mixed-species biofilms (SPMX). When using fluorescent staining, we observed a significant 3.3-fold induction of biofilm biomass in the bacterial community compared to the single species (P < 0.0001, two-tailed Welch-corrected t-test) (Fig. 1A, Table 1), while CV staining showed a non-significant difference (P = 0.0819, Welch-corrected t-test) (Fig. 1B). Variation was similar for both methods, reflecting that more complex communities or types of biofilm produced may have an impact on biomass variability. However, the discrimination level differed depending on the staining method used. Mono- and multispecies biofilms had significantly higher biomass than the blank when stained with SYTO 9 but not with CV (Šidák's MC test, P < 0.05, Table S1). Imaging of the wells also provided evidence of a distinct biofilm for the multispecies combination regarding surface coverage and topology (Fig. 1C), consistent with the fluorescent quantitative data (Fig. 1A).
Additionally, differences in estimated biomass in X. retroflexus monospecies biofilms may be explained by the nature of both stains, since CV stains cells and the extracellular matrix while SYTO 9 stains DNA and, to a lower extent, eDNA (Li et al., 2003). Next, we examined biofilm biomass and structure in a chromosomally gfp-tagged P. putida KT2442 strain and its isogenic mutant MRB1 (Fig. 2). Comparison of the P. putida fluorescently tagged strains resulted in a significant difference between wild-type and mutant strains (P < 0.0001, Welch-corrected t-test) irrespective of the quantification method, with similar mutant vs. wt biomass ratios (fluorescence, 1.9-fold; CV, 1.8-fold; Table 1). CV yielded higher variation regardless of the strain tested (FL 4-13%; CV 20-33%). The sensitivity of both assessment methods was also evaluated by comparison with blank values (Table S1). MRB1 biofilm was significantly different from the baseline when measured by fluorescence (Šidák's MC test, P < 0.001), unlike KT2442 biomass regardless of the quantification method (P = 0.1077, FL; 0.9997, CV). Besides the increased biomass in the MRB1 mutant, imaging of the wells also evidenced differential biofilm structure in the two strains, showing scattered, adhered wild-type cells in comparison with a more complex microcolony structure covering a larger surface area for the MRB1 mutant (Fig. 2C). We further tested whether the fluorescence scanning method would influence the results and variation. The Cytation 5 enables two types of readings: endpoint measurement acquires a single measurement in the centre of each well, resulting in faster readings, and is available in all plate readers. In contrast, area scan acquires multiple measurements in each well (Fig. 3A), and thus a more integrated reading of the entire well. We measured fluorescence of KT2442-gfp, MRB1-gfp and non-inoculated wells to assess the variability but also the discrimination level of both scanning methods. We observed a significant positive correlation between endpoint and area scan fluorescent measurements irrespective of the type of biofilm tested (r = 0.8634, KT2442-gfp; r = 0.4412, MRB1-gfp; r = 0.6111, blank), although the strongest correlation was found for KT2442-gfp (Fig. 3B). Fluorescence of KT2442-gfp was significantly different from that of MRB1-gfp regardless of the method used (P < 0.0001) but not from the blank (P = 0.4580, endpoint; 0.3112, area scan). This suggests that the sensitivity of the method could be a limitation for strains producing low amounts of biofilm. Biofilm type accounted for most of the variation (88.27%), unlike the scanning method (1.36%), even though both factors were found significant (two-way ANOVA, P-value <0.0001). Variability was generally higher using endpoint mode than area scan, but only MRB1-gfp showed a significant difference (Fig. S1, Table S2). Thus, we can conclude that biofilm complexity rather than measurement mode was responsible for the variation found, and endpoint reading may therefore safely be used when area scan is not available, even though it may not be suitable for all types of biofilm topology.
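As a hedged illustration of the statistics used above (Welch's t-test, the endpoint vs. area-scan Pearson correlation, and coefficients of variation) — not the authors' GraphPad Prism workflow — the following minimal Python sketch operates on hypothetical RFU arrays standing in for plate-reader exports.

```python
# Hedged sketch only (the authors used GraphPad Prism 6.01). All RFU values
# below are hypothetical placeholders for illustration.
import numpy as np
from scipy import stats

mono = np.array([1200.0, 1350.0, 1100.0])    # hypothetical RFU, X (monospecies)
mixed = np.array([4100.0, 3900.0, 4400.0])   # hypothetical RFU, SPMX (4 species)

t, p = stats.ttest_ind(mono, mixed, equal_var=False)   # Welch's t-test
fold = mixed.mean() / mono.mean()                      # fold induction

endpoint = np.array([900.0, 1500.0, 1250.0, 980.0, 1600.0, 1400.0, 1100.0])
areascan = np.array([950.0, 1450.0, 1300.0, 1000.0, 1550.0, 1380.0, 1150.0])
r, p_r = stats.pearsonr(endpoint, areascan)            # endpoint vs. area scan

cv = areascan.std(ddof=1) / areascan.mean() * 100      # coefficient of variation, %
print(f"Welch p = {p:.4f}, fold = {fold:.1f}, r = {r:.2f}, CV = {cv:.1f}%")
```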
This study presents an alternative to the CV assay for monospecies but also more complex biofilms, in terms of robustness, simplicity and information recorded, such as biofilm morphology. Unlike the CV assay, biofilm quantification using the Cytation 5 does not involve additional dye incubation, extraction or washing steps, making biofilm processing milder and faster for high-throughput screenings. Viability of stained cells should nonetheless be tested in case subsequent physiological assays or continuous monitoring are wanted. Other studies have reported downsides of SYTO 9 staining associated with different binding affinity to live and dead cells or permeability in Gram-positive and Gram-negative cells (Stiefel et al., 2015; McGoverin et al., 2020). However, there is a wide array of nucleic acid and biomass stains that can circumvent this problem (Thermo-Scientific, 2014). Even though we used the Cytation 5 imaging reader, the comparison of endpoint and area scan measurements proves that this workflow is also compatible with conventional plate readers and microscopy. We envision our approach as a streamlined alternative to CV quantification that could facilitate high-throughput biofilm screenings for, e.g., genetic mutants, growth conditions or species combinations in microbial communities, but also the acquisition of topological information of such biofilms. The authors would like to thank Anette Løth and Ayoe Lüchau for their assistance in the laboratory and Susanne Schoells for lending us a Cytation 5 demo version. This study was funded by grants from the
Fig. 1. Biofilm formation of Xanthomonas retroflexus (X) mono- and mixed cultures (SPMX) after 24 h. Biomass quantitation of 24-h biofilms stained with SYTO 9 (A, fluorescence area scan) or crystal violet (B, 590 nm). Black dots outside the boxes denote outliers. C, Images of X. retroflexus (X) and multispecies (SPMX) biofilms stained with SYTO 9, 20× PL FL objective. Scale bar corresponds to 72 μm. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 2. Biofilm formation of gfp-tagged Pseudomonas putida KT2442 and MRB1 after 24 h. Biomass quantification of 24-h biofilms (A, fluorescence area scan) or crystal violet (B, 590 nm). Black dots outside the boxes denote outliers. C, Images of KT2442-gfp and KT2442 MRB1-gfp biofilms, 20× PL FL objective. Scale bar corresponds to 72 μm. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 3. Biofilm biomass comparison of fluorescence by measuring method. A. Representation of area scan and endpoint measurement. B. Correlation of fluorescent readings using endpoint vs. area scan methods for Pseudomonas putida KT2442-gfp, P. putida KT2442 MRB1-gfp and blanks. Individual points represent 21 replicates corresponding to 3 biological replicates each including 7 technical replicates. Lines represent linear regressions of each data series with the 95% confidence interval (dotted lines).
CircAPP Competes with APP mRNA to Promote Malignant Progression of Bladder Cancer
Background: Bladder cancer (BCa) is the most common cancer of the urinary system, with a high recurrence rate and poor prognosis. Circular RNA (circRNA) is a novel subclass of noncoding RNA that participates in the progression of BCa. Here, we identified a novel circRNA, circAPP, and aimed to investigate its role in the progression of BCa. Method: Public RNA sequencing data were used to identify significant circRNAs related to BCa. The role of circRNAs in the progression of BCa was assessed by cytotoxicity assay, transwell assay and flow cytometry. Biotin-coupled RNA pull-down and fluorescence in situ hybridization (FISH) were performed to evaluate the interaction between circRNAs and miRNAs. Results: The expression of circAPP was higher in BCa tissues and cells than in normal samples. In vitro experiments showed that knockdown of circAPP inhibited cell proliferation and impeded the metastasis of BCa cells. Mechanistically, we demonstrated that circAPP acts as a sponge for miR-186-5p and promotes the expression of its host gene APP. Clinically, circAPP predicts worse overall survival of BCa patients, indicating its prognostic value. Conclusion: Our study identified that circAPP modulates metastasis of BCa through the miR-186-5p/APP axis and may serve as a promising prognostic biomarker for BCa, which provides novel insights into the treatment of BCa.
Introduction:
Bladder cancer (BCa) is the fourth most frequent cancer diagnosis in men and the most common malignant tumor of the urinary system. Bladder urothelial carcinoma (BUC) is the most prevalent subset of BCa, accounting for approximately 90% of all cases of BCa. Despite the development of treatment strategies for BCa, the 5-year survival of BCa is still unsatisfactory. Postoperative recurrence and distant metastasis make the five-year prognosis for advanced BCa worse. In this regard, identifying novel biomarkers and potential therapeutic targets for BCa diagnosis and treatment is urgently needed (1). Circular RNAs (circRNAs) are a class of single-stranded RNAs that form a closed loop by connecting the linear 5′ and 3′ ends (2). Due to their unique structure and stability, circRNAs are considered to be promising biomarkers for the prognosis and diagnosis of cancer patients, and are drawing growing attention (3). Regarding BCa, high-throughput sequencing and microarrays have identified a large number of novel dysregulated circRNAs in cell lines or tissues, indicating potential roles of these circRNAs in BCa development and progression (4,5). Previous studies revealed that several oncogenic and anti-oncogenic circRNAs could regulate many aspects of the malignant phenotype of BCa, including cell proliferation, cell cycle arrest, apoptosis, metastasis, angiogenesis and chemoresistance (6,7). For example, circITCH inhibited cell proliferation by sponging miR-224 to upregulate PTEN in BCa (8). In addition, RAB27A promoted proliferation and chemoresistance of BCa by inducing protein transport and small GTPase-mediated signal transduction. Despite the significant functions of circRNAs in BCa, more work is needed to identify novel circRNAs as prognostic biomarkers and to explore their mechanisms in carcinogenesis (9). In this study, we analyzed public RNA sequencing (RNA-Seq) data and verified an upregulated circRNA, circAPP, in BCa tissues. Functional experiments showed that circAPP expression promoted BCa invasion and proliferation.
Furthermore, we found that circAPP modulated the expression of its host mRNA, APP, by sponging miR-186-5p, which activated metastasis of BCa. Finally, circAPP was identified as a potential biomarker for the prognosis of BCa.
Methods:
Tissues and serum specimen collection. This study was approved by the Ethics Committee of the Nanjing Medical University Affiliated Cancer Hospital and was performed in accordance with the provisions of the Ethics Committee of Nanjing Medical University. We obtained written informed consent from all the patients. 40 paired human BCa tissues and adjacent normal tissues (ANT) were obtained from the Department of Thoracic Surgery, Jiangsu Cancer Hospital between 2010 and 2016 (Nanjing, China).
Cell cultures. The BIU-87 cells and 5637 cells were obtained from the Chinese Academy of Sciences Cell Bank, were authenticated by the providers by DNA-fingerprinting analysis or isoenzyme analysis, and tested negative for mycoplasma contamination. BIU-87 cells and 5637 cells were cultured in RPMI-1640 medium (Keygen Biotech, Nanjing, China) supplemented with 10% fetal bovine serum (Gibco, Grand Island, USA). Cells were maintained in an atmosphere of 5% CO2 in a humidified 37 °C incubator. Cells were authenticated by STR analysis at the Guangzhou Cellcook Biotech Co., Ltd. (Guangzhou, China) Characterized Cell Line Core Facility within the last three years and routinely tested negative for mycoplasma contamination.
Over-expression or knockdown of genes. The human circAPP linear sequence was obtained from esophageal squamous cell carcinoma tissues by PCR and inserted into the plasmid vector pcDNA3.1 (Hanbio, Shanghai, China). Human APP cDNA was amplified with PCR primers and subcloned into the pcDNA3.1 empty vector (Hanbio). The small interfering RNAs (siRNAs) against circAPP and mAPP were provided by RiboBio (Guangzhou, China). Transient transfection of the overexpression plasmids was performed using the Lipofectamine 3000 kit (Invitrogen, Carlsbad, CA), and transient transfection of siRNA was performed using the Lipofectamine RNAiMAX kit (Invitrogen), each according to the manufacturer's instructions.
RNase R treatment & quantitative PCR. Total RNA was isolated from cells and tissues using Trizol reagent (Life Technologies, Scotland, UK) according to the manufacturer's protocol, and RNA was extracted from serum with the miRNeasy Mini Kit (Qiagen, Hilden, Germany). Nuclear and cytoplasmic RNA was extracted using a nuclear and cytoplasmic RNA purification kit (Fisher Scientific, Vilnius, Lithuania). For RNase R treatment, 1 µg of total RNA was incubated for 30 min at 37 °C with or without 3 U of RNase R (Geneseed, Guangzhou, China). Reverse transcription was then performed using random hexamers (Takara, Dalian, China) and quantitative PCR (qPCR) was performed using SYBR Green master mix (Applied Biosystems, Vilnius, Lithuania). To quantify expression of circRNA transcripts, divergent primers were designed to amplify across the back-splicing junction. Amplification was performed using the StepOnePlus Real-Time PCR System (Applied Biosystems, Foster City, CA) and Ct thresholds were determined by the software. Expression was quantified using the 2^-ΔΔCt method, with GAPDH (for mRNA/circRNA) or U6 small nuclear RNA (for the nuclear RNA fraction) as reference genes; a worked sketch of this calculation is given at the end of the Methods section.
Western blotting. Briefly, total protein of cells was extracted using RIPA buffer (Thermo Fisher Scientific, Waltham, USA) with a cocktail of proteinase and phosphatase inhibitors (Thermo Fisher Scientific) according to the protocol.
Equal amounts of protein lysates were resolved by SDS-PAGE gels and then transferred onto a PVDF membrane (Millipore, Massachusetts, USA). After incubation with a primary antibody at 4 °C overnight, the membranes were hybridized with a secondary antibody at room temperature for 1 h. Blots were visualized using ECL detection (Thermo Fisher Scientific).
Transwell and Matrigel assay. For the migration assay, 4 × 10^4 cells were seeded into the upper transwell chambers with 8 µm pore filters (Millipore) in serum-free medium. For the invasion assay, 4 × 10^4 cells were seeded into the upper chambers with a Matrigel-coated membrane (Corning, Massachusetts, USA) in serum-free medium. The lower chamber contained medium with 10% FBS as a chemoattractant. After incubation for 24 h for migration and 48 h for invasion at 37 °C, non-migrating or non-invading cells were gently removed, and cells that had migrated to the bottom of the membrane were fixed with 4% paraformaldehyde, stained with crystal violet solution for 30 min, and visualized under a microscope at ×100 magnification.
Wound-healing assay. Transfected cells were cultured in 6-well plates. After the cells reached 90% confluence, a standard 200 µl pipette tip was used to scratch linear wounds, and the cell monolayers were cultivated in FBS-free medium. After scratching, images of the wound closure were captured at 0 and 36 h.
RNA pull-down. The biotin-labeled RNA probes of circAPP and scramble were synthesized by GenePharma Company (Suzhou, China). The RNA pull-down assay was performed using a Biotinylated Protein Interaction Pull-Down Kit (Thermo Fisher Scientific). In brief, 2 × 10^7 cells were incubated in lysis buffer on ice for 30 min. Streptavidin-coated magnetic beads were incubated with biotinylated probes at room temperature for 30 min. The beads-probe complex was added to the lysate and mixed at 4 °C for 2 h. The bound miRNAs were eluted from the packed beads and identified by qRT-PCR.
RNA fluorescence in situ hybridization assay and fluorescence immunocytochemical staining. RNA fluorescence in situ hybridization (FISH) assays were performed using an RNA-FISH kit (GenePharma, China) according to the manufacturer's instructions. A Cy3-labeled antisense probe against the junction site of circAPP was synthesized by GenePharma Company (Suzhou, China). Briefly, 5637 cells were fixed with 4% paraformaldehyde. After pre-hybridization with 1× PBS/0.5% Triton X-100, cells were blocked and hybridized in hybridization buffer with the Cy3-labeled probe at 37 °C overnight. Cells were stained with DAPI (300 nmol/L).
Statistics. All statistical analyses were performed with SPSS 25.0 software. Qualitative variables were analyzed by chi-square test or Fisher's exact test. For continuous variables obeying the normal distribution, Student's t-test was used to compare differences; otherwise, variables were compared using a nonparametric test. Differences between groups were compared using analysis of variance (ANOVA) when applicable or a nonparametric test. Correlation analysis was performed using the Pearson correlation coefficient method. Unless otherwise specified, the results are presented as the means ± standard deviation (SD). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.
Data availability. The circRNA microarray is available in the Gene Expression Omnibus (GEO) (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi) under accession number GSE147985. The source data of other figures are provided as a Source Data file. All other data are available from the authors upon reasonable request.
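As referenced in the qPCR methods above, the following is a worked sketch of the 2^-ΔΔCt relative quantification; the Ct values are hypothetical placeholders for illustration only.

```python
# Worked sketch of the 2^-ΔΔCt calculation (Livak method) referenced in the
# qPCR methods. Ct values are hypothetical; GAPDH serves as the reference gene.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Relative expression = 2^-ΔΔCt, with ΔCt = Ct(target) - Ct(reference)."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # e.g., tumor tissue
    d_ct_control = ct_target_control - ct_ref_control   # e.g., adjacent normal
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical example: circAPP in tumor vs. adjacent normal tissue.
fold = relative_expression(24.0, 18.0, 26.5, 18.2)  # ΔΔCt = -2.3 -> 2^2.3 ≈ 4.9
print(f"Relative circAPP expression (tumor vs. normal): {fold:.1f}-fold")
```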
Results:
Identification and characterization of circAPP in bladder cancer. We first analyzed the published circRNA microarray data of human bladder cancer (BCa) tissues and paired normal bladder tissues (1) and found that circAPP was significantly upregulated in BCa (Fig. 1A). CircAPP (circBase (2) ID: hsa_circ_0003323) is derived from exons 12 and 13 of the Amyloid Beta Precursor Protein (APP) gene (Fig. 1B). CircAPP was significantly increased in 40 paired BCa tissues compared to normal tissues (Fig. 1C). We confirmed that circAPP was relatively enriched in the 5637 BCa cell line and less enriched in the BIU-87 BCa cell line compared with the normal urothelial cell line SV-HUC-1 (Fig. 1D). CircAPP was only detectable in cDNA but not genomic DNA (gDNA) from 5637 cells by qRT-PCR with divergent primers, while mAPP could be amplified in both cDNA and gDNA using convergent primers (Fig. 1E). Besides, an RNase R digestion assay showed that the circular isoform was resistant to RNase R, whereas the linear isoform was obviously decreased after RNase R treatment (Fig. 1F). In addition, the subcellular localization of circAPP was examined by qRT-PCR analysis of nuclear and cytoplasmic fractions of 5637 cells and by FISH assay; circAPP was enriched in the cytoplasmic fraction and mainly distributed in the cytoplasm (Fig. 1G). Taken together, these results indicated that circAPP is upregulated in BCa tissues and cell lines and is predominantly localized in the cytoplasm.
CircAPP promoted the invasion and migration of BCa cells. To investigate the potential biological effect of circAPP on BCa cells, we established circAPP stably overexpressing BIU-87 cell lines by transfection with a circAPP vector. We also used RNA interference (siRNA) to silence the expression of circAPP in 5637 cells. The overexpression and knockdown efficiencies of circAPP, mAPP and PreAPP were detected by qRT-PCR analysis; surprisingly, we found that both circAPP and mAPP were upregulated upon transfection of the overexpression plasmid, and both circAPP and mAPP decreased after siRNA transfection (Fig. 2A-B). The results of transwell, Matrigel and wound-healing assays demonstrated that circAPP facilitated the aggressiveness of BCa cells (Fig. 2C-E). To explore the mechanism by which circAPP regulates the mRNA of its host gene, we analyzed the mAPP CLIP-seq data in the StarBase database (3); 153 miRNAs were recognized as able to interact with the 3′ UTR region of mAPP. Nine miRNAs could potentially be sponged by circAPP according to StarBase under tight screening criteria. Comprehensively analyzing the two datasets, six miRNAs were in common (Fig. 3A). After screening the expression correlation between candidate miRNAs and mAPP, we found that only miR-186-5p was significantly negatively correlated with the expression of mAPP (Fig. 3B). The positions of putative binding sites in circAPP and mAPP were analyzed in StarBase (Fig. 3C). To explore whether circAPP and mAPP can act as effective miRNA sponges, we performed an Argonaute 2 (Ago2) RNA immunoprecipitation (RIP) assay. The results demonstrated that both circAPP and mAPP efficiently bound Ago2 protein (Fig. 3D). Biotinylated circAPP, mAPP and scramble probes were designed and applied in an RNA pull-down assay. The pull-down efficiency was verified in 5637 cells by qRT-PCR.
The results revealed that both circAPP and mAPP could pull down miR-186-5p (Fig. 3E-F).
The expression of circAPP was positively related to the expression of mAPP. We first applied qRT-PCR to examine the correlation between circAPP and mAPP in 40 BCa tissues and matched adjacent normal tissues, and found that circAPP was significantly positively correlated with mAPP (Fig. 5A). Like circAPP, mAPP was upregulated in BCa tissues (Fig. 5B). In the TCGA database, Kaplan-Meier survival curves revealed that BCa patients with higher miR-186-5p expression levels had a better overall survival (OS) (Fig. 5C), while BCa patients with higher mAPP expression levels had a worse prognosis (Fig. 5D). In BCa tissues, the results of an in situ hybridization (ISH) assay revealed that the expression of APP was positively associated with the expression of circAPP (Fig. 5E). Taken together, these findings suggested that circAPP regulates mAPP in BCa and could serve as a potential diagnostic and therapeutic biomarker for BCa. In conclusion, we identified a novel oncogenic player, circAPP, from the BCa circRNA microarray and verified the results using 40 paired BCa tissues and BCa cell lines. We showed that circAPP promoted the metastasis of BCa cells. Importantly, circAPP could increase the expression of its host gene APP by competing for endogenous miR-186-5p. Therefore, circAPP may be a promising independent prognostic biomarker and potential target in BCa therapy (Fig. 6).
Discussion:
In this study, we explored the effect of circAPP on the metastasis of BCa and demonstrated the regulatory mechanism of the miR-186-5p/APP positive feedback loop pathway. We first discovered that circAPP is frequently upregulated in BCa and correlated with poor patient prognosis, indicating its applicability as a promising prognostic biomarker in BCa. In addition, we demonstrated that inhibition of circAPP reversed the metastasis of BCa cells and thus inhibited the progression of BCa. Furthermore, we revealed that circAPP acted in a positive feedback loop and regulated the expression of APP via miR-186-5p. These results suggested that circAPP might promote the progression of BCa. Early on, circRNAs were defined as a type of circular RNA transcript arising from aberrant RNA splicing and were initially regarded as functionless byproducts (1). However, circRNA functions in cancer have been increasingly reported with the rapid spread of high-throughput sequencing (10). Based on published studies, circRNAs can be involved in various pathological processes by acting as miRNA sponges, interacting with RNA-binding proteins, regulating transcription or splicing, and translating proteins (11)(12)(13). The role of circRNAs and the underlying mechanisms in BCa has been reported before; however, more specific mechanisms of circRNAs in the metastasis of BCa need to be further identified. Using published circRNA microarray data of human bladder cancer (BCa), we identified an upregulated circRNA, circAPP (circBase (2) ID: hsa_circ_0003323), which is derived from exons 12 and 13 of the Amyloid Beta Precursor Protein (APP) gene (14,15). Most circRNAs are generally recognized to be expressed at low levels in tumors, probably due to the RNA splicing process being affected by an accelerating cellular proliferation rate (16). However, high-throughput sequencing technology has identified several circRNAs enriched in tumor tissues. In order to confirm the trend, we validated the expression of circAPP in 40 paired tumor and normal tissues, which was consistent with the sequencing results.
In addition, loss-of-function experiments revealed that knockdown of circAPP inhibited the metastasis of BCa cells in vitro, indicating that circAPP might play a vital role in the progression of BCa. The host gene of circAPP, amyloid precursor protein (APP), is a transmembrane precursor protein widely expressed in the central nervous system and peripheral tissues, including the liver and bladder (15,17). Published studies revealed that APP is usually cleaved to produce a variety of short peptides, which exert different physiological properties and functions in metabolic disease and cancers. Tsang et al. found that APP is cleaved to generate sAPPα, mediating breast cancer migration and proliferation (18). In addition, they reported in another study that patients with positive APP expression may require vigilant monitoring of their disease and more aggressive therapy (19). Furthermore, Zhang et al. revealed that APP was significantly increased in human bladder cancer tissues compared with matched normal bladder tissues, and that its knockdown inhibited the proliferation, migration and invasion of human bladder cancer cells (20). Mechanistically, knockdown of APP significantly decreased the phosphorylation of extracellular regulated protein kinases (20). Evidently, APP is an oncogene in BCa. Intriguingly, circRNAs can regulate or facilitate the function of their host genes in disease progression in multiple ways. For example, Su et al. uncovered that circPHIP enhances malignancy via miR-142-5p, which directly targets the expression of PHIP and ACTN4 (4). In addition, SMO-193a.a, encoded by circSMO, induced SMO activation by interacting with SMO, enhancing SMO cholesterol modification, and releasing SMO from the inhibition of Patched transmembrane receptors (3). These results indicated a potential relationship between circRNAs and host genes, which may be one of the significant functions of circRNAs. In order to explore the relationship between circAPP and APP, we analyzed their expression correlation and identified a positive correlation between them, indicating a potential regulatory role. Mechanistically, sponging miRNAs is one of the most common and significant roles of circRNAs in regulating the progression of cancer, constructing a competitive relationship between circRNAs and targeted mRNAs. Previous studies identified that most circRNAs have miRNA-binding sites, and since then the miRNA sponge function of circRNAs has been comprehensively investigated in many biological processes (10). Li et al. revealed that circARNT2 functions as an oncogene by sponging miR-155-5p, leading to PDK1 upregulation, and finally sensitizes HCC cells to cisplatin (7). We preliminarily found six miRNAs in common by overlapping mAPP CLIP-seq data with circRNA-binding miRNA data in the StarBase database under tight screening criteria (21). After screening the expression correlation between candidate miRNAs and mAPP, we found that only miR-186-5p was significantly negatively correlated with the expression of mAPP. In addition, a biotinylated RNA pull-down assay and an RNA FISH assay further validated and confirmed the interaction between circAPP and miR-186-5p in BCa. Furthermore, the expression of circAPP was significantly inversely correlated with the expression of miR-186-5p in BCa tissues. Therefore, circAPP might serve as a sponge for miR-186-5p and thus perform a series of functions.
Due to their high conservation and broad expression, covalently closed loop structures, stability, and tissue-specific features, previous investigations indicate that circRNAs hold promising and considerable potential for use as diagnostic and prognostic biomarkers in cancers (8,16). In BCa, apart from tissue detection, circRNAs can also be detected in blood and urine. Here, we detected an upregulated trend of circAPP in cancer patients' blood compared with that of healthy individuals. Also, circAPP can discriminate poor-survival patients among BCa patients, indicating its prognostic function in the clinic, although further exploration in other external cohorts is clearly warranted. However, there are still several limitations in this study. Firstly, we did not examine the functions of circAPP in in vivo experiments; further investigation needs to be completed. At the same time, the upstream regulatory mechanisms of circAPP were not investigated. Furthermore, the mechanism by which circAPP regulates the progression of BCa has not been comprehensively explored; perhaps circAPP also exerts its functions through RNA-binding proteins, transcription or splicing regulation, or protein translation. These issues should be pursued in subsequent studies.
Conclusion:
In summary, our research indicates that circAPP is highly expressed in BCa tissues and acts as an oncogene in the development and progression of BCa. Mechanistically, circAPP promotes metastasis of BCa by adsorbing miR-186-5p and in turn increasing APP expression. This circAPP/miR-186-5p/APP axis provides novel insights and strategies for BCa.
Declarations
Ethics approval and consent to participate: The study was approved by the Regional Ethics Committee at The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University. Consent for publication: Consent was obtained from patients. Availability of data and materials: Yes.
Figure 1. Identification and characterization of circAPP in bladder cancer (upregulation in microarray data and in 40 paired tissues; origin from exons 12 and 13 of APP; RNase R resistance; cytoplasmic localization; Fig. 1A-G).
Figure 2. CircAPP promotes the aggressiveness of BCa cells (overexpression/knockdown efficiencies, Fig. 2A-B; transwell, Matrigel and wound-healing assays, Fig. 2C-E).
Figure 3. CircAPP and mAPP act as sponges for miR-186-5p (candidate miRNA overlap, Fig. 3A; expression correlation, Fig. 3B; putative binding sites, Fig. 3C; Ago2 RIP, Fig. 3D; RNA pull-down, Fig. 3E-F). Biotin-coupled miR-186-5p was used to confirm the interaction (Fig. 3G).
Figure 5. The expression of circAPP is positively correlated with mAPP, which is upregulated in BCa tissues (Fig. 5A-B); higher miR-186-5p predicts better, and higher mAPP worse, overall survival in TCGA (Fig. 5C-D); ISH confirms the positive association between APP and circAPP expression (Fig. 5E).
Figure 6. Schematic summary: circAPP increases the expression of its host gene APP by competing for endogenous miR-186-5p, supporting its potential as a prognostic biomarker and therapeutic target in BCa.
The Impact of Environmental Stimuli on Hotel Service Employees' Service Sabotage—Mediation Role of Emotional Intelligence and Emotional Dissonance
While scholarly inquiries into Service Sabotage (SS) have received ample attention in the literature of various industries, the role of Emotional Intelligence (EI) and Emotional Dissonance (ED) in employee-customer relations in the context of Environmental Stimuli (ES) in the tourism accommodation sector has remained unexplored. The role of employee-customer interaction in tourism is paramount for a hospitality organization's growth, sustainability, and profitability. We hypothesized that hotel service employees' EI and ED can be influential factors in SS. Adopting the Mehrabian-Russell model (M-R) and the Stimulus-Organism-Response (S-O-R) framework as conceptual paradigms, we tested the effect of hotel ambiance on employees' emotions, which can have significant effects on SS. The study revealed that ES links to behaviors and elicits EI and ED as human emotional responses to environments, which have a parallel mediating effect on mitigating or neutralizing the negative effect of SS in an organization. The findings provide important insights into an organization's awareness of the provision of ES as a positive factor for employees, subsequently forming the behavioral consequences of EI and ED, which can mitigate the negative impacts of SS. The study yields important implications for how hospitality organizations should pay attention to the impact of rule-breaking behaviors. Theoretical and practical implications are also discussed.
Introduction
In the globally competitive hospitality and tourism sector, it is important to investigate and learn the emotional/behavioral responses of service employees in relation to working ambiance, with the aim of enhancing loyalty/satisfaction, sustainability, and profitability in the organization [1-5]. This is particularly true in the labor-intensive tourism and hospitality sector, where the service encounter is fundamental. Service employees play a pivotal role as they have constant service encounters with customers [6,7]. "As part of their daily work, hospitality employees need to interact with others, essentially the customers. During these interactions, the employees have to perform emotional labor, which is referred to as the management of feeling to create a publicly observable facial and bodily display" (as cited in Xu et al., 2020 [8], p. 1). One of the core aspects of organizational psychology revolves around the nature of the interaction between employees and customers, which has drawn the attention of policymakers to the sustenance of employee emotional well-being [3,9]. There is an ample number of studies on environmental stimuli (i.e., atmospherics/ambiance) and consumer emotions and behavior [10-14]; however, there are limited empirical studies exploring the effects of workplace attributes on the emotions and behavior of service employees in the tourism and hospitality sector, specifically in hotels. To the authors' best knowledge, this is the first study that tests the effects of environmental stimuli/ambiance (i.e., air quality, temperature, humidity/dryness, aroma/scent, background music, noise, etc.) on employee emotional behavior (i.e., emotional intelligence and emotional dissonance) and the consequential implications for service sabotage. 'Service sabotage occurs when a customer-contact employee knowingly acts in a manner that disrupts an otherwise satisfactory service encounter' [7] (p. 326).
The assumption is that service employees are susceptible to ambiance attributes that can trigger emotional intelligence (positive behavior) or emotional dissonance (negative behavior) [1,15]. Kwak et al. (2018) wrote that 'emotional dissonance is considered the negative product of emotional labor' [1] (p. 228). Schreuder et al. (2016) observed that 'how we perceive our environment affects the way we feel and behave' [11] (p. 1), and that 'the impressions of our ambient environment are influenced by its entire spectrum of physical characteristics (e.g., luminosity, sound, scents, [and] temperature) in a dynamic and interactive way'. Service employees who are in constant service encounters in the tourism and hospitality sector can view their work as repetitive, unfulfilling, monotonous, and often mind-numbingly boring, and are situated in a low-paid sector with long working hours [7,18]. Humborstad et al. (2007) argue that it is the responsibility of managers to provide an organizational environment, as part of the internal cultural effort toward a favorable working ambiance, that cultivates commitment instead of inadvertently facilitating service sabotage [18]. This study therefore draws on Mehrabian and Russell's Environmental Psychology model [19], as well as the Stimulus-Organism-Response (S-O-R) framework [14,20,21], to explain the effects of environmental variables on service employees' emotional states and ultimately their behavior towards customers, with implications for service sabotage. Notwithstanding the substantial amount of studies on the impact of the physical environment on human psychology and behavior, previous research has been limited to particular elements of the physical environment (e.g., lighting and music); the combined effect of several physical environmental elements in the tourism and hospitality sector remains under-researched. Steg et al. (2014) elaborated on this gap, not specifically in tourism but in analyzing environmental behavior: 'the strength of normative goals depends on individual factors (in particular biospheric values), as well as situational factors (that is, situational cues that activate or deactivate different types of values) that are generally overlooked in environmental behavior research' [20] (p. 105). The main objectives of the current study are (i) to adapt the Mehrabian-Russell models (i.e., M-R and S-O-R) to a hotel ambiance context (i.e., service employees' working area and reception counter); (ii) to investigate the impact of ambiance characteristics (e.g., air quality, temperature, humidity, odor, music, etc.) on emotional behavior (emotional intelligence and emotional dissonance); and (iii) to examine the effect of emotional behavior on service sabotage.
Conceptual Framework
The relationship between the physical environment (i.e., servicescape) [5,21] and service providers' (i.e., service employees') response behavior during a service encounter can be analyzed and explained in the context of environmental psychology as elaborated and theorized by the Mehrabian-Russell model [19]. In the context of environmental psychology, the Mehrabian-Russell model has become an epistemological platform to investigate and analyze the physical environmental impact on people. 'Mehrabian and Russell introduced pleasure, arousal and dominance as three independent emotional dimensions to describe people's state of feeling' [22] (p. 406) (see also Figure 1).
The Mehrabian-Russell model has been utilized in various studies, mainly in marketing and consumer behavior research [12,14,23,24]. Morrison et al. (2011) found that their 'conceptualization predicts that in a retail fashion store focusing on the female youth market loud music and a pleasant (vanilla) aroma will significantly and independently impact shoppers' pleasure and arousal' [14] (p. 562). Mehrabian and Russell described pleasure purely in terms of positive or negative feelings [19]. Service employees' feelings and their response behavior during the service encounter can be contextualized in the environmental characteristics of the working environment. For instance, 'luminosity of light sources, the nature and level of ambient noise and acoustics, the presence of specific odors, color hues, and shades, and materials and atmospheric factors such as temperature and humidity, all generate sensory input, and combined contribute to specific reactions in the observer' (as cited in Schreuder et al., 2016 [11], p. 2). Mehrabian and Russell (1974) also introduced the Stimulus-Organism-Response (S-O-R) model, which was adjusted by Bitner (1992) and Lin (2004) under the servicescape framework [19,25,26]. 'In this model, the environmental stimuli (S) first evoke an emotional response in individuals (O), which, in turn, potentially elicits either approach or avoidance behavior (R)' [11] (p. 3). Jani and Han (2015) applied this model to examine hotel ambiance and its impact on guests and how it can affect loyalty through the mediating effect of consumption emotions (i.e., positive or negative) [12]. Nevertheless, 'the relevance of emotional variables is already supported in the Stimulus-Organism-Response (S-O-R) paradigm and the atmosphere and servicescape models' (as cited in Errajaa et al., 2018 [27], p. 102). This has been highlighted by the Mehrabian-Russell models [19], which suggest that environmental stimuli (ambient characteristics) can result in internal reactions (e.g., emotional intelligence and emotional dissonance) and thus, in turn, induce a response behavior/reaction (e.g., pleasant or unpleasant) with consequences towards avoiding service sabotage or acting upon it. Our study is an attempt to fill this gap in the knowledge by integrating hotel ambiance and service employee response to service sabotage, contextualizing the mediating effects of emotional intelligence and emotional dissonance. Notwithstanding the volume of research regarding emotional exhaustion, the impact of the workplace environment on employee emotional behavior and its influence on service sabotage as well as job performance remain unexplored [28]. The review of Emotional Intelligence (EI) and Emotional Dissonance (ED) and their mediating effect in relation to ambient characteristics, especially as pertinent to service employees in the tourism sector, remains scant. However, Bitner (1992), in her study on the impact of physical surroundings on consumers and employees, makes brief reference to employees, but with a focus on consumers [23]. For the conceptual model, see Figure 2.
Ambient/Ambiance
Ambient or the physical environment refers to 'background conditions that exist below the level of our immediate awareness' (e.g., air quality, temperature, humidity, ventilation, noise, architecture, color, odor, texture, functionality, etc.) [24] (p. 150).
Bitner (1992) refers to the ambient as the 'servicescape', defined as 'the overall or total construct of environmental dimensions, rather than being a single component' [23]. The components of the servicescape have been classified into three dimensions: ambient conditions; space/function; and signs, symbols and artifacts (as cited in Durna et al., 2015 [25], pp. 1730-1731). Even though there is a degree of overlap between ambient, ambiance, and servicescape, ambiance refers to feelings of pleasure, stimulation, and immersion when one is embedded in a physical environment/ambient [19,23]. Thus, 'servicescape can therefore be used as a tool for making experience evaluations of customers [employees] easier' (as cited in Durna et al., 2015 [25], p. 1729). Lee (2011) furthermore emphasizes the necessity for the hotel industry to use different components of the servicescape, such as ambiance, service, convenience, decor, and design, to be competitive within the market [26]. Bitner (1992) further noted how ambient conditions include background characteristics of the working environment (e.g., in a hotel) and can be affected by temperature, lighting, noise, music, scent, spatial layout, and equipment, with their functionality having the ability to facilitate performance [23]. Numerous studies have focused on employees as one aspect of the service provision environment (e.g., restaurant, hotel, supermarket, department store, etc.) in relation to employee presentation and interaction with customers [29-32]. Some authors have also raised the issue of 'emotion' mildly, mainly in relation to patron and customer behavioral responses to ambiance characteristics [33-35]. However, employees' emotional intelligence and emotional dissonance as affected by ambiance characteristics, with the consequential behavior of service sabotage, remain unexplored.
Emotional Intelligence
'Emotional intelligence is a type of social intelligence that involves the ability to monitor one's own and others' emotions, to discriminate among them, and to use the information to guide one's thinking and actions' [36] (p. 433). Lechner and Paul (2017) studied service employees' emotion authenticity as well as the variability of displayed emotion from the customers' point of view [37]. They highlighted that the 'display of positive emotions in service employee-customer interactions is key to satisfactory service delivery in many service industries' [37] (p. 195). Cheung and Tang confirmed the quality of work-life and workplace as an important mediator affecting emotion [38]. Grandey (2000) and Gross (1998) realized that employees need to experience pleasant internal feelings as psychological mechanisms in the workplace, e.g., smile-congruence with emotional intelligence, something that organizations desire as an emotional expression in the workplace [33,39]. However, organizations may have overlooked the effect of workplace ambiance, which may have different emotional consequences (i.e., emotional dissonance) and eventually lead to service sabotage [3]. Moreover, the effect of ambiance characteristics on the variability of service employees has been poorly understood. Mayer et al.'s (2011) classification of emotional intelligence established a reasonable argument for the role of emotional intelligence as the behavioral state of a person's action in the context of environmental psychology [19,40,41] (see also Table 1).
For instance, regarding Assimilating Emotion in Thought, Mayer et al. (2011) claimed that 'using emotions to prioritize thinking in productive ways' can be shaped by the ambient or workplace and the associated characteristics of that space (e.g., regarding the service employees in this case) [34] (p. 532). Bitner (1992) categorizes the service environment (where service employees are embedded) 'into ambient conditions, space or function, and sign/symbols/artifacts as the three main dimensions [23]. Ambient conditions include temperature, lighting, noise, music, and scent, [which] are those that have an effect on the five sense organs, while space refers to the arrangement of facilities in the service environment in a particular order for the attainment of a particular function' (as cited in Jani and Han, 2015 [12], p. 49).

Table 1. Classification of emotional intelligence (branches: Assimilating Emotion in Thought; Understanding and Analyzing Emotion; Reflective Regulation of Emotion). Representative abilities include:
• Identifying and expressing emotions in one's physical states, feelings, and thoughts.
• Identifying and expressing emotions in other people, artwork, language, etc.
• Using emotions to prioritize thinking in productive ways.
• Generating emotions as aids to judgment and memory.

Emotional Dissonance
In contrast to emotional intelligence, emotional dissonance carries a negative connotation with disturbing consequences in the context of environmental psychology. Emotional dissonance is 'defined as the extent to which felt emotion differs from the emotion that should be expressed as required by display rules' [35] (p. 64). However, emotional dissonance can cause behavior that does not follow the rules required of employees. For example, a service employee in a hotel may develop emotional dissonance due to ambiance characteristics and not act in a friendly manner with tourists during a service encounter. This is because emotional dissonance (as a mediator) [43] is a stressor that debilitates the effective performance of the task and as such can become a threat to organizational reputation [44]. Zapf et al. (1999) argued that emotional dissonance is a stressor that impairs the effective fulfillment of the task and as such can become a threat to employee well-being [45]. Mehrabian and Russell (1974), in their discourse on emotion, operationalized pleasure purely in terms of positive or negative feelings [19]. However, two decades later, Mehrabian (1996) 'operationalized pleasure in a rather different way and used connotations such as excitement, relaxation, love, and tranquility versus cruelty, humiliation, disinterest and boredom' (as cited in Bakker et al., 2014 [22], p. 410). In the context of Mehrabian's (1996) operationalization of pleasure and displeasure (Figure 1) as well as the S-O-R model [23,46], the response behavior of service employees can engender emotional dissonance (i.e., cruelty, humiliation, disinterest, and boredom), and consequently service sabotage. Moreover, the path towards such a response is linked to the servicescape or ambient and its associated characteristics. Emotional dissonance, of which service sabotage is a manifestation, is indirectly explained by the theory of Conservation of Resources (COR) [3,40]. Based on COR, it is plausible for a situation (e.g., characteristics of the ambient/servicescape) to be emotionally charged (emotional dissonance or physical depletion), which can undermine employee work ethic and value (authenticity).
Lee and Ok (2014) elaborated that such employees: 'May feel emotional and physical depletion, a lack of energy and extreme tiredness, even feelings too drained of emotional resources to cope with continuing demands. This may lead to poor self-esteem and self-efficacy, and they begin to feel less competent and less successful, reducing their sense of personal accomplishment, causing them to evaluate themselves negatively in productivity' [3] (p. 178). The assumption is that the servicescape or ambient can affect the expression of emotion in one's physical states, feelings, and thoughts (Table 1) [45], with negative implications including service sabotage. Similarly, research on emotion work has identified several person- and work-related antecedents of emotional dissonance (or surface acting). With respect to service sabotage, 'surface acting implies a state of emotional dissonance' [35] (p. 64). It is plausible to argue that emotional dissonance, as a mediator for service sabotage in the context of the situational setting (ambient), is conceptualized 'as a clash between 'real' and 'false' emotion predicated on an authentic self that is transmuted in organizational settings' [41] (p. 1530). This is in line with Grandey's (2000) assertion that 'the job environment or a particular work event may induce an emotion response in the employee (e.g., anger, sadness, anxiety), and behaviors may follow that would be inappropriate for the encounter (e.g., verbal attack, crying, complaining)' [33] (p. 99).

Service Sabotage

Dysfunctional tendencies of employees are not limited to the service sector of tourism; they have been witnessed in the past and in various other sectors. In studies of the hospitality and tourism sectors, Lee and Ok (2014) cited that 'more than 85% of customer-contact employees reported having engaged in some forms of service sabotage, and 100% of the service employees in one study reported that service sabotage occurs every day in their workplace' [3] (p. 178). Along with the Mehrabian-Russell theory of S-O-R, COR [40] also offers an alternative explanation for the emotional response with implications for service sabotage as an individual appraisal or specific environmental approaches to stress [47]. The recent revision of COR 'unfolds its development, focusing on the individual within the context of work [ambient setting] as one of the most significant arenas' [47] (p. 170). One should bear in mind that 'ambient' and its components contain dimensions of function, impact, and interaction. In a hotel setting, this has implications for users/customers and service encounters. Durna et al. (2015) found that 'ambient conditions are composed of temperature, quality of air, voice, music, smell, etc., whereas space/function is composed of design, decoration, and business equipment. Signs, symbols, and artifacts are used in a physical environment with the aim of transmitting what is necessary for users or for enabling communication' [25] (p. 1731). Service sabotage as a behavioral/emotional dissonance response to the effect of the ambient/servicescape is contextualized by Trigg (2006) [48]. He argued 'that context does have an influence on the distinction between disinterestedness and interestedness and that the hotel lobby is an excellent illustration of a spatial context that facilitates disinterested delight for the reason that it is largely impersonal, indifferent, and so universal' [48] (p. 418).
We assume that the ambiance elements of servicescape are not only of strategic importance for the image of high-quality service in the hotel sector, but also evoke emotional responses from employees [25]. Bitner (1992) applied the S-O-R paradigm and employed the term servicescape in reference to the atmospheric description of service settings to understand the impacts of physical surroundings on employees and customers [23]. In concordance with Bitner (1992), Turley and Milliman (2000) also discussed the organism and emotional responses of both customers and employees, with an implication for customers in terms of the affective image of servicescape, and for employee behavior in terms of service sabotage [23,32].

Proposing Hypotheses

Based on the aforementioned literature and constructs, the following hypotheses are proposed:
Hypothesis 1a (H1a). Ambiance/servicescape positively evokes emotional intelligence.
Hypothesis 1b (H1b). Emotional intelligence is negatively related to service sabotage.
Hypothesis 2a (H2a). Ambiance/servicescape is negatively related to emotional dissonance.
Hypothesis 2b (H2b). Emotional dissonance is positively related to service sabotage.
In sum, by integrating all the hypotheses, we proposed a parallel mediation of EI and ED in the relationship between ambiance and SS:
Hypothesis 3 (H3). Emotional intelligence and emotional dissonance mediate the relationship between ambiance/servicescape and service sabotage in a parallel manner.

Sampling and Survey Procedures

The respondents of this research were service employees who are continuously in contact with customers. The target population was employed by five- and four-star hotels in north Cyprus. A total of 6 five-star and 3 four-star hotels were selected from 4 locations, which are classified as tourist hotspots. For the location of the hotels, see Figure 3. Prior to the distribution of the questionnaires, the general managers of the hotels were contacted and the purpose of the study explained. After receiving their consent, questionnaires were delivered. In the meantime, we explained and requested that the questionnaires be completed on a timely basis, meaning that respondents should have enough time to focus rather than rush through the process [49]. Employees who were not in regular contact with customers were excluded. Overall, 400 survey questionnaires were distributed, of which 378 valid questionnaires were returned (90.0% response rate). The drop-off/pick-up method for survey research was used, whereby questionnaires were delivered by hand to the managers in each hotel to be distributed. A pretest/pilot study was conducted to identify items that respondents might have difficulty understanding or might interpret differently from what the researcher intended [50]. For the purpose of this study, judgmental/purposive sampling was applied, which is a widely used method in the field of organization studies [51]. Data collection was accomplished during the summer of 2019, prior to the pandemic outbreak. For the demographic characteristics of the respondents, see Table 2.

Measurement Instrument

All measurement instruments were chosen from previous studies relevant to ambiance and emotional behavior (e.g., [14,32,47,52]). Each item was scored on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The reliability test in the study demonstrated that these measurements provided adequate levels of internal consistency, as alpha values were above the cut-off point of 0.70 [53].
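As a side note on the reliability check, Cronbach's alpha can be computed directly from item-level responses. The following is a minimal Python sketch (not the authors' analysis code; the simulated responses and the 6-item scale are illustrative assumptions):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses for a 6-item scale, 378 respondents
rng = np.random.default_rng(0)
latent = rng.normal(size=(378, 1))              # shared construct
scores = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(378, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}  (cut-off for adequacy: 0.70)")
```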
Turley and Milliman (2000) found that there have been ample studies and evidence about the impacts of the service environment on consumer behavior, especially in restaurants and hotels within the hospitality industry [39,52,54]. However, there has been limited research from the perspective of employee behavior; most studies on this topic have focused on marketing and customer behavior, with few examining service employees [55]. In this study, we adopted 6 items for measuring hotel ambiance from previous research [14]. The ambiance score of each hotel was rated by service employees. All of these statements were measured on a five-point Likert scale. The Cronbach's alpha for ambiance was 0.904. The results of the coefficient alpha scale items are listed in Table 3. The 16-item Wong and Law Emotional Intelligence Scale (WLEIS) [56] was applied; it is conceptualized following Mayer and Salovey (1997) and includes four components of EI (i.e., self-emotion appraisal, other-emotion appraisal, use of emotion, and regulation of emotion) [45]. EI was rated by service employees. The coefficient alpha for the emotional intelligence scale in this study was 0.929. Emotional dissonance was rated by service employees using 9 items drawn from Chu and Murrmann's (2006) Hospitality Emotional Labor Scale (HELS) [57]. The coefficient alpha for emotional dissonance in this study was 0.819. The HELS evaluates the emotional labor of hospitality employee performance during the service encounter. Sample items include: "The emotions I show to customers match what I truly feel" and "I fake a good mood when interacting with customers". For the Service Sabotage (SS) construct, nine items derived from the work of Harris and Ogbonna (2006) were used [58]. The service sabotage scale was rated by customer-contact employees. In this study, service employees were asked to indicate the extent to which they agreed with each statement about their service sabotage related behaviors. The Cronbach's alpha for service sabotage was 0.917.

Control Variables

To rule out alternative explanations for the findings, previous studies indicated that demographic features such as age, gender, organizational tenure, and organizational experience are linked to ED, EI, and SS [52,59,60]. Therefore, demographic characteristics were used as control variables in all analyses to guarantee that the relationships among variables are not confounded.

Data Analysis and Results

Prior to the estimations, case and variable screening were checked and no missing data were observed. The normality of the data set was checked, and the skewness and kurtosis of the variables were within the recommended range of ±3.3 as the upper threshold for normality, indicating a relatively normal distribution of the data [61]. In order to confirm convergent validity, a Confirmatory Factor Analysis (CFA) was performed via AMOS 24.0 [54], which provides various fit statistics for the assessment of the measurement model in CFA. Cronbach's alpha coefficient, composite reliability, and the Heterotrait-Monotrait Ratio of Correlations (HTMT) were measured to confirm the reliability of the variables. HTMT, a newer criterion for assessing discriminant validity, was used instead of the average variance extracted, following the suggestion of Voorhees et al. (2016) [62].
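For reference, the HTMT criterion is the ratio of the average between-construct (heterotrait) item correlations to the geometric mean of the average within-construct (monotrait) item correlations. A minimal Python sketch (illustrative only; the item indices are hypothetical placeholders):

```python
import numpy as np

def htmt(R, items_a, items_b):
    """Heterotrait-Monotrait ratio for two constructs.

    R: item-level correlation matrix; items_a / items_b: index lists of the
    items measuring each construct. Values well below common cut-offs such
    as 0.85 are usually taken as evidence of discriminant validity."""
    hetero = R[np.ix_(items_a, items_b)].mean()
    mono_a = R[np.ix_(items_a, items_a)][np.triu_indices(len(items_a), 1)].mean()
    mono_b = R[np.ix_(items_b, items_b)][np.triu_indices(len(items_b), 1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Usage on survey data: R = np.corrcoef(item_scores, rowvar=False)
```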
In order to analyze the parallel mediation effect, the PROCESS macro (model 4, v3.5) for SPSS v25 was employed, using 5000 bootstrap samples and 95% confidence intervals [63]. An exploratory factor analysis was performed to assess how much variance in the study's variables can be explained by a single factor. If a single factor emerges, or one general factor accounts for most of the covariance in the independent and dependent variables, a significant Common Method Variance (CMV) is present [64]. All four variables were entered into the exploratory factor analysis, using the extraction method of maximum likelihood with the rotation method of Promax with Kaiser normalization, in order to determine the number of factors needed to account for the variance in the variables. The factor analysis revealed several distinct factors with eigenvalues greater than 1.0, rather than a single factor. The factors together accounted for 51.53% of the total variance. Consequently, the results indicate that the potential for common method bias is minimal. Table 4 exhibits the results for the means, standard deviations, and correlations of the study and control variables. Gender, age, organizational tenure, and organizational experience, as control variables, are not significantly correlated with the dependent variables. Ambiance is positively and significantly related to Emotional Intelligence (r = 0.543, p < 0.001), which means H1a is supported. Emotional Intelligence is negatively and significantly related to service sabotage (r = −0.280, p < 0.001), which shows H1b is supported. Ambiance was found to be positively and significantly related to emotional dissonance (r = 0.117, p < 0.1). H2a proposed that ambiance is negatively related to ED, and the results showed that H2a is partially supported. ED is positively related to Service Sabotage (r = 0.345, p < 0.001). Therefore, H2b is also supported. Note: Diagonal elements in italic are the means; upper diagonal elements in bold are HTMT ratios; lower diagonal elements are correlations; † p < 0.100; * p < 0.050; ** p < 0.010; *** p < 0.001 (2-tailed); SD: Standard Deviation, HTMT = Heterotrait-Monotrait Ratio of Correlations. Table 5 summarizes the unstandardized coefficients for all variables. The results demonstrated that all the relations in these analyses were significant, and accordingly the parallel mediation conditions were supported. Specifically, hypotheses 1a and 1b proposed that ambiance is associated with service sabotage via EI. The significant relation of ambiance with EI (B = 0.455, p < 0.001) and of EI with service sabotage (B = −0.260, p < 0.050) indicates that EI significantly mediates this relation. Accordingly, hypotheses 1a and 1b were supported. Hypotheses 2a and 2b predict that the relation between ambiance and service sabotage is mediated by ED. The significant relation of ambiance with ED (B = 0.283, p < 0.001) and of ED with service sabotage (B = −0.426, p < 0.001) indicates that ED significantly mediates this relation. Although there is evidence of a significant mediator role in hypothesis 2a, it was demonstrated that ambiance is negatively associated with ED, while the result of Table 4 suggests that ambiance is positively related to ED. Thus, hypothesis 2a is partially supported. All control variables were insignificant except age in the relationship between ambiance and ED (B = −0.206, p < 0.050).
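The PROCESS model 4 analysis above can be mirrored with a percentile bootstrap of the a*b indirect effects. The sketch below is a simplified Python stand-in for the SPSS macro: control variables are omitted, and the simulated coefficients only loosely echo the reported values.

```python
import numpy as np

def indirect_effects(X, M1, M2, Y):
    """a*b indirect effects of X on Y through two parallel mediators."""
    a1 = np.polyfit(X, M1, 1)[0]                 # X -> M1
    a2 = np.polyfit(X, M2, 1)[0]                 # X -> M2
    D = np.column_stack([np.ones_like(X), X, M1, M2])
    b = np.linalg.lstsq(D, Y, rcond=None)[0]     # Y ~ 1 + X + M1 + M2
    return a1 * b[2], a2 * b[3]

rng = np.random.default_rng(1)
n = 378
X = rng.normal(size=n)                           # ambiance
M1 = 0.46 * X + rng.normal(size=n)               # emotional intelligence
M2 = 0.28 * X + rng.normal(size=n)               # emotional dissonance
Y = -0.26 * M1 + 0.43 * M2 - 0.10 * X + rng.normal(size=n)   # service sabotage

boot = []
for _ in range(5000):                            # percentile bootstrap, as in PROCESS
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effects(X[idx], M1[idx], M2[idx], Y[idx]))
boot = np.asarray(boot)
for name, col in zip(["via EI", "via ED"], boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"indirect effect {name}: 95% CI [{lo:.3f}, {hi:.3f}]")
```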
Hypothesis Test

The results indicate that, beyond the control variables (age, gender, tenure, and experience), ambiance is significant as an independent predictor of service sabotage (B = −0.381, p < 0.001). The significant mediating effects of both EI (B = −0.119, CI: −0.201, −0.044) and ED (B = 0.121, CI: 0.066, 0.181) reveal that the parallel mediation of EI and ED is significantly supported. Thus, hypothesis 3 is supported. The total effect, i.e., the sum of all indirect effects and the direct effect, is estimated by regressing SS on ambiance alone. The total effect was statistically significant (B = −0.379, p < 0.001). Therefore, employees reported lower service sabotage even when ambiance's indirect effect through both emotional intelligence and emotional dissonance was taken into account. Again, this indicates the parallel mediation role of the mediators in this relationship. Although we did not hypothesize the direct effect of ambiance on SS, the PROCESS model tests the null hypothesis that the direct effect equals zero. According to the main findings, the model is revised as shown in Figure 4.

Discussion and Conclusions

The current study tries to clarify how the environmental stimuli and emotional labor of service employees can affect service sabotage, and how EI and ED can mediate this relation. According to the M-R model and the S-O-R framework, we hypothesized that: (a) EI and ED are important sources of SS, (b) ED mediates the relation of ambiance and Service Sabotage negatively, and (c) Emotional Intelligence also mediates the relation of Ambiance and Service Sabotage, but positively. To examine the hypotheses, we analyzed the effect of ambiance on ED and EI, the effect of these two on SS, and finally the effect of ambiance on SS through the two mediators. The results of our tests showed that ambiance positively affects ED and EI, which means that environmental stimuli readily influence the psychological and personal factors of hospitality employees during their organizational experience. These findings validate the claims of numerous scholars (e.g., [11,[65][66][67]) who are in accord with Schreuder et al.'s (2016) assertion that: 'Environmental characteristics such as luminosity of light sources, the nature and level of ambient noise and acoustics, the presence of specific odors, color hues and shades, and materials and atmospheric factors such as temperature and humidity, all generate sensory input, and combined contribute to specific reactions in the observer [e.g., service employees]' (as cited in Schreuder et al., 2016 [11] (p. 1)). As predicted, ED is positively related to service sabotage, while EI reduces service sabotage. This is also in concordance with Lee and Ok's (2014) study, which applied COR theory to the topic of SS and hospitality industry employees' interaction with tourists, with implications for service quality [3]. The study has also revealed a negative association between EI and service sabotage: predicated on the characteristics of the servicescape, EI can help employees realize and regulate their own and others' emotions, with less engagement in SS. It was also determined that staff with high EI are more engaged in an effortful process through which employees change their internal feelings to align with organizational expectations, producing more natural and genuine emotional displays (deep acting), unlike ED, which is more likely to involve faking the required emotions (surface acting) [68].
The mediating role of EI and ED in service sabotage emanating from the servicescape is in line with [69], in which 'trait EI was found to be associated with less mood deterioration and less emotional reactivity (emotional intensity, action tendencies, and bodily sensations) following a laboratory stressor' [i.e., a reduced tendency towards service sabotage] [69] (p. 107). The results of the study revealed that hotel service employees with high EI can save emotional resources to use for appropriate emotional labor, reducing negative environmental stimuli, and thus do not feel frustration as much as others, which might otherwise lead to service sabotage. This also underscores the significance of servicescape/ambiance characteristics for emotional labor outcomes. The results also validate that the workplace (servicescape) should be perceived as a rational environment, where emotions/behaviors can be affected by its attributes. As Grandey (2000) articulated, 'the situation acts as a cue to the individual, and the individual's emotional response tendency (physiological, behavioral, cognitive) provides information to that individual and the others in the social environment' (servicescape) [33] (p. 98). Consequently, employees reported lower service sabotage through the parallel mediating effect of both emotional intelligence and emotional dissonance in the context of the ambiance-service sabotage relationship. In this study, the results showed that ambiance had a significant effect on reducing SS. Although we did not hypothesize this effect, the M-R model and S-O-R framework can readily explain this relationship: ambiance, as an independent source of emotions, can mitigate employee SS behavior. This is further evidence for the important role of ambiance in reducing employee SS, directly and indirectly, through the mediators of EI and ED. In relation to previous studies, this study is in line with Härtel et al. (2005), who studied the organizational environment and emotion in the workplace and asserted that the organizational environment (including ambiance characteristics) could induce positive emotions and reduce negative emotions [70]. Furthermore, this study complements the findings of So et al. (2020), who stated that one's emotional state is an important outcome of environmental stimuli (e.g., ambiance characteristics) in the context of the S-O-R model, which can also result in responses such as emotional behavior [15]. This study is also in agreement with the findings of Lee and Ok (2014), who revealed that hospitality employees who experience emotional discrepancy are more likely to engage in SS [3]. They focused on the effect of burnout; however, burnout is not the only stimulus that causes the depletion of psychological energy and emotional reaction. This study is also in concordance with Bitner's (1992) classic work, which categorized the 'service environment into ambient conditions, space or function, and signs/symbols/artifacts as the three dimensions' (Jani and Han, 2015, p. 49) that influence positive or negative emotional responses [12,23]. This study is also in line with Liu and Perrewé (2005), who studied Counterproductive Work Behavior (CWB) in the context of the psycho-evolutionary theory of emotion, and indicated that 'a healthy work environment that allows for a wider range of emotional expression is very important both for the employee and the organization' [71].
Liu and Perrewé (2005) reiterated that employees can become dissatisfied because of a noisy servicescape and a lack of resources, which can demotivate and possibly result in CWB such as aggression, hostility, sabotage, and theft [71].

Theoretical Implications

This study, by using the M-R and S-O-R models, explains the effect of service ambiance components [12,72] as a stimulus on emotional labor and SS. In this regard, it can be stated that examining the servicescape and environmental stimuli has contributed to an understanding of the components (e.g., air quality, temperature, humidity, odor, music, color, shades, ventilation, noise, architecture, texture, functionality, etc.) of the working environment and their impact on emotional behavior, with consequences for SS. In addition, it contributes to both practitioners and the literature by providing various suggestions to hotel managers and researchers. It also provides the theoretical reasoning and empirical testing of how ambiance affects emotional labor and how emotional labor extends to SS, which theoretically affirms the utility of the extended M-R and S-O-R models, with the inclusion of EI and ED as mediators, within the tourism and hospitality industry. Previous studies have focused on personal traits and on-the-job attitudes (e.g., [12,43,73,74]) and organizational injustice [75]. Others have focused on emotional responses (e.g., Jani and Han, 2013; Harris and Ogbonna, 2006), customer mistreatment of employees [76], service sabotage [35], and the influence of work status on organizational citizenship behavior [77]. However, there has not been any comprehensive study of servicescape/ambiance characteristics and service employee emotional labor (EI and ED) with consequences for SS; research investigating how workplace/ambiance characteristics affect negative service employee emotional behaviors, such as SS, is still lacking. This study explored EI and ED as two critical mediators in the model. This finding enriches the literature on SS and suggests the plausibility of relationships between workplace ambiance characteristics and employee emotional responses in ameliorating SS, which is considered a deviant behavior.

Practical Implications

The practical implications of this study should be acknowledged and emphasized because of the frequent occurrence of SS in various organizations, especially in the tourism and hospitality sector [78]. SS disrupts and degrades the quality and value of service encounters, which is vital for a service sector organization's reputation and sustainability. In terms of practical implications, service sector managers (e.g., of hotels and restaurants) should take from this empirical research that various factors, including workplace ambiance, can play a role in shaping service employees' emotional responses (EI and ED) and either curtail or incite service sabotage. Accommodation sector managers who wish to reduce and control SS behaviors should focus their efforts on the servicescape's impact by arranging meetings and listening to service employees' perceptions regarding the characteristics of the workplace. This way, they can understand service employees' feelings, attitudes, and suggestions. This approach by management can bear fruit in different ways. First, it can generate respect and loyalty between employees and the organization, as employees will consider it a valuation of their labor (i.e., feeling valued) [79].
Managers should also note that SS is not a uniform behavior, meaning that different service employees will apply different ways to sabotage their services during the service encounter. Service sabotage can take different forms with different employees, including Thrill Seekers, Apathetic, Customer Revengers, and Money Grabbers [7,80]. Managers need to be aware that what constitutes sabotage in the workplace is not in isolation from workplace characteristics; therefore, understanding and acknowledging this can contribute to the sustainability of the organization.

Limitations and Pathways for Future Studies

Notwithstanding the study's substantive contributions, it has some limitations. Primarily, this study is correlational; an experimental study could investigate cause-and-effect relationships between SS and each construct (characteristics of the workplace/servicescape/ambiance). This is not only a limitation but also a suggestion for future studies. Furthermore, the analysis and discussion presented herein are based on a relatively small sample (five-star and four-star hotels), and future studies may investigate employees' service sabotage behaviors in other accommodation sectors as well as tourism subsectors, applying a comparative design to draw a holistic picture.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
Generating Derivational Morphology with BERT

Can BERT generate derivationally complex words? We present the first study investigating this question. We find that BERT with a derivational classification layer outperforms an LSTM-based model. Furthermore, our experiments show that the input segmentation crucially impacts BERT's derivational knowledge, both during training and inference.

Introduction

What kind of linguistic knowledge is encoded by the parameters of a pretrained BERT (Devlin et al., 2019) model? This question has attracted a lot of attention in NLP recently, with a focus on syntax (e.g., Goldberg, 2019) and semantics (e.g., Ethayarajh, 2019). It is much less clear what BERT learns about other aspects of language. Here, we present the first study on BERT's knowledge of derivational morphology. Given a cloze sentence such as this jacket is ___ . and a base such as wear, we ask: can BERT generate correct derivatives such as unwearable? The motivation for this study is twofold. On the one hand, we add to the growing body of work investigating BERT's linguistic capabilities. BERT segments words into subword units using a WordPiece tokenizer (Wu et al., 2016), e.g., unwearable is segmented into un, ##wear, ##able. The fact that many of these subword units are derivational affixes suggests that BERT might acquire knowledge about derivational morphology (Table 1), but this has not been tested. On the other hand, we are interested in derivation generation (DG) per se, a task that has so far been addressed only using LSTMs (Vylomova et al., 2017; Deutsch et al., 2018), not models based on Transformers like BERT.

Figure 1: Basic experimental setup. We input sentences such as this jacket is unwearable . to BERT, mask out derivational affixes (this jacket is [MASK] wear [MASK] .), and recover them (un, ##able) using a derivational classification layer (DCL).

Table 1: Examples of derivational affixes in BERT's vocabulary.
Prefixes: anti, hyper, non, pseudo, un
Suffixes: ##able, ##ful, ##ify, ##ness, ##ster

Our contributions are as follows. We show that pretrained BERT overgenerates highly productive affixes and analyze methods to increase its performance. After finetuning, BERT beats an LSTM model. Furthermore, we show that the input segmentation crucially impacts how much derivational knowledge is available to BERT, both during training and inference. We also publish the largest dataset of derivatives in context to date.

Dataset of Derivatives

We base our study on a new dataset of derivatives in context similar in form to the one released by Vylomova et al. (2017), i.e., it is based on sentences with a derivative (e.g., this jacket is unwearable .) that are altered by masking the derivative (this jacket is ___ .). The sentences are accompanied by the base (wear) and the derivative (unwearable). While Vylomova et al. (2017) use Wikipedia, we extract the dataset from Reddit. Since most productively formed derivatives are not part of the language norm initially (Bauer, 2001), social media is a fertile ground for studies on derivational morphology. For determining derivatives, we use the algorithm introduced by Hofmann et al. (2020a), which takes as input a set of prefixes, suffixes, and bases and checks for each word in the data whether it can be derived from a base using a combination of prefixes and suffixes. The algorithm is sensitive to morpho-orthographic rules of English (Plag, 2003), e.g., when ity is removed from applicability, the result is applicable, not applicabil. Here, we use BERT's prefixes, suffixes, and bases as input to the algorithm.
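The WordPiece segmentation behavior described above is easy to inspect with the HuggingFace transformers library. A minimal sketch, assuming the standard bert-base-uncased vocabulary (exact segmentations depend on the vocabulary in use):

```python
from transformers import BertTokenizer

# Standard uncased BERT WordPiece vocabulary
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# The paper's example: a derivative splits into affix and base subword units
print(tokenizer.tokenize("unwearable"))     # expected: ['un', '##wear', '##able']
# WordPiece segmentations are not guaranteed to respect morpheme boundaries
print(tokenizer.tokenize("applicability"))
```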
Drawing upon a representative list of 52 productive prefixes and 49 productive suffixes in English (Crystal, 1997), we find that 48 and 44 of them, respectively, are contained in BERT's vocabulary. We assign all fully alphabetic words with more than 3 characters in BERT's vocabulary except for stopwords and previously identified affixes to the set of bases, yielding a total of 20,259 bases. We then extract every sentence including a word that is derivable from one of the bases using at least one of the prefixes or suffixes from all publicly available Reddit posts. The resulting dataset comprises 413,271 distinct derivatives in 123,809,485 context sentences, making it more than two orders of magnitude larger than the one released by Vylomova et al. (2017).

[2] We draw upon the entire Baumgartner Reddit Corpus, a collection of all public Reddit posts available at https://files.pushshift.io/reddit/comments/.
[3] Due to the large number of prefixes, suffixes, and bases, the dataset can be valuable for any study on derivational morphology, irrespective of whether or not it focuses on DG.

Setup

To examine whether BERT can generate derivationally complex words, we use a cloze test: given a sentence with a masked word such as this jacket is ___ . and a base such as wear, the task is to generate the correct derivative such as unwearable. The cloze setup has been previously used in psycholinguistics to probe derivational morphology (Pierrehumbert, 2006; Apel and Lawrence, 2011) and was introduced to NLP in this context by Vylomova et al. (2017). We frame DG (derivation generation) as an affix classification task, i.e., we predict which affix is most likely to occur in a given context sentence with a given base. A prediction is judged correct if it is the affix in the masked derivative, i.e., we ignore affixes that might generate equally well-formed derivatives. We confine ourselves to three cases: derivatives with one prefix (P), derivatives with one suffix (S), and derivatives with one prefix and one suffix (PS). We use mean reciprocal rank (MRR), macro-averaged over affixes, as the evaluation metric. We extract all derivatives with a frequency f ∈ [1, 128) from the dataset. We divide the derivatives into 7 frequency bins B1, . . . , B7, where bin Bi contains derivatives with f ∈ [2^(i−1), 2^i). For each bin, we randomly split the data into 60% training, 20% development, and 20% test. Following Vylomova et al. (2017), we distinguish two lexicon settings as to whether bases seen during training reappear during test (SHARED) or not (SPLIT). Notice we focus on low-frequency derivatives since BERT is likely to have seen high-frequency derivatives multiple times during pretraining and might be able to predict the affix because it has memorized the connection between the base and the affix, not because it has knowledge of derivational morphology. Since BERT distinguishes word-initial (wear) from word-internal (##wear) tokens, predicting prefixes requires the word-internal form of the base. However, only 795 bases have a word-internal form. We test four strategies for remedy: adding a hyphen between prefix and base in its word-initial form (HYP); simply using the word-initial instead of the word-internal form (INIT); tokenizing the base into word-internal subword units (TOK); training a projection matrix on the bases with both forms to map word-initial to word-internal tokens (PROJ). Despite its simplicity, the first option clearly performs best with pretrained BERT and is adopted for BERT models on P and PS.
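The evaluation metric just introduced, MRR macro-averaged over affixes, can be implemented in a few lines. This is an illustrative sketch (function and variable names are ours), not the authors' evaluation code:

```python
import numpy as np
from collections import defaultdict

def macro_mrr(gold_affixes, ranked_predictions):
    """Mean reciprocal rank, macro-averaged over affix types.

    gold_affixes: gold affix for each test example.
    ranked_predictions: per example, all affixes ranked by predicted probability.
    """
    per_affix = defaultdict(list)
    for gold, ranking in zip(gold_affixes, ranked_predictions):
        rank = ranking.index(gold) + 1            # 1-based rank of the gold affix
        per_affix[gold].append(1.0 / rank)
    # Average within each affix first, then across affixes (macro-averaging)
    return np.mean([np.mean(scores) for scores in per_affix.values()])

gold = ["un", "non", "un"]
preds = [["un", "non", "anti"], ["anti", "non", "un"], ["non", "un", "anti"]]
print(macro_mrr(gold, preds))   # (mean(1, 1/2) + 1/2) / 2 = 0.625
```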
Models

All BERT models use BERT BASE and add a derivational classification layer (DCL) with softmax activation for prediction. We examine three BERT models and two baselines. See Appendix A.2 for details about hyperparameters.
BERT+DCL+: We finetune BERT and DCL on DG. For PS, we also test an ensemble model combining the best P and S models for a given frequency bin by means of beam search (BEAM).
BERT-DCL+: We only train DCL on DG, keeping the model weights of pretrained BERT fixed. This is similar in nature to a probing task.
BERT-DCL-: We use pretrained BERT and leverage its pretrained language modeling head as DCL, filtering for affixes.
LSTM: We use the neural encoder described in Vylomova et al. (2017), which combines the left and right contexts of the masked derivative with a character-level representation of the base form. To allow for a direct comparison with BERT, we do not use the character-based decoder proposed by Vylomova et al. (2017) but instead add a dense layer to perform the prediction. However, for comparability, we evaluate the LSTM and the best BERT-based model on the suffix dataset released by Vylomova et al. (2017) against the reported performance of the encoder-decoder model.
Random Baseline (RB): The prediction is a random ranking of all affixes.

Results are shown in Tables 3 and 4. For P and S, BERT+DCL+ clearly performs best. BERT-DCL- is better than LSTM on SPLIT but worse on SHARED. BERT-DCL+ performs better than BERT-DCL-, even on SPLIT (except for S on B7). S has higher scores than P for all models and frequency bins, which might be due to the fact that suffixes carry information about the part of speech and hence are easier to predict given the syntactic context. Regarding frequency effects, the models benefit from higher frequencies on SHARED since they can connect bases with certain groups of affixes. The results on the dataset released by Vylomova et al. (2017) confirm the superior performance of BERT+DCL+ (Table 5), beating even the LSTM with additional POS information on SHARED (but not on SPLIT). For PS, BERT+DCL+ also performs best in general but is beaten by LSTM on one bin and BEAM on two bins. The smaller performance gap as compared to P and S can be explained by the fact that BERT does not learn statistical dependencies between two masked tokens (Yang et al., 2019).

How does the performance of BERT vary across affixes? Firstly, pretrained BERT (BERT-DCL-) overgenerates several affixes, in particular non, re, er, ly, and y, which are among the most productive affixes in English (Plag, 1999) (see Appendix A.3 for details). To probe this effect more quantitatively, we measure the number of hapaxes formed by means of all affixes in the entire Reddit data, a common measure of morphological productivity (Pierrehumbert and Granell, 2018). This analysis shows a positive correlation: the more productive an affix, the higher its MRR value (Figure 2). Secondly, several affixes seem to be particularly prone to confusion. Examples include semantically very similar affixes (e.g., ify and ize) and affixes denoting points on the same scale, often antonyms (e.g., anti and pro). This can be related to work showing that BERT has difficulties with negated expressions (Kassner and Schütze, 2019).
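The productivity measure just mentioned, the number of hapaxes formed with each affix, can be computed directly from corpus counts and correlated with per-affix MRR. A toy sketch (the corpus and MRR values below are hypothetical):

```python
from collections import Counter
from scipy.stats import pearsonr

def hapax_counts(affix_tokens):
    """Number of hapax legomena (derivative types seen exactly once) per affix,
    a standard corpus measure of morphological productivity."""
    return {a: sum(1 for c in Counter(toks).values() if c == 1)
            for a, toks in affix_tokens.items()}

# Toy corpus: productive affixes form many one-off derivatives
tokens = {
    "un":   ["unwearable", "ungoogleable", "unfriend", "unfriend"],
    "anti": ["antihero", "antiviral", "antipattern"],
    "ster": ["gangster", "gangster", "hipster"],
}
hapax = hapax_counts(tokens)                      # {'un': 2, 'anti': 3, 'ster': 1}
mrr = {"un": 0.62, "anti": 0.55, "ster": 0.31}    # hypothetical MRR values
affixes = sorted(hapax)
r, _ = pearsonr([hapax[a] for a in affixes], [mrr[a] for a in affixes])
print(f"productivity-MRR correlation: r = {r:.2f}")
```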
Impact of Input Segmentation

We have shown that BERT can generate derivatives if it is provided with the morphologically correct segmentation. At the same time, we observed that BERT's WordPiece tokenizations are often morphologically incorrect, an observation that led us to impose the correct segmentation using hyphenation (HYP). We now examine more directly how BERT's derivational knowledge is affected by using the original WordPiece segmentations versus the HYP segmentations. We draw upon the same dataset as for DG but perform binary instead of multi-class classification, i.e., the task is to predict whether, e.g., unwearable is a possible derivative in the context this jacket is ___ . or not. As negative examples, we combine the base of each derivative (e.g., wear) with a randomly chosen affix different from the original affix (e.g., ation) and keep the sentence context unchanged, resulting in a balanced dataset. We only use prefixed derivatives for this experiment. We train binary classifiers using BERT BASE and one of two input segmentations, the morphologically correct segmentation (MP) or BERT's WordPiece tokenization (WP). The BERT output embeddings for all subword units belonging to the derivative in question are max-pooled and fed into a dense layer with a sigmoid activation. We examine two settings: training only the dense layer while keeping BERT's model weights frozen (FR), or finetuning the entire model (FT). See Appendix A.4 for details about hyperparameters.

Table 6: Performance of the binary classifiers across frequency bins B1-B7, with frozen (FR) and finetuned (FT) BERT.
FR, MP: .636 .648 .659 .675 .683 .692 .698
FR, WP: .572 .578 .583 .590 .597 .608 .608
FT, MP: .765 .777 .796 .808 .800 .804 .799
FT, WP: .740 .756 .767 .769 .767 .755 .753

Morphologically correct segmentation (MP) consistently outperforms WordPiece tokenization (WP), both on FR and FT (Table 6). We interpret this in two ways. Firstly, the type of input segmentation used by BERT crucially impacts how much derivational knowledge can be learned, with positive effects of morphologically valid segmentations. Secondly, the fact that there is a performance gap even for models with frozen weights indicates that a morphologically invalid input segmentation can blur the derivational knowledge that is in principle available to BERT. Taken together, this provides further evidence for the importance of morphologically valid segmentation strategies in language model pretraining (Bostrom and Durrett, 2020).

Related Work

BERT (Devlin et al., 2019) has been the focus of much recent work in NLP. Several studies have been devoted to the linguistic knowledge encoded by BERT's model weights, particularly syntax (Goldberg, 2019; Hewitt and Manning, 2019; Lin et al., 2019) and semantics (Ethayarajh, 2019; Wiedemann et al., 2019; Ettinger, 2020). There is also a recent study examining morphosyntactic information in BERT (Edmiston, 2020). There has been relatively little recent work on derivational morphology in NLP. Deutsch et al. (2018), among others, propose neural architectures that represent derivational meanings as tags. More closely related to our study, Vylomova et al. (2017) develop an encoder-decoder model that uses the context sentence for predicting deverbal nouns. Hofmann et al. (2020b) propose a graph auto-encoder that models the morphological well-formedness of derivatives.

Conclusion

We show that BERT can generate derivationally complex words and even beats LSTM-based models when finetuned on this task. Furthermore, we demonstrate that the input segmentation crucially impacts how much derivational knowledge is available to BERT. This is of relevance for the subject of language model pretraining in general.
Identifying functional targets from transcription factor binding data using SNP perturbation

Transcription factors (TFs) play a key role in transcriptional regulation by binding to DNA to initiate the transcription of target genes. Techniques such as ChIP-seq and DNase-seq provide a genome-wide map of TF binding sites but do not offer direct evidence that those bindings affect gene expression. Thus, these assays are often followed by TF perturbation experiments to determine functional binding that leads to changes in target gene expression. However, such perturbation experiments are costly and time-consuming, and have a well-known limitation that they cannot distinguish between direct and indirect targets. In this study, we propose to use the naturally occurring perturbation of gene expression by genetic variation captured in population SNP and expression data to determine functional targets from TF binding data. We introduce a computational methodology based on probabilistic graphical models for isolating the perturbation effect of each individual SNP, given a large number of SNPs across genomes perturbing the expression of all genes simultaneously. Our computational approach constructs a gene regulatory network over TFs, their functional targets, and further downstream genes, while at the same time identifying the SNPs perturbing this network. Compared to experimental perturbation, our approach has the advantages of identifying direct and indirect targets, and of leveraging existing data collected for expression quantitative trait locus mapping, a popular approach for studying the genetic architecture of expression. We apply our approach to determine functional targets from the TF binding data for a lymphoblastoid cell line from the ENCODE Project, using SNP and expression data from the HapMap 3 and 1000 Genomes Project samples. Our results show that from TF binding data, functional target genes can be determined by SNP perturbation of various aspects that impact transcriptional regulation, such as TF concentration and TF-DNA binding affinity.

Introduction

The transcriptional regulation of genes is one of the key biological processes, which is governed by transcription factors (TFs) binding to the regulatory region of target genes to initiate transcription. To determine genome-wide TF binding sites, techniques such as chromatin immunoprecipitation followed by DNA sequencing (ChIP-seq) or DNase I hypersensitive sites sequencing (DNase-seq) have been widely used [1]. Since TF binding may not be functional, a TF perturbation experiment is performed for functional validation, where those genes that are both bound by the TF and differentially expressed after the perturbation are identified as functional target genes. One of the most commonly used perturbation techniques for functional validation of TF binding signals has been based on RNA interference (RNAi) [2]. RNAi uses small interfering RNAs to deplete the mRNAs transcribed from the gene encoding a given TF, and the perturbation effects are then measured by genome-wide expression profiling before and after the perturbation [3,4]. More recently, perturbation techniques based on clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 have been used to study the transcriptional gene regulation by TFs [5]. In particular, CRISPR perturbations followed by single-cell RNA sequencing (scRNA-seq) have been performed to assess the impact of the genetic changes in the TF on genome-wide gene expression phenotypes [6,7].
Compared to RNAi, CRISPR-based methods have significantly higher accuracy and efficiency for TF perturbation. However, both types of perturbation experiments are costly, time-consuming, and have off-target effects [8,9]. In addition, both methods suffer from the well-known limitation that they cannot distinguish genes directly affected by the perturbation from those genes indirectly affected in the downstream. In this study, instead of experimental perturbation, we propose to leverage naturally occurring perturbation of gene expression by genetic variants, which is captured in single nucleotide polymorphism (SNP) and gene expression data collected for a population of individuals, to determine functional target genes. There are several advantages in using SNP perturbation over experimental perturbation. First, SNPs provide more subtle and possibly more meaningful perturbation than artificial perturbation, since they are perturbations that exist in nature. Second, we can take advantage of the existing population SNP and expression data, especially because such datasets are often collected for expression quantitative trait locus (eQTL) mapping, an approach that has been widely used to understand the genetic basis of expression variation in population [10][11][12][13]. The key challenge in using SNP perturbation is that unlike in experimental perturbation, where only a few genes are perturbed at a time, millions of SNPs across the genome perturb the expression of all genes simultaneously, making it difficult to decouple the perturbation effect of each individual SNP. To address this challenge, we propose a statistical framework based on a probabilistic graphical model [14], called a conditional Gaussian Bayesian network (cGBN), for modeling and learning a gene regulatory network under SNP perturbation. Our computational approach uses a TF binding map as prior knowledge of the relationship between the TF and its putative target genes. Given this prior knowledge, our learning algorithm determines functional target genes and gene regulatory networks under SNP perturbation, along with the SNPs that perturb this network, using population SNP and expression data (Figure 1). Given the estimated cGBN, we show that an inference algorithm can be used to infer the indirect SNP perturbation effects on the expression of genes downstream of the directly interacting TF and target genes. We apply our approach to determine functional targets from the ChIP-seq data for 76 TFs and DNase-seq data collected for a HapMap lymphoblastoid cell line (LCL) in the encyclopedia of DNA elements (ENCODE) project [1], using the gene expression and SNP data for the HapMap 3 and 1000 Genomes Project samples [15][16][17]. We examine the functional target genes that we identified with SNP perturbations of different aspects that characterize TF-target interactions, such as TF concentration, regulatory sequences of target genes, and TF coding sequences. In particular, we compare the functional target genes identified by the perturbation of TF concentration in our approach with those identified by RNAi of TFs in a previous study [3], and show that the two types of perturbation can affect the expression of downstream genes differently because of the different nature of the perturbation. Compared to previous approaches for combining TF binding data with other genomic data, our methodology provides a computational framework for identifying both eQTLs and their regulatory roles within a single statistical analysis.
Most of the previous studies used a two-stage computational approach, which involved the identification of eQTLs followed by the investigation of the regulatory roles of those eQTLs based on TF binding data [18][19][20][21]. More sophisticated approaches based on Bayesian networks have been used in this follow-up investigation of eQTLs to construct a gene regulatory network affected by those eQTLs, and then to compare this network with the TF-target relationships suggested by TF binding data [22]. Several other previous studies have used TF binding data as prior knowledge in eQTL mapping to re-weight the SNPs in the region bound by TF as more likely candidates for eQTLs [23,24]. Compared to these methods, we focus on the goal of determining functional targets under SNP perturbation rather than identifying eQTLs. Toward this goal, our approach performs a single statistical analysis to simultaneously construct the transcriptional regulatory network under SNP perturbations and identify eQTLs perturbing this network, while incorporating TF binding data as prior knowledge to guide the learning algorithm.

Methods overview

We briefly describe our computational approach for determining functional target genes from TF binding data using population genotype and expression data (see Methods). Our computational approach represents a regulatory network over TFs, their targets, and downstream genes under SNP perturbations as a cGBN. A TF binding map is used as prior knowledge on the network structure of the cGBN, which is then updated by our learning algorithm to contain only the functional TF-target interactions, given population SNP and expression data (Figure 1). More specifically, our cGBN models a conditional probability density p(Y|X) for q gene expression levels Y = (Y1, . . . , Yq), conditional on p SNP genotypes X = (X1, . . . , Xp). This density is defined over a network, with directed edges among the q gene expression nodes to model the regulatory network, and with additional edges from the p SNP nodes to the gene expression nodes to model the eQTLs that perturb the gene expressions in the regulatory network. Our model estimation procedure, called A* lasso, learns both the network structure and the probability density associated with this network. We consider three different ways that SNPs affect TF-target interactions to modify the expression of target genes and further downstream genes. The first type of SNP perturbation we consider is the change in TF concentration, due to either SNPs or the expression variation of upstream genes, that induces changes in target gene expressions. Functional target genes validated under this scenario are represented in our cGBN as the gene expression nodes with edges from the TF expression node (red edges in Figure 1B). The second type of SNP perturbation we consider is SNPs in the regulatory region that influence the expression of nearby bound genes. Functional target genes validated under this scenario appear in our cGBN as the gene expression nodes with edges from cis eQTLs in the regulatory region (green edges in Figure 1B). While these cis eQTLs affect the corresponding target gene expression locally, a trans eQTL located in the TF coding region can influence the expression of many target genes globally, by modifying the TF amino acid sequence in the case of non-synonymous mutations or by modifying TF translation, folding, or splicing in the case of synonymous mutations [25] (blue edges in Figure 1B).
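The A* lasso estimation procedure, which searches over network structures while enforcing acyclicity, is described in Methods. Its core ingredient, selecting a sparse set of SNP and gene parents for each gene, can be illustrated with a per-gene L1-penalized regression. The sketch below is a simplified stand-in (toy data; the acyclicity constraint that A* lasso enforces is ignored here):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_parents(Y, X, gene_idx, candidate_mask, alpha=0.05):
    """Select sparse candidate parents (SNPs and other genes) for one gene.

    Y: (n, q) expression matrix; X: (n, p) genotype matrix (0/1/2 coding).
    candidate_mask: boolean vector over the p SNPs plus q-1 other genes,
    encoding the TF binding prior (only candidate edges enter the regression).
    NOTE: simplified stand-in for A* lasso; no acyclicity constraint."""
    others = np.delete(Y, gene_idx, axis=1)
    D = np.hstack([X, others])[:, candidate_mask]
    model = Lasso(alpha=alpha).fit(D, Y[:, gene_idx])
    return np.flatnonzero(model.coef_)           # indices of selected parents

# Toy example: gene 2 is regulated by gene 0 (its TF) and perturbed by SNP 1
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(100, 5)).astype(float)
Y = rng.normal(size=(100, 3))
Y[:, 2] = 0.8 * Y[:, 0] + 0.6 * X[:, 1] + 0.3 * rng.normal(size=100)
mask = np.ones(5 + 2, dtype=bool)                # allow all candidate edges
print(sparse_parents(Y, X, gene_idx=2, candidate_mask=mask))
```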
Each putative target gene, which is derived from TF binding data as a gene near the bound region (Figure 1A), provides three types of candidate edges corresponding to the above three scenarios. If any of these three types of edges are included in the estimated cGBN (Figure 1B), we consider the corresponding candidate target gene a functional target (Figure 1C). While direct targets are explicitly represented in the estimated cGBN, indirect targets are inferred via probabilistic inference. Our inference algorithm computes downstream perturbation effect sizes to quantify the impact of perturbation on each downstream gene expression under each of the three perturbation scenarios above. In experimental perturbation, since it is not possible to distinguish between direct and indirect targets among differentially expressed genes, all bound and differentially expressed genes are considered as functionally validated (Figure 2A). In contrast, the functional targets represented in our estimated model are only a subset of the bound and differentially expressed genes (Figure 2B).

Functional validation of TF binding in a lymphoblastoid cell line

We applied our approach to determine functional target genes in a HapMap LCL (GM12878). We derived candidate target genes from ChIP-seq data for 76 TFs and DNase-seq data available from the ENCODE Project, and obtained SNP and expression data for 520 samples from the HapMap 3 and 1000 Genomes Project population [15][16][17] (see Methods). After preprocessing, we used in our analysis the expression levels of 9,940 probes corresponding to 9,553 genes and 87,267 SNPs in the promoter and exon regions of those genes. First, for each TF, we extracted from the estimated cGBN the validated target genes under each of the three perturbation scenarios, as genes with edges from TF expression, genes with cis eQTLs located within the TF bound region, and genes with trans eQTLs located in the TF coding region. We found that the fraction of functionally validated target genes among candidate target genes varied from 3% to 69% depending on the TF (Figure 3A). A large fraction of these functional target genes were those validated under the perturbation of TF concentration (Figure 3B), and relatively fewer genes were validated under the perturbation of regulatory sequences or TF coding sequences (Figures 3C and 3D). In addition, we found that for each TF, more functional target genes were perturbed by their cis eQTLs than by trans eQTLs located in the TF coding region. For example, 71 out of 83 TFs had more than 500 target genes validated with the perturbation by cis eQTLs (Figure 3C), whereas none of the TFs had more than 200 target genes validated under the perturbation by trans eQTLs in the TF coding region (Figure 3D). This is consistent with the observation from previous studies that trans regulatory elements tend to be evolutionarily more conserved than cis regulatory elements, because of the global impact of the potentially damaging changes in trans regulatory elements [26][27][28]. We examined whether the effectiveness of using TF concentration perturbation for functional validation depends on the strength of perturbation that exists in nature. We found that the number of target genes validated with the perturbation of TF concentration is highly correlated with the TF expression variance (R = 0.80; Figure 4). This suggests that our approach of using TF concentration perturbation is most effective when there exists sufficient variation in the TF expression levels within the population.
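To make the notion of downstream perturbation effect sizes concrete: in a linear Gaussian network, the total effect of a SNP on every gene, including indirect effects through intermediate genes, can be obtained by propagating direct effects along all paths. A sketch under these linear Gaussian assumptions (toy matrices; the paper's exact inference procedure on the cGBN may differ):

```python
import numpy as np

def downstream_effects(B, A):
    """Total SNP perturbation effects in a linear Gaussian network.

    B: (q, q) direct gene-to-gene effects, B[j, i] = effect of gene i on gene j;
       the gene-gene edges must form a DAG so that I - B is invertible.
    A: (q, p) direct SNP-to-gene effects.
    Returns a (q, p) matrix of total SNP effects on every gene."""
    q = B.shape[0]
    total_gene = np.linalg.inv(np.eye(q) - B)    # sums effects over all paths
    return total_gene @ A

# Toy network: SNP -> TF (gene 0) -> target (gene 1) -> downstream gene (gene 2)
B = np.zeros((3, 3))
B[1, 0] = 0.8        # TF regulates its functional target
B[2, 1] = 0.5        # the target regulates a further downstream gene
A = np.array([[0.6], [0.0], [0.0]])   # the SNP perturbs TF expression only
print(downstream_effects(B, A))       # indirect effect on gene 2: 0.6*0.8*0.5 = 0.24
```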
Next, we examined the differential gene expression induced by each individual perturbation via probabilistic inference on our estimated cGBN. We first performed probabilistic inference to obtain downstream effect sizes for each downstream gene of each TF, under each of the three types of perturbation. Then, at different levels of downstream effect sizes, we compared the set of differentially expressed and bound genes (green bars in Figure 5) with a subset of those genes, which consists of functionally validated target genes (red curves in Figure 5). Our results in Figure 5 show that across perturbation types and TFs, 0.02% to 100% of the bound and differentially expressed genes were validated as functional targets of the given TF. This indicates that our computational method can determine direct and indirect targets by statistically assessing probabilistic dependencies, allowing us to identify functional targets potentially with higher accuracy than with experimental perturbation. Figure 5 also shows that the downstream effect sizes vary across TFs and that those TFs with stronger downstream perturbation effects tend to have a larger number of functionally validated target genes. Among the three types of perturbation we consider, the downstream effect sizes were the strongest for the perturbation of TF concentration and the weakest for trans eQTLs of functional target gene expression in the TF coding region. To characterize the biological functions that the TFs are regulating, we looked for the Gene Ontology (GO) terms [29,30] that are enriched in the set of functional target genes. For those TFs with more than 10 functional target genes validated under each of the three scenarios, we obtained significantly enriched GO slim terms (FDR of 5%) and the corresponding p-values (Figure 6). For the TF concentration perturbation (Figure 6A), the GO category of immune system processing is enriched for many of the TFs, which is consistent with the fact that an LCL is a human B cell immortalized after Epstein-Barr virus infection and thus has the phenotypes of highly activated B cells [31]. Since activated immune cells undergo cell proliferation and potential changes in metabolic processes [32], GO terms related to cell growth (e.g., cell cycle, cell proliferation) and metabolic processes (e.g., biosynthetic and catabolic processes) are also found enriched for many of the same TFs with enrichment in immune system processing. GO terms related to metabolic processes are also enriched for many of the TFs under the regulatory sequence perturbation. However, compared to TF concentration perturbation, the enrichment is overall weaker in the other two perturbation types (Figures 6B and 6C), mainly because there were far fewer functional target genes validated under these scenarios. To see if the enrichment of the immune system processing GO category in Figure 6A indicates a B cell immune response, we performed a similar GO enrichment analysis with the more fine-grained GO categories under the immune system processing GO slim category in the hierarchy. Our results in Figure 7 show that many TFs have target gene sets that are enriched in GO terms related to B cell activities. Overall, we found the biological functions that these TFs regulate are consistent with what is known about the B cell immune response in LCL.
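The GO enrichment procedure is not spelled out in this excerpt; a standard version is a per-term hypergeometric test with Benjamini-Hochberg FDR control, sketched below (the gene sets and background size are placeholders to be filled from the data):

```python
from scipy.stats import hypergeom

def go_enrichment(target_genes, go_terms, background_size, fdr=0.05):
    """Hypergeometric test for GO term enrichment among functional targets,
    with Benjamini-Hochberg control of the false discovery rate.

    target_genes: set of validated target gene names.
    go_terms: dict mapping a GO term to the set of genes annotated with it.
    background_size: number of genes in the background (e.g., all tested genes).
    """
    n = len(target_genes)
    pvals = []
    for term, annotated in go_terms.items():
        k = len(target_genes & annotated)   # targets annotated with this term
        K = len(annotated)                  # background genes with this term
        # P(X >= k) when drawing n genes without replacement
        pvals.append((term, hypergeom.sf(k - 1, background_size, K, n)))
    pvals.sort(key=lambda tp: tp[1])        # Benjamini-Hochberg step-up
    m = len(pvals)
    cutoff = -1
    for i, (_, p) in enumerate(pvals):
        if p <= (i + 1) / m * fdr:
            cutoff = i
    return pvals[:cutoff + 1]               # terms significant at the given FDR
```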
Target genes functionally validated by perturbing TF concentration

Next, we examined our results obtained under TF concentration perturbation. In particular, to assess the effectiveness of our approach, we compared the bound genes that are differentially expressed under the perturbation of TF concentration in our analysis with those obtained from TF RNAi experiments in an LCL from a previous study [3]. Since both TF concentration perturbation in our analysis and RNAi vary TF expression levels, the downstream effects of such perturbations may be similar in both cases. On the other hand, RNAi perturbs a single gene at a time, whereas in population SNP/expression data, a large number of genes are perturbed simultaneously by a large number of SNPs. Thus, the downstream effects may differ between the two approaches because of genetic interactions, just as there is a significant difference in gene expression patterns between single and double mutants [33-35]. Below, we explore such similarities and differences in the perturbation effects and their impact on the functional validation of TF binding between the experimental and our computational approaches.

We obtained the bound genes that are differentially expressed after TF RNAi in a HapMap LCL (GM19238) [3] and compared these genes with an equivalent set of genes in our analysis, which consists of bound genes that are differentially expressed under TF concentration perturbation. We defined differentially expressed genes in our analysis as those genes with downstream effect sizes greater than 0.05 under TF concentration perturbation. This set included genes that are both directly and indirectly affected by the perturbation. For 14 out of 72 TFs included in our study, microarray gene expression data were available for a HapMap LCL (GM19238) before and after TF RNAi with knockdown efficiency above 50% as measured by qPCR [3]. From the 4,661 probe measurements that matched the probes used in our analysis, we determined the genes differentially expressed after RNAi, using the same procedure as in [3], applying the likelihood ratio test followed by multiple testing correction (FDR < 0.05). Then, among the candidate target genes derived from TF binding data (Figure 8A), we compared the differentially expressed genes from the TF RNAi experiments with those from our analysis under TF concentration perturbation (the bar graphs in Figure 8B). We also examined the amount of TF concentration perturbation in our population data by observing the sample variance of TF expression (the red line graph in Figure 8B), since this can directly affect the amount of differential expression of downstream genes.

For EZH2-1 and EZH2-2, we found that neither experimental nor naturally occurring perturbation of TF expression led to significant differential expression of any downstream genes (Figure 8B). Even though the TF expression had substantial variability across individuals in our data, this expression variability did not induce changes in the expression of downstream genes. Thus, for these TFs, we conclude that the results are in agreement between our method and the experimental method. For PAX5 and TCF12-1, whose expressions are not naturally perturbed and thus have little variability across samples, only a few of the bound genes were differentially expressed in our result, whereas many were differentially expressed in the TF knockdown experiments (Figure 8B).
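For illustration, the differential-expression call used in the RNAi comparison can be sketched as a per-probe Gaussian likelihood-ratio test followed by Benjamini-Hochberg FDR control. The replicate counts, the exact form of the test, and the simulated data are assumptions; the original analysis followed the procedure of [3].

```python
# Sketch: per-probe likelihood-ratio test for differential expression after TF
# knockdown, followed by Benjamini-Hochberg FDR control.
import numpy as np
from scipy.stats import chi2
from statsmodels.stats.multitest import multipletests

def lrt_pvalue(before, after):
    x, y = np.asarray(before, float), np.asarray(after, float)
    n = x.size + y.size
    pooled = np.concatenate([x, y])
    rss0 = np.sum((pooled - pooled.mean()) ** 2)                      # one shared mean
    rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # two means
    stat = n * np.log(rss0 / rss1)                                    # LRT statistic, df = 1
    return chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
pvals = [lrt_pvalue(rng.normal(0, 1, 4), rng.normal(d, 1, 4))
         for d in np.linspace(0, 3, 4661)]                            # 4,661 matched probes
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} probes differentially expressed at FDR < 0.05")
```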
This suggests that, in general, if there is little or no naturally occurring perturbation in TF expression, it is not feasible to leverage TF concentration perturbation for a functional validation of TF binding or to reveal the downstream genes affected by the TF. In this case, an experimental perturbation is necessary, suggesting a role for experimental perturbation complementary to naturally occurring perturbation. For the remaining 10 TFs in Figure 8B, although the TF expression was perturbed both in the TF knockdown study and in our population data, the results agreed only partly between the two approaches. We hypothesize that an interaction between a TF and its co-regulators is the primary cause of this discrepancy. Without gene-gene interaction, a TF and its co-regulators would influence a target gene independently of each other. Thus, the two types of perturbation would reveal an identical set of differentially expressed downstream genes. Even though a TF and its co-regulators are often simultaneously perturbed by many SNPs in our population data, once the effect of each individual SNP perturbation is teased out by our computational method, the effect of TF concentration perturbation would be identical to the effect of knocking down the TF via RNAi. The two types of perturbation may differ in their magnitudes, but otherwise would induce the same downstream effect. However, if a TF and its co-regulators interact, we argue that the perturbation effect may differ between TF knockdown and SNP perturbation. For an interacting TF and its co-regulators, the regulatory effect of the TF depends on the states of the co-regulators in the cell environment. Then, TF perturbation can induce differential expression of downstream genes only if the states of the co-regulators in the cell do not mask the perturbation effect. In population data, diverse genetic backgrounds and states of TFs/co-regulators are represented across samples, whereas a knockdown experiment provides perturbation results in a single sample with a single genetic background. Thus, SNP perturbation can potentially reveal many downstream genes of a TF that knockdown experiments cannot. On the other hand, only an experimental knockdown would be able to reveal the downstream genes if the perturbation of those interacting regulators does not exist in nature or is not represented in our data, or if the effect of TF perturbation is always masked by other regulators in our population data. Because our cGBN does not directly model gene-gene interaction, it is able to detect only those downstream genes that receive strong effects from the interacting regulators. Fully modeling gene-gene interaction within each probability factor of the cGBN would be computationally expensive due to the large number of possible interactions that would need to be considered. Instead, we used a simple linear model that includes the interaction effects of a TF and its co-regulators, to assess the impact of gene-gene interaction on the bound genes that are differentially expressed in either type of perturbation (see Methods). Then, we examined the interaction effects on the genes in each of the following three categories: the bound genes differentially expressed only in the experimental perturbation, only in SNP perturbation, and in both types of perturbation.
First, for those genes whose expression was perturbed only in the knockdown experiment, we asked whether they went undetected in our study because the regulatory effect of the TF was indeed being masked by its interaction partner in nature, or because our cGBN simply did not model the gene-gene interaction. Our linear model determined that the majority of the genes in this category, ranging from 75% to 100%, were under the influence of at least one pair of a TF and its co-regulator (Figures 8B and 8C). This suggests that by enhancing our cGBN to model gene-gene interaction, our approach could potentially capture the majority of those genes found only in the TF knockdown study. Second, for those genes that were perturbed only under SNP perturbation, we argue that they are likely to be regulated by interacting regulators, but were found to be unaffected in the TF knockdown experiments because the interaction partners masked the perturbation effect. Our linear model found interaction effects on many of the genes in this category, providing evidence that these genes are indeed regulated by interacting regulators. The other genes in this category, with no interaction effects, may be under the influence of higher-order gene-gene interactions involving more than two interacting regulators. Finally, for the genes whose expressions were affected in both types of perturbation, we argue that they are influenced either by TFs acting independently of other regulators or by an interacting TF and its co-regulators. Many of the genes in this category were found to be regulated by an interacting TF and its co-regulator, while the remaining genes may be influenced by the TF alone.

Target genes functionally validated under the perturbation of regulatory or coding sequences

Now, we turn to the other perturbation scenarios and examine the functional target genes validated under SNP perturbation of target gene regulatory sequences and TF coding sequences. In particular, for each functionally validated target gene, we examine whether its cis eQTLs change DNA motif sequences recognized by the TF and whether trans eQTLs in the TF coding region change the TF structure.

Perturbing target gene regulatory sequences

We examine whether cis eQTLs of functionally validated target genes disrupt the binding affinities of DNA motif sequences recognized by TFs. TF binding data provide information only on broad DNA regions bound by the TF, but not the precise location on the DNA where the TF-DNA interaction occurs. We use previously known TF binding site (TFBS) motif models to pinpoint TFBSs in the bound regions and then assess the impact of cis eQTLs on the TF binding affinities of the TFBS motif matching sequences. In order to identify TFBSs within the bound regions, where genetic variants can alter binding affinities, we scanned the genome of the same cell line (GM12878) used for the ENCODE TF binding data with motif position weight matrices (PWMs) from the TRANSFAC and JASPAR databases [36,37]. For each of the 58 TFs whose PWMs were available in the databases, we found TFBS motif matches in the bound regions that overlap with the promoter regions of the functionally validated target genes, defined as 2,000bp from the transcription start site (p-value < 0.001; see Methods). Then, we identified those motif matches that contain SNPs or cis eQTLs of the functional target genes found by our computational method.
Many SNPs in the bound promoter regions overlapped with motif matches (Figure 9A), and many of these SNPs overlapping with motif matches were cis eQTLs of the functional target genes found by our approach, ranging from 5.55% to 44.4% across the 50 TFs with at least one motif match overlapping with the cis eQTLs of their targets. In addition, we found that there were many cis eQTLs in the bound promoter regions of the validated target genes that overlapped with motif matches (Figure 9B). Across the 50 TFs with at least one motif match containing cis eQTLs, the fraction of cis eQTLs that coincide with motif matches ranged from 1.96% to 48.8% per TF, and for 34 of these TFs, this fraction was above 10.0%. These cis eQTLs that lie on the TFBS motif matches could potentially change the binding affinities of the short DNA sequences recognized by the TF.

To see whether the cis eQTLs lying on the motif matches indeed disrupt TF binding affinities, we compared the effects of eQTLs on motif match scores with those of the other SNPs in the bound promoter regions that are not eQTLs. To quantify SNP effects on binding affinities, we defined a score delta as the difference in motif match scores of two short sequences that are identical except for the different alleles at the SNP locus. For the 50 TFs with at least one motif match overlapping with cis eQTLs, we computed score deltas for all motif matches with cis eQTLs (722 motif matches across all TFs) and also for all motif matches with the other SNPs that are not cis eQTLs (5,165 motif matches across all TFs), and then compared the two score delta distributions. Overall, we found that the cis eQTLs resulted in higher score deltas than the other SNPs (rank sum test p-value = 0.0286; Figure 10A). We also examined mean score deltas for eQTLs and for the other SNPs within each TF, after averaging over all motif matches for the TF (Figure 10B). Among the 50 TFs, 29 TFs had higher mean score deltas for cis eQTLs than for the other SNPs that are not cis eQTLs. This provides evidence that when bound genes are validated as functional targets under the perturbation by cis eQTLs in our approach, those cis eQTLs are likely to change the binding affinities of the regulatory sequences recognized by TFs, leading to expression changes of the bound genes.

(Figure 10 caption, panels C and D: When the motif matching sequences for multiple TFs overlap at the same SNP locus, we compute a max score delta for the SNP as the score delta for the TF with the largest score delta. The distribution over max score deltas for cis eQTLs is compared with the score delta distribution over the other SNPs in the bound promoter regions of the functionally validated targets, in terms of (C) the mean and standard deviation for motif matching sequences across all TFs and (D) the mean for each TF.)

When the same short sequence that contains an eQTL is a motif match for several TFs, we do not know which TF's binding site is affected by the eQTL. So far, we have assumed that the eQTL influences the binding sites of all TFs with overlapping motif matches. Instead, we now hypothesize that an eQTL is most likely to influence the binding of the TF with the largest score delta, and examine the two score delta distributions for eQTLs and for the other SNPs under this hypothesis. We first computed the max score delta for each eQTL, defined as the maximum score delta over the overlapping TF motif matches at the locus.
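The score delta and max score delta computations can be sketched as follows, with a toy log-odds PWM standing in for the TRANSFAC/JASPAR models and simulated values for the non-eQTL SNPs; treating the delta as an absolute difference is an assumption here.

```python
# Sketch: score deltas for SNPs inside motif matches, max score deltas over
# overlapping TF motifs, and a rank-sum comparison of the two distributions
# (as in Figure 10). All inputs are toy placeholders.
import numpy as np
from scipy.stats import ranksums

IDX = {"A": 0, "C": 1, "G": 2, "T": 3}
pwm = np.array([[1.2, -1.0, -1.0, -0.5],      # toy log-odds PWM: positions x A,C,G,T
                [-1.0, 1.5, -0.8, -1.0],
                [-0.9, -1.0, 1.4, -1.0],
                [0.8, -0.7, -0.9, 0.3]])

def match_score(seq):
    return sum(pwm[i, IDX[b]] for i, b in enumerate(seq))

def score_delta(seq, pos, alt):
    """Absolute change in match score when the allele at `pos` is swapped."""
    alt_seq = seq[:pos] + alt + seq[pos + 1:]
    return abs(match_score(seq) - match_score(alt_seq))

# Max score delta per eQTL over all overlapping TF motif matches at the locus.
deltas_by_eqtl = {"rs1": [score_delta("ACGA", 3, "T"), 1.3], "rs2": [0.9]}
max_deltas = [max(v) for v in deltas_by_eqtl.values()]

rng = np.random.default_rng(1)
other_snp_deltas = rng.exponential(0.5, size=5165)   # non-eQTL SNPs (simulated)
stat, pval = ranksums(max_deltas, other_snp_deltas)
print(f"rank-sum statistic = {stat:.2f}, p = {pval:.3g}")
```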
Then, we compared the max score delta distribution for eQTLs (corresponding to 302 motif matches across all TFs) with the score delta distribution obtained above for the other SNPs. Overall, across the 45 TFs with at least one motif match assigned a max score delta, the max score deltas for eQTLs were significantly higher than the score deltas for the other SNPs (rank sum test p-value = 1.6×10^-16; Figure 10C). We made similar comparisons within each TF by computing the mean of the max score deltas for eQTLs, averaged over all max score deltas within each TF, and comparing these with the mean score deltas for the other SNPs, averaged within the same TF. For 35 out of the 45 TFs, the means of the max score deltas for eQTLs were larger than the mean score deltas for the other SNPs (Figure 10D). Our results show that the score delta distributions for SNPs and cis eQTLs differed significantly when cis eQTLs were assigned to the TFs with the strongest evidence for a change in binding affinity.

Perturbing TF coding sequences

Next, we examined the trans eQTLs in the TF coding sequences that affect the expressions of our functionally validated target genes. In order to study the functional role of those trans eQTLs, we first determined whether the trans eQTLs are synonymous or non-synonymous mutations. To further assess whether the amino acid changes from the non-synonymous mutations are likely to be deleterious, we scored the non-synonymous variants with SIFT, a tool for predicting whether amino acid substitutions are likely to affect protein function based on the degree of sequence conservation across species [38]. In addition, we examined whether any of the trans eQTLs are located in known binding domains of the TF proteins according to the UniProt database [39] and ScanProsite [40]. We also examined the synonymous variants for known functional roles, because they may impact protein function by influencing protein translation, splicing, and folding, even though they do not lead to amino acid changes [25]. Overall, we found that out of the 48 trans eQTLs, 16 trans eQTLs were non-synonymous missense variants, while the other 32 trans eQTLs were synonymous variants (Table 1). Four of the 16 non-synonymous variants had SIFT scores less than 0.05, indicating they are likely to lead to deleterious amino acid changes. One of these non-synonymous variants (rs754093, located in the coding region of NFATC1) had the lowest SIFT score of 0.0 and, at the same time, overlapped with the trans-activation domain of the protein, providing strong evidence that this eQTL found by our computational method is likely to affect the protein function of the TF. Among the 32 synonymous variants, we found that six trans eQTLs overlapped with known binding domains, and two trans eQTLs (rs325408 in the MEF2A coding region and rs2228129 in the POLR2A coding region) were lying in the splice region within one or two bases from the exon/intron boundary. Overall, several of the trans eQTLs in the TF coding regions found by our approach were supported by evidence that they affect protein function.

(Table 1 fragment; only one row survived extraction: ZNF143 561 T(0.52). Footnote 1: SIFT scores range from 0 to 1, with a value between 0 and 0.05 indicating that the substitution is predicted to affect protein function by the SIFT tool [38,41].)

Discussion

The binding of TFs is one of the key factors that determines which genes are transcribed in gene regulation. Experimental procedures such as ChIP-seq [42,43] and DNase-seq [43] have been widely used to elucidate where TF binding occurs in DNA.
However, many TF binding events do not result in a change in target gene expression, making it necessary to perform a functional validation. The standard approach to functional validation has been to perform a TF knockdown experiment such as RNAi [2], followed by gene expression profiling to determine the functional targets whose expressions are affected by the TF knockdown. The well-known limitation of the experimental perturbation method is that the differentially expressed genes after TF knockdown may be downstream genes that are only indirectly affected, rather than direct targets of the TF. Experimental perturbation also suffers from off-target effects and low knockdown efficiency [44-46], which reduce accuracy. Recently, CRISPR-based perturbation methods have been used to study TF activities, but they suffer from many of the same problems, though to a lesser degree [8,9]. Instead of experimental perturbation, for the functional validation of TF binding, our approach leverages the naturally occurring perturbation of gene expression by genetic variants in the population, which is captured in population expression and genotype data. Our computational method, based on a cGBN, addresses the computational challenge of decoupling the large number of SNP perturbations that affect the expressions of all genes simultaneously. Our computational method provides a framework for integrating TF binding data with population SNP and expression data to learn a gene regulatory network over TFs, their functional target genes, and the downstream genes, along with the eQTLs that perturb this network, and to infer the indirect downstream effects of TF-target interactions. Our results on the LCL data from ENCODE [1], HapMap 3 [15,16], and the 1000 Genomes Project [17] demonstrated that functional target genes can be identified under SNP perturbation of various aspects of TF-target interactions, including perturbations of TF concentration, target gene regulatory sequences, and TF coding sequences.

Our approach overcomes several limitations of experimental perturbation methods by computationally determining functional TF binding using naturally occurring genetic variants as a source of perturbation. First, unlike the experimental approach, our approach can distinguish between genes directly targeted by a TF and genes further downstream that are only indirectly affected by the perturbation. Our approach models direct targets explicitly in our cGBN, while revealing indirectly affected genes via inference on this probabilistic graphical model. In addition, our approach does not suffer from the limitations associated with the technology for experimental perturbation, such as off-target effects and low knockdown efficiency, since it leverages the genetic and expression variation found in nature. For example, in the knockdown experiments in [3], 53 out of 112 TF knockdown experiments were discarded due to low efficiency. By using natural genetic variation in the population, we can determine the functional targets of TFs, as long as one or more aspects of TF-target interactions are perturbed in nature and the effects of such perturbations are captured in the population expression and SNP data. Our approach has several other advantages over experimental perturbation approaches. First, the gene regulatory network we learn from SNP perturbations is potentially more meaningful than what can be learned from experimental perturbation, since it captures the part of the network that varies in natural populations.
Another advantage is that since eQTL mapping [10,47] is widely used to study the genetic architecture of various diseases and tissue types, existing eQTL datasets can be used in our computational approach without the need to perform additional experiments. The main limitation of our approach is that our ability to determine functional TF binding from eQTL data is limited by the perturbations that are present in nature and captured in the eQTL data. Increasing the sample size and the diversity of samples will increase the chance that the SNP perturbations necessary for the functional validation of target genes are represented within the eQTL dataset. However, if a TF is tightly regulated, or if genetic variants do not vary the TF expression, TF binding sequences, or any other aspect of TF-target interaction, it is necessary to rely on artificial perturbation to reveal the functional target genes of the TF. Moreover, if a TF interacts with other co-regulators to regulate target genes, the TF and its co-regulators must be perturbed simultaneously to induce variability in target gene expression. Otherwise, multi-factorial experimental perturbations would be necessary to reveal those functional target genes.

There are several possible future directions. One possible direction is to consider perturbation by rare and low-frequency variants, instead of focusing on common SNPs as in our study. In order to boost the limited statistical power of individual rare or low-frequency variants, a commonly used approach has been to collapse multiple rare variants and to perform eQTL mapping on the collapsed variants. Two types of strategies have been previously proposed to collapse variants. The first is to combine rare or low-frequency variants based on proximity in the genome, such as variants from the same gene or genome segment [48]. The other strategy is to collapse variants based on their proximity in the three-dimensional structure of the chromatin, which can be obtained through chromatin conformation capture methods [49,50]. These strategies for collapsing variants could be combined with our computational methodology to take advantage of rare and low-frequency variants in the functional validation of TF binding. Another future direction would be an in-depth investigation of target gene regulation by a TF that interacts with other regulators. In our comparison of the results from the TF knockdown study and from our study, we found evidence of multiple interacting regulators that influence target gene expression. Our cGBN can be extended to explicitly model interactions among regulators, though this would require a more efficient learning algorithm to handle the large number of possible gene-gene interactions. It would then be interesting to compare the functional targets identified by this computational model with the bound genes that are differentially expressed after double knockdown, if results from such experiments become available, and to see whether the overlap of functionally validated genes between the two methods would increase.

Methods

Datasets

We applied our computational approach to determine whether TF binding events in an LCL from the ENCODE ChIP-seq and DNase-seq data [1] are functional, using SNP and gene expression data from the HapMap 3 and 1000 Genomes Project populations [15-17]. We downloaded the ENCODE ChIP-seq data for 71 TFs and the DNase I hypersensitivity sites for the LCL (GM12878), processed with the ENCODE uniform peak calling pipeline [1].
For the five TFs whose ChIP-seq data were available from multiple experiments, we took the union of the binding sites from all experiments. For each TF, we overlapped the ChIP-seq binding regions with the DNase I hypersensitivity regions to determine the bound regions. Then, we identified the putative target genes as those genes bound within 10kb of the transcription start and end sites of the gene. We identified 520 individuals whose LCL gene expression levels were profiled in a previous study of the HapMap 3 population [15,16] and whose genome sequences were available from the 1000 Genomes Project Phase 3 [17]. We downloaded the expression data for 21,800 probes from the Illumina Human-6 v2 Expression BeadChip platform [15,16]. We included in our analysis all probes corresponding to TFs, but for other genes, we filtered out the probes with standard deviation less than 0.2. After discarding the redundant probes that recognize the same transcripts, we included in our analysis the remaining 9,940 probes corresponding to either gene-level or transcript-level expressions. For TFs with different probes corresponding to different transcripts, we modeled the functional target genes of each individual TF transcript and report our results using the transcript identifiers for each TF (S1 Table). For the same 520 individuals with expression data, we obtained genome sequence data from the 1000 Genomes Project Phase 3 [17]. After filtering out SNPs with minor allele frequency less than 0.05, we included in our analysis 87,267 biallelic SNPs in the promoter and exon regions of each gene whose expression levels were available, where the promoter region was defined as 2,000bp from the transcription start site.

Learning gene regulatory networks under SNP perturbation

We introduce a statistical approach, based on probabilistic graphical models [14], to validate whether TF bindings captured by ChIP-seq and DNase-seq are functional, using SNP perturbation of gene expression. We first describe our approach for learning a gene regulatory network under SNP perturbation from population gene expression and SNP data. Then, we show how TF binding data can be integrated into our model and learning algorithm as prior knowledge, to select functional TF-target interactions. Let Y = [Y_1, . . . , Y_q]^T denote the expression levels of q genes and X = [X_1, . . . , X_p]^T the SNP genotypes of p SNPs for the same individual, where X_i ∈ {0, 1, 2} represents the minor allele count for SNP i (i = 1, . . . , p). We model the gene regulatory network as a directed graph over the q genes, and the SNP perturbations of the gene expressions as edges from the p SNPs to the q genes. Then, each gene expression Y_j (j = 1, . . . , q) can be influenced by the expression levels of other gene expression regulators or by genetic variants with edges pointing to Y_j. We define a cGBN as a probability density over this graph that factorizes as follows:

p(Y | X) = ∏_{j=1}^{q} p(Y_j | Y_pa(j), X_pa(j)),    (1)

where Y_pa(j) consists of the gene expressions regulating the expression Y_j and X_pa(j) is the set of SNPs perturbing Y_j. We model each probability factor in Eq. (1) using a linear regression model:

Y_j = β_j^T Y_pa(j) + α_j^T X_pa(j) + ε_j,    ε_j ~ N(0, σ_j^2),

where β_j = [β_{j1}, . . . , β_{j|Y_pa(j)|}]^T are the regression parameters modeling the strengths of the expression regulation by Y_pa(j), α_j = [α_{j1}, . . . , α_{j|X_pa(j)|}]^T are the regression parameters modeling the SNP perturbations by X_pa(j), and σ_j^2 models the noise.
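For illustration, one probability factor of this model can be fit with an off-the-shelf lasso. scikit-learn's Lasso supports only a single penalty, so the separate penalties λ (gene-gene edges) and γ (SNP edges) used later in Eq. (2) are emulated below by rescaling the SNP columns; all data and parameter values are simulated placeholders.

```python
# Sketch: fitting one conditional-Gaussian factor with an L1 penalty.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, n_parents, n_snps = 520, 5, 20
Yp = rng.normal(size=(n, n_parents))          # candidate parent gene expressions
X = rng.integers(0, 3, size=(n, n_snps))      # candidate SNPs coded 0/1/2
yj = Yp @ np.array([1.0, 0, 0, 0.5, 0]) + 0.3 * X[:, 0] + rng.normal(0, 0.5, n)

lam, gam = 0.05, 0.10
features = np.hstack([Yp, X * (lam / gam)])   # rescaling emulates a separate SNP penalty
fit = Lasso(alpha=lam).fit(features, yj)
beta_j = fit.coef_[:n_parents]                # gene-gene edge weights (nonzero = edge)
alpha_j = fit.coef_[n_parents:] * (lam / gam) # undo rescaling for SNP edge weights
print(np.nonzero(beta_j)[0], np.nonzero(alpha_j)[0])
```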
In order to simultaneously estimate the graph structure and regression parameters of the cGBN from data, we extend A* lasso, from our previous work on learning Gaussian Bayesian networks [51], which significantly improved the computation time over a previous algorithm based on dynamic programming [52]. A* lasso casts the structure learning problem as that of finding a topological ordering of the variables X and Y, where edge directions always point from left to right in the ordering. This problem is then solved with dynamic programming to search the space of variable orderings, combined with the A* algorithm to reduce this search space. A* lasso learns the model parameters jointly with the network structure by embedding lasso as a scoring system within the dynamic programming. Given SNP data X = [x_1, . . . , x_p] and gene expression data Y = [y_1, . . . , y_q], where x_i and y_j are vectors of observations from n individuals for SNP X_i (i = 1, . . . , p) and gene Y_j (j = 1, . . . , q), our learning algorithm jointly learns the network structure and edge weights by maximizing the following L1-regularized log-likelihood of the data, under the constraint that the graph over Y is a directed acyclic graph:

max_{β_j, α_j}  Σ_{j=1}^{q} [ log N(y_j | Y_{-j} β_j + X α_j, σ_j^2 I) − λ ||β_j||_1 − γ ||α_j||_1 ],    (2)

where Y_{-j} is the expression data for all genes except for Y_j and β_j ∈ R^{q-1} are the corresponding regression parameters. The L1 regularization ||c||_1 = Σ_{k=1}^{K} |c_k| for a vector c = [c_1, . . . , c_K] plays the role of setting only a small number of elements in c to non-zero values, to encourage learning a sparse network structure. The non-zero elements in the β_j's correspond to the presence of edges in the gene network, and the non-zero elements in the α_j's correspond to the presence of SNPs that perturb the expression levels. λ and γ are regularization parameters that control the amount of sparsity in the β_j's and α_j's and can be determined by cross-validation. To solve Eq. (2) and learn the model in Eq. (1), we modify the original A* lasso to learn a conditional model, by augmenting the variable ordering over Y with the conditioning variables X at the beginning of the ordering and allowing edges from X to point only to variables in Y.
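The ordering-based view of the problem can be sketched as follows: for a fixed ordering of the genes, with all SNPs X placed at the front, each gene is lasso-regressed on its predecessors, and the ordering's score is the sum of per-gene penalized losses. The dynamic-programming/A* search over orderings is omitted; this is an illustrative simplification, not the actual A* lasso implementation.

```python
# Sketch: scoring one variable ordering with per-gene lasso regressions.
import numpy as np
from sklearn.linear_model import Lasso

def ordering_score(Y, X, order, lam=0.05):
    """Y: n x q expressions, X: n x p SNPs, order: permutation of range(q)."""
    total = 0.0
    for pos, j in enumerate(order):
        # Predecessors = all SNPs plus the genes earlier in the ordering.
        preds = np.hstack([X, Y[:, order[:pos]]]) if pos else X
        fit = Lasso(alpha=lam).fit(preds, Y[:, j])
        resid = Y[:, j] - fit.predict(preds)
        total += 0.5 * np.mean(resid ** 2) + lam * np.abs(fit.coef_).sum()
    return total

rng = np.random.default_rng(4)
Y = rng.normal(size=(100, 4))
X = rng.integers(0, 3, size=(100, 6)).astype(float)
print(ordering_score(Y, X, order=[2, 0, 1, 3]))
```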
Functionally validating TF binding in ChIP-seq with SNP and expression data

In order to determine functional TF binding, we integrate TF binding data into our A* lasso learning procedure. The putative target genes from TF binding data provide prior knowledge on the network structure, which is then updated by A* lasso to include in the model only the functional targets, given the population expression and SNP data. We assume that each of the bound genes can be functionally validated under one or more of three types of perturbation: perturbations of TF concentration, target gene regulatory sequences, and TF coding sequences. Below, we discuss how A* lasso uses TF binding data as prior information on the edge connectivities in the cGBN under each of the three perturbation scenarios:

• Perturbation of TF concentration: A bound gene is considered a functional target if the TF expression variability in the population leads to variability in the expression of the bound gene. A functional target gene validated under this perturbation scenario is modeled in our cGBN as a node with an edge from the TF expression node (red edges in Figure 1B). Using TF binding data as prior knowledge, A* lasso determines whether to include in the estimated cGBN an edge from the TF expression to each of the TF-bound genes.

• Perturbation of target gene regulatory sequences: We consider a bound gene a functional target if the bound gene has genetic variants in its regulatory region that influence its expression. A functional target gene validated under this scenario has edges pointing to it from genetic variants in the regulatory region of the target gene (green edges in Figure 1B). For each of the bound genes, A* lasso evaluates edges from SNPs in the regulatory region of the bound gene and includes in the estimated model only those edges pointing to functional targets validated under SNP perturbation of the regulatory sequences.

• Perturbation of TF coding sequences: A bound gene is a functional target if genetic variants in the TF coding region affect the expression of the bound gene. For example, non-synonymous mutations in the DNA-binding domains of a TF can affect the TF binding affinity and influence the expression of many of its target genes globally. TF-target interactions validated under this scenario appear in our estimated cGBN as edges from SNPs in the TF coding region to target gene expressions (blue edges in Figure 1B). During learning, for each candidate target gene from TF binding data, A* lasso begins with candidate edges from genetic variants in the TF coding region to each bound gene, and selects those candidate edges supported by the population expression and SNP data.

Given the estimated cGBN, we extract the set of functional target genes validated under each of the three perturbation scenarios (Figure 1). In order to reduce the computation time for learning our model over a large number of gene expressions and SNPs, we make additional assumptions that further constrain the search space over network structures. First, we focus on learning a regulatory network over TFs and their downstream genes by assuming all TFs are upstream of all the other genes and placing the TFs in front of all the other genes in the variable ordering during learning. In addition, we assume that the downstream genes of TFs form regulatory modules, where edge connections exist from TFs to genes in each module and among genes within each module, but not between modules. To define the regulatory modules, we first applied hierarchical clustering to all downstream genes to find 40 gene clusters, each containing 100-400 genes, and then applied A* lasso to each module separately, to learn edges from TFs to each module and edges within each module.
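A minimal sketch of the module-definition step just described — hierarchical clustering of the downstream genes cut into 40 clusters. The linkage method, distance, and data are assumptions; the text does not specify them.

```python
# Sketch: defining regulatory modules by hierarchical clustering, then
# running the structure learner separately per module.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
expr = rng.normal(size=(900, 520))             # downstream genes x samples (toy scale)
Z = linkage(expr, method="ward")               # assumed: Ward linkage on Euclidean distance
modules = fcluster(Z, t=40, criterion="maxclust")
sizes = np.bincount(modules)[1:]
print(len(sizes), sizes.min(), sizes.max())    # A* lasso is then applied per module
```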
Inferring downstream effects of functional TF binding

While we represent the direct targets of TFs explicitly in our model structure, we perform probabilistic inference on this graphical model to infer the indirect targets further downstream. These downstream genes can provide additional insight into the biological processes each TF-target interaction is involved in. We consider two types of inference tasks: one for inferring the downstream effects of perturbing TF concentration, and the other for inferring the downstream effects of perturbing target gene regulatory sequences or TF coding sequences.

In order to determine the downstream effects of perturbing TF concentration, we infer from the cGBN p(Y | X) how TF expression variability in the population leads to expression variability in the downstream genes, assuming the expressions of all the other genes are fixed. We accomplish this by inferring from p(Y | X) the conditional probability density p(Y_TFd | Y_TF, Y_-TFd, X) for the downstream gene expressions Y_TFd, conditional on the TF expression Y_TF, the expressions of the rest of the genes Y_-TFd, and the SNPs X. This conditional probability density is Gaussian, with a mean that is linear in Y_TF; the coefficients on Y_TF give the downstream effect sizes of the TF concentration perturbation (Eq. (3) below).

We use a similar inference method to identify downstream genes influenced by SNP perturbation of target gene regulatory sequences and TF coding sequences. In order to quantify the downstream effect of a SNP in the target gene regulatory or TF coding sequences that perturbs the target gene expression Y_A, given the estimated cGBN for p(Y | X), we infer the conditional probability density for the target gene Y_A and its downstream genes Y_Ad, conditional on the rest of the gene expressions Y_-Ad and the SNPs X. This conditional probability density is again Gaussian, with a mean of the form K X + L Y_-Ad, where K is a regression coefficient matrix over the SNPs. The rows of K corresponding to the eQTLs of Y_A represent the downstream effect sizes of the SNP perturbations on the genes downstream of gene Y_A.

While inference tasks in probabilistic graphical models are in general computationally expensive, efficient inference algorithms can be obtained when the local conditional probability densities p(Y_j | Y_pa(j), X) in Eq. (1) are Gaussian [14]. In our inference tasks, the desired conditional densities have the form p(Y_A | Y_B, X), where Y_A and Y_B are two disjoint sets of gene expression variables. In order to derive this conditional density, we first re-write it as

p(Y_A | Y_B, X) = p(Y_A, Y_B | X) / p(Y_B | X),

where the joint density is p(Y | X) = N(ΓX, Σ), with Γ ∈ R^{q×p} and Σ a q × q covariance matrix. We find the denominator as p(Y_B | X) = N(Γ_B X, Σ_BB) from p(Y_A, Y_B | X) via marginalization. Then, our desired conditional density can be obtained from p(Y_A, Y_B | X) and p(Y_B | X) as

p(Y_A | Y_B, X) = N( Γ_A X + Σ_AB Σ_BB^{-1} (Y_B − Γ_B X),  Σ_AA − Σ_AB Σ_BB^{-1} Σ_BA ),

using the standard result for multivariate Gaussians. In order to obtain p(Y | X) = N(ΓX, Σ) from the factorized model in Eq. (1), we first assume the variables Y_1, . . . , Y_q are ordered according to the topological ordering of the variables found by A* lasso, where edges are allowed only from left to right. Then, we recursively construct a j × p matrix Γ^(j) and a j × j covariance matrix Σ^(j), visiting each Y_j for j = 1, . . . , q in the topological ordering as follows. Given the factorized density

p(Y_k | Y_1:(k−1), X) = N( β_k^T Y_1:(k−1) + α_k^T X, σ_k^2 ),

where Y_1:(k−1) = [Y_1, . . . , Y_{k−1}]^T and β_k are the regression parameters corresponding to Y_1:(k−1), we begin with Γ^(1) = α_1^T and Σ^(1) = σ_1^2, and compute the partial joint distribution iteratively for each k = 2, . . . , q as follows:

Γ^(k) = [ Γ^(k−1) ;  β_k^T Γ^(k−1) + α_k^T ],

Σ^(k) = [ Σ^(k−1)          Σ^(k−1) β_k
          β_k^T Σ^(k−1)    β_k^T Σ^(k−1) β_k + σ_k^2 ].

We obtain the desired density p(Y | X) = N(ΓX, Σ) by setting Γ = Γ^(q) and Σ = Σ^(q).
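The two computations above — building the joint p(Y | X) = N(ΓX, Σ) by visiting genes in topological order, and conditioning on an observed subset — can be sketched in a few lines of numpy. Everything here (parameters, indices) is a toy illustration of the standard Gaussian algebra, not the actual implementation.

```python
# Sketch: joint construction and Gaussian conditioning for a linear cGBN.
import numpy as np

def joint_from_factors(betas, alphas, sigmas):
    """betas[k]: weights on Y_1..Y_{k-1}; alphas[k]: weights on X; sigmas[k]: noise sd."""
    Gamma = alphas[0][None, :]
    Sigma = np.array([[sigmas[0] ** 2]])
    for k in range(1, len(betas)):
        b, a = betas[k], alphas[k]
        Gamma = np.vstack([Gamma, b @ Gamma + a])          # new row of Gamma
        cross = Sigma @ b                                   # cov(Y_1:k-1, Y_k)
        var = b @ Sigma @ b + sigmas[k] ** 2                # var(Y_k)
        Sigma = np.block([[Sigma, cross[:, None]],
                          [cross[None, :], np.array([[var]])]])
    return Gamma, Sigma

def condition(Gamma, Sigma, A, B, yB, x):
    """Mean and covariance of p(Y_A | Y_B = yB, X = x)."""
    gain = Sigma[np.ix_(A, B)] @ np.linalg.solve(Sigma[np.ix_(B, B)], np.eye(len(B)))
    mean = Gamma[A] @ x + gain @ (yB - Gamma[B] @ x)
    cov = Sigma[np.ix_(A, A)] - gain @ Sigma[np.ix_(B, A)]
    return mean, cov

p = 3
betas = [np.zeros(0), np.array([0.8]), np.array([0.5, -0.3])]
alphas = [np.ones(p), np.zeros(p), np.zeros(p)]
sigmas = [1.0, 0.5, 0.5]
Gamma, Sigma = joint_from_factors(betas, alphas, sigmas)
print(condition(Gamma, Sigma, A=[2], B=[0], yB=np.array([1.0]), x=np.ones(p)))
```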
Modeling interactions between TF and its co-regulators

In order to assess the prevalence of gene-gene interaction in an LCL, we use a linear model with two-way interactions involving each TF and one other co-regulator. Given the estimated cGBN for p(Y | X), we determined the downstream effects of the TF concentration perturbation by deriving the conditional probability density p(Y_TFd | Y_TF, Y_-TFd, X) for the downstream genes Y_TFd, conditional on the TF expression Y_TF, the rest of the genes Y_-TFd, and the SNPs X, whose expected value is given as follows:

E[Y_TFd | Y_TF, Y_-TFd, X] = Θ Y_TF + Λ Y_-TFd + Ψ X.    (3)

Then, we identified as differentially expressed the genes in Y_TFd whose corresponding entries in the downstream effect sizes Θ exceed 0.15. To evaluate whether the discrepancy in the differentially expressed genes between the TF knockdown and our approach can be explained by gene-gene interactions, for each TF, we augment the linear model in Eq. (3) with interaction terms as follows:

E[Y_TFdiff | Y_TF, Y_-TFd, X] = Θ Y_TF + Λ Y_-TFd + Ψ X + Ω Φ(Y_TF),    (4)

where Y_TFdiff consists of the expression levels of the bound genes that are differentially expressed under the TF knockdown or SNP perturbation. The term Φ(Y_TF) = Y_TF × [Y_1, . . . , Y_k] in Eq. (4), where Y_m ∈ Y_-TFd for m = 1, . . . , k, represents the interaction between the given TF and its co-regulators. We include in Y_-TFd in Eq. (4) only those genes whose corresponding entries in Θ in Eq. (3) exceed 0.15. We fit this model using a least-squared-error criterion with L1 regularization. We used different regularization parameters for the three sets of parameters corresponding to [Y_TF, Y_-TFd], X, and Φ(Y_TF). The optimal regularization parameters were determined by cross-validation.

GO functional annotation

We performed GO enrichment analysis on the set of functionally validated target genes in order to find the biological functions controlled by each TF. For each TF, we performed Fisher's exact test to determine a set of significant GO terms in Biological Process (BP), using the 'runTest' function of the R package 'topGO' [53] and the universe of genes in the R package 'org.Hs.eg.db', which contains the mapping from genes to GO terms in human [54]. The p-values were adjusted for multiple testing with an FDR of 5% [55]. For the enrichment analysis of GO slim categories, we performed the same procedure as described above, using the generic GO slim file developed by the GO Consortium [56].

Identifying TF binding sites with PWMs

In order to pinpoint TF binding sites within the bound regions of functional target genes validated under perturbations of regulatory sequences, we scanned the bound regions that overlap with the promoter sequences located within 2,000bp from the transcription start sites of those target genes, using PWMs from the TRANSFAC and JASPAR databases [36,37]. For many of the TFs, multiple PWMs were available, each derived from a different data source (e.g., SELEX, ChIP-seq, DNA binding arrays, protein binding arrays, and 3D-structure-based energy calculations) or compiled from the literature and individual genomic sites. We computed the PWM score of a motif sequence as an average over the scores from the PWMs from different data sources, after selecting a single PWM from each data source as follows. For PWMs derived from ChIP-seq data, we used the ones from LCL ChIP-seq data. If PWMs from LCL ChIP-seq were not available, we used the ones from another normal cell line. For PWMs from SELEX, we selected the ones from the most recent SELEX experiment, including both homodimer and heterodimer cases. For PWMs compiled from many genomic sites or from the literature, we selected a factor-specific PWM over a family-specific PWM, a PWM derived from human data over one derived from non-human data, and a PWM constructed from a larger number of genomic sites. We added pseudocounts to the entries in the PWMs and re-normalized the PWMs. The pseudocounts were set to 0.004 for 'C' and 'G' and 0.006 for 'A' and 'T' if the PWM was available as a normalized probability matrix. For PWMs with unnormalized counts, the pseudocounts were set to 0.04 for 'C' and 'G' and 0.06 for 'A' and 'T' if the PWM was derived from a large number of binding sites (e.g., PWMs obtained through SELEX, DNA binding array data, or 3D-structure-based energy calculations), and to 0.4 for 'C' and 'G' and 0.6 for 'A' and 'T' if the PWM was derived from a small number of binding sites (e.g., PWMs compiled from individual genomic binding sites or from the literature).
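As an illustration of the pseudocount step just described, the sketch below adds the stated base-dependent pseudocounts to a toy probability-matrix PWM and re-normalizes; the PWM values themselves are placeholders.

```python
# Sketch: pseudocounts for a normalized probability-matrix PWM
# (0.004 for C/G, 0.006 for A/T, per the text), followed by re-normalization.
import numpy as np

pseudo = np.array([0.006, 0.004, 0.004, 0.006])        # A, C, G, T
pwm = np.array([[0.70, 0.10, 0.10, 0.10],              # positions x bases (toy)
                [0.05, 0.85, 0.05, 0.05],
                [0.00, 0.00, 1.00, 0.00]])

pwm_pc = pwm + pseudo                                   # add pseudocounts
pwm_pc /= pwm_pc.sum(axis=1, keepdims=True)             # re-normalize each position
print(np.round(pwm_pc, 3))
```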
A position-specific scoring matrix (PSSM) was then constructed from the resulting PWMs using background nucleotide frequencies with 40% GC content. We assessed the significance of motif matches at α = 0.001, based on the null distribution obtained by enumerating and scoring all possible sub-sequences of motif length via dynamic programming. For motifs with length greater than 11, we used an approximate null distribution obtained by binning the scores. All computations were made using the Biopython package [57]. We quantify the effects of SNPs on TF binding affinities as score deltas, defined as the differences in TFBS motif scores between two motif matching sequences that are identical except at the SNP locus.

(Figure 10 caption, panels A and B: The score delta distributions are compared between cis eQTLs and the other SNPs in the bound promoter regions of functionally validated target genes, in terms of (A) the mean and standard deviation of score deltas across all TFs and (B) the mean score deltas for each TF, where score deltas were averaged over all motif matching sequences containing SNPs or cis eQTLs for the given TF.)
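A sketch of the null-distribution computation described in the Methods above: per-position score distributions under a 40% GC background are convolved by dynamic programming with score binning, and the score cutoff at α = 0.001 is read off the upper tail. The toy PSSM and bin width are assumptions; the original analysis used Biopython [57].

```python
# Sketch: exact null distribution of PSSM match scores under a background
# with 40% GC content, via dynamic programming over positions.
from collections import defaultdict

bg = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}          # 40% GC background
pssm = [{"A": 1.2, "C": -1.0, "G": -1.0, "T": -0.5},   # toy log-odds columns
        {"A": -1.0, "C": 1.5, "G": -0.8, "T": -1.0},
        {"A": -0.9, "C": -1.0, "G": 1.4, "T": -1.0}]

def null_distribution(pssm, bg, bin_width=0.01):
    dist = defaultdict(float)
    dist[0] = 1.0                                       # P(score) over binned scores
    for col in pssm:                                    # convolve one position at a time
        nxt = defaultdict(float)
        for s, p in dist.items():
            for base, q in bg.items():
                nxt[s + round(col[base] / bin_width)] += p * q
        dist = nxt
    return dist

def score_threshold(pssm, bg, alpha=0.001, bin_width=0.01):
    dist = null_distribution(pssm, bg, bin_width)
    tail = 0.0
    for s in sorted(dist, reverse=True):                # accumulate the upper tail
        tail += dist[s]
        if tail > alpha:
            return (s + 1) * bin_width                  # smallest cutoff with tail <= alpha
    return min(dist) * bin_width

print(f"score cutoff at alpha=0.001: {score_threshold(pssm, bg):.2f}")
```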
Development of a Mass Sensitive Quartz Crystal Microbalance (QCM)-Based DNA Biosensor Using a 50 MHz Electronic Oscillator Circuit

This work deals with the design of a high-sensitivity DNA sequence detector using a 50 MHz quartz crystal microbalance (QCM) electronic oscillator circuit. The oscillator circuitry is based on the Miller topology, which is able to work in damping media. Calibration and an experimental study of the frequency noise are carried out, finding that the designed sensor has a resolution of 7.1 ng/cm2 in dynamic conditions (with circulation of liquid). The oscillator is then tested as a DNA biosensor. The results show that the system is able to detect the presence of complementary target DNAs in a solution with high selectivity and sensitivity. DNA target concentrations higher than 50 ng/mL can be detected.

Introduction

Biosensors are small devices which utilize biological reactions for detecting target analytes [1-3]. Such devices intimately couple a biological recognition element (interacting with the target analyte) with a physical transducer that translates the bio-recognition event into a useful electrical signal. Common transducing elements, including optical, electrochemical or mass-sensitive devices, generate light, current or frequency signals, respectively. There are two types of biosensors, depending on the nature of the recognition event. Bio-affinity devices rely on the selective binding of the target analyte to a surface-confined ligand partner (e.g., antibody, oligonucleotide) [4]. In contrast, in bio-catalytic devices, an immobilized enzyme is used for recognizing the target substrate. Specific DNA sequence detection is a major issue in the life sciences. An important advance in this field was made during the last two decades with the design of DNA biosensors. They are more efficient in comparison to DNA hybridization tests performed on membranes, which are less sensitive, less selective, time consuming and not time resolved. DNA biosensors are now intensely developed for diagnostic applications, environmental monitoring and food controls. DNA detection biosensors are based on the hybridization process of combining complementary, single-stranded DNA into a single molecule. Quartz crystal microbalance (QCM) oscillator circuits are useful for designing DNA biosensors [5]. A QCM sensor typically consists of an oscillator circuit containing a thin AT-cut quartz disc with circular electrodes on both sides of the quartz. Due to the piezoelectric properties of the quartz material, an alternating voltage between these electrodes leads to a mechanical oscillation of the crystal. These oscillations are generally very stable due to the high quality of the quartz (high Q factor). If a mass is adsorbed onto or placed on the quartz crystal surface, the frequency of oscillation changes in proportion to the amount of mass. Therefore, these devices can be used as high-sensitivity microbalances intended to measure mass changes in the nanogram range, by coating the crystal with a material which is selective towards the species of interest. The quartz crystal acts as a signal transducer, converting mass changes due to the hybridization process into frequency changes. One of the main advantages of this device is the ability to control a QCM's selectivity by applying different coatings, which makes this sensor type extremely versatile.
The design of crystal-controlled oscillators used as QCM sensors in fluids is a difficult task due to the wide dynamic range of the resonator resistance that they must support during operation [6]. The piezoelectric quartz experiences a strong reduction of its quality factor due to the increase of the losses (R_Q) caused by the liquid. Figure 1 shows the BVD equivalent circuit of a piezoelectric resonator, modified by Martin and Granstaff [7] for a quartz crystal loaded by the mass of a material layer and a liquid. Standard oscillator designs, such as Pierce or Colpitts, do not work well since, although they provide great frequency stability and low phase noise, their gain and phase are very sensitive to the losses of the resonator [8]. A good design of a sensor oscillator for liquid media will maintain the loop gain and phase necessary for oscillation (Barkhausen condition) over a wide range of values of the loss resistance of the quartz.

This work deals with the design and implementation of a high-frequency QCM electronic oscillator circuit for use as a high-sensitivity DNA biosensor. The QCM oscillator sensor is able to detect the presence of complementary DNAs in a solution that match the sequence on a given strand, as a function of the changes in the output frequency of the oscillator. The design is adapted so that the Barkhausen conditions are satisfied even when the quartz is immersed in liquid media. An experimental characterization of the frequency stability of the oscillator is carried out, in order to determine the resolution of the sensor. The behavior of the oscillator as a DNA biosensor is demonstrated by monitoring its frequency during the immobilization of probe DNA on the gold-covered quartz surface of the QCM oscillator and during the hybridization of complementary target DNA present in a solution. Finally, a calibration of the DNA biosensor with buffer solutions of different target DNA concentrations is carried out, and the minimum detectable concentration of DNA is determined.

Experimental Section

A home-made quartz crystal electronic oscillator circuit was designed to drive the quartz at its resonance frequency and use it as a QCM sensor in liquid media. The Miller oscillator topology was selected, and a working frequency of 50 MHz was chosen in order to have a high-sensitivity QCM sensor system [9]. The Miller topology is a high-frequency stable topology that allows designing sensors of high resolution [10-12]. This topology, despite not being the most adequate topology for obtaining the best frequency stability [13], experimentally showed a good capacity to work under strong damping [14,15]. Miller oscillators solve the problems that standard oscillators, such as Colpitts or Clapp, have when working in liquid, thanks to their ability to support a wide range of values of the resonator resistance due to the damping. Once the configuration was decided, the design and simulation of the circuit were done with the help of PSpice. To model the quartz resonator, the experimental values of the parameters of the equivalent electric circuit in distilled water were used; they are summarized in Table 1. The Burr-Brown OPA660 transconductance amplifier was used as the active device [16]. To determine the values of the components, the design considerations for this topology in [11,12] were followed. The OTA was biased using a resistance of 270 Ω to obtain a high gain.
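For illustration, the BVD equivalent circuit of Figure 1 can be evaluated numerically to locate the series resonance; the component values below are generic placeholders, not the measured parameters of Table 1.

```python
# Sketch: impedance of a Butterworth-Van Dyke (BVD) resonator model
# (motional R-L-C branch in parallel with the shunt capacitance C0).
import numpy as np

Rq, Lq, Cq, C0 = 80.0, 8.0e-3, 1.27e-15, 5.0e-12   # illustrative values

def bvd_impedance(f):
    w = 2 * np.pi * f
    z_motional = Rq + 1j * w * Lq + 1 / (1j * w * Cq)
    z_shunt = 1 / (1j * w * C0)
    return z_motional * z_shunt / (z_motional + z_shunt)

f = np.linspace(49.5e6, 50.5e6, 20001)
fs = f[np.argmin(np.abs(bvd_impedance(f)))]        # minimum |Z| near series resonance
print(f"series resonance near {fs/1e6:.4f} MHz")
```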
In Figure 2, a simplified scheme of the designed oscillator is shown. Once the oscillator circuit was designed and simulated with PSpice, it was implemented on a printed circuit board (PCB). A temperature sensor was also incorporated to externally control the temperature of the circuit by means of a WATLOW heater monitor with stability better than 0.1 °C. The quartz crystal resonator was connected by silver conducting paste, through wires, to a BNC adaptor, which permits the connection of the quartz to the oscillator circuitry. An experimental cell was developed: the crystal was mounted between two O-ring seals inserted in a plexiglass cell [17]. The electronic oscillator circuit was experimentally characterized in the measurement environment. The output frequency of the oscillator was connected to a Fluke PM6685 frequency counter controlled by a lab-made software program that allows storing the frequency samples. The temperature of the electronic circuit was controlled by a Watlow regulator. Experiments were made with the plexiglass cell (and therefore the quartz and its environment) placed inside a BMT Climacell climatic chamber, which allows maintaining constant ambient temperature and humidity. A micropump (Pharmacia, P1) was used to provide a constant flow of liquid circulating over the surface of the crystal. The flow rate was chosen low (50 μL/min) to minimize noise in the quartz.

In order to characterize the designed system, a study of the frequency stability of the oscillator was carried out by means of the Allan deviation σ_y(τ) [18]. The Allan deviation characterization is commonly used because it allows the determination of the stability of an oscillator over a time interval τ for a given application. The oscillator detection limit, i.e., the smallest frequency deviation that can be detected in the presence of noise, is equal to [6] Δf_noise(τ) = σ_y(τ)·f_0, where f_0 is the nominal frequency. In QCM applications, the mass resolution can be obtained from the relationship between the detection limit and the Sauerbrey sensitivity [17] of the sensor: Resolution = Δf_noise/k, where k = 2.26·10^-6·f_0^2 (Hz·g^-1·cm^2) is the mass sensitivity coefficient, known as the Sauerbrey coefficient. A detection limit of 2 Hz was calculated by Allan deviation in static conditions (water, without any circulation). Therefore, the designed system has a mass resolution of about 357 pg/cm2.

A disulfide-DNA biosensor was designed using the QCM oscillator by immobilization of a 20-base DNA-disulfide probe A in NaCl solution on the gold quartz surface. The immobilization process is illustrated in Figure 3(1). The solution for DNA immobilization was 0.5 M NaCl, referred to as "NaCl". Immobilization of the recognition element on the surface of the transducer, in our case the DNA-disulfide probe A, is a key stage in the construction of a biosensor. The covalent attachment of the probe DNA to the gold electrode of the quartz takes place because the probe DNA contains sulfur (S), which forms the bond between the gold of the electrode and the DNA. To carry out the immobilization, the DNA is added to the NaCl solution and a constant 50 μL/min flow of this solution is maintained in the plexiglass cell in which the quartz resonator is mounted. After the immobilization, DNA-disulfide probe A was hybridized in HEPES solution with a complementary DNA target A (2). Probe A and target A are 20-base complementary sequences.
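A sketch of the stability analysis just described: the Allan deviation is estimated from a stream of frequency-counter samples and converted into a mass resolution through the Sauerbrey coefficient. The simulated noise level is illustrative only.

```python
# Sketch: Allan deviation of a frequency record and the resulting QCM
# detection limit and mass resolution.
import numpy as np

f0 = 50e6                                      # nominal frequency (Hz)
rng = np.random.default_rng(6)
freq = f0 + rng.normal(0, 20, 10000)           # counter samples (toy noise)

def allan_deviation(f_samples, f_nominal):
    y = f_samples / f_nominal                  # fractional frequency
    return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

sigma_y = allan_deviation(freq, f0)
df_noise = sigma_y * f0                        # detection limit (Hz)
k = 2.26e-6 * f0**2                            # Sauerbrey coefficient (Hz g^-1 cm^2)
print(f"detection limit = {df_noise:.1f} Hz, "
      f"resolution = {df_noise / k * 1e9:.2f} ng/cm^2")
```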
Hybridization experiments were performed in 0.05 M HEPES with 0.5 M NaCl, adjusted to pH 7.2 with drops of 1 M NaOH, referred to as "HEPES" [19]. The dehybridization solution was 0.5 M NaOH with 3 M NaCl, referred to as "NaOH".

Results and Discussion

The DNA biosensor oscillator frequency changes recorded during successive circulation of DNA solutions are presented in Figure 4. There is a first Δf_A = −1,560 Hz frequency change during circulation of a 20 μg/mL DNA-disulfide NaCl solution, attributed to chemical adsorption of the DNA-disulfide probe A on the gold surface of the quartz (1). The DNA-disulfide adsorption time Δt is equal to 6,120 s. The next frequency shift is attributed to the difference in viscosity and density between the NaCl and HEPES solutions. There is no frequency shift during circulation of a 20 μg/mL non-complementary DNA B HEPES solution, indicating that there is no hybridization or non-specific adsorption of the non-complementary DNA strands B (2).

The performance of the sensor was compared with that of the DNA-QCM 27 MHz oscillator biosensor designed and studied in previous works [5]. With respect to the frequency stability, a detection limit of 2 Hz is calculated by Allan deviation for the best result in static conditions (water, without any circulation). Therefore, the designed system has a mass resolution of about 357 pg/cm2, against the 665 pg/cm2 determined for the 27 MHz oscillator in the same conditions [6]. On the other hand, in dynamic conditions the detection limit worsens to 40 Hz with the liquid circulation. Therefore, the 50 MHz DNA biosensor has a mass resolution of 7.1 ng/cm2. In the case of the 27 MHz oscillator, a detection limit of 20 Hz and a mass resolution of 13.1 ng/cm2 are determined in dynamic conditions. Hence, in conclusion, the 50 MHz system improves the mass resolution in both static and dynamic conditions (with or without liquid circulation). Regarding the possible influence of factors such as changes of detection buffer or temperature, exactly the same experimental conditions were used for the 27 MHz and the 50 MHz systems in terms of buffers, temperature, probes and targets. For the buffer changes, the DNA detection is done in the same buffer (HEPES) before and after the addition of the DNA target, so the influence of the buffer is cancelled. For the temperature, everything is thermostated, even the electronic part, and AT-cut quartz crystals were used. Probe A can also be dehybridized by circulation of a NaOH solution and hybridized again with the complementary DNA target A.

Finally, a calibration of the DNA-QCM 50 MHz oscillator biosensor with buffer solutions of different target DNA concentrations was carried out. In Figure 5 the obtained frequency curves are shown. It was found that the designed oscillator is able to detect DNA target A concentrations higher than 50 ng/mL. A difference in the frequency change during DNA target detection can be observed between Figures 4 and 5. This signal difference is due to the fact that the quartz resonator is not the same in the two figures, and the DNA probe quantity varies from one quartz to another (owing to the limited reproducibility of the probe immobilization).

Conclusions

A high-sensitivity DNA biosensor using a QCM electronic oscillator circuit was designed. The oscillator circuitry was adapted to satisfy the Barkhausen condition even with the quartz immersed in a liquid medium and therefore presenting very low quality factors. A study of the frequency noise of the developed QCM system was carried out in order to determine the resolution of the sensor.
A mass resolution of 7.1 ng/cm² was found in dynamic conditions (with liquid circulation). The behavior of the QCM oscillator as a DNA biosensor was demonstrated. Results show that the system is able to detect the presence of complementary target DNAs in a solution with high selectivity and sensitivity. DNA target concentrations higher than 50 ng/mL can be detected.
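As a numerical check on the resolution figures reported above, the following minimal sketch computes the Allan deviation from a series of frequency-counter samples and converts the resulting detection limit into a mass resolution via the Sauerbrey coefficient. The sample data and the averaging factor are assumptions for illustration; only the formulas Δf_noise(τ) = σ_y(τ)·f_0 and Resolution = Δf_noise/k with k = 2.26·10⁻⁶·f_0² come from the text.

```python
import numpy as np

def allan_deviation(freq_samples, f0, m):
    """Classical (non-overlapping) Allan deviation at tau = m * tau0,
    from frequency-counter readings taken with gate time tau0."""
    y = (np.asarray(freq_samples, dtype=float) - f0) / f0   # fractional frequency
    n_blocks = len(y) // m
    y_bar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(y_bar) ** 2))

f0 = 50e6                                    # nominal 50 MHz QCM oscillator
rng = np.random.default_rng(0)
samples = f0 + rng.normal(0.0, 2.0, 10_000)  # hypothetical counter readings (Hz)
sigma_y = allan_deviation(samples, f0, m=10)

df_noise = sigma_y * f0                      # detection limit (Hz)
k = 2.26e-6 * f0 ** 2                        # Sauerbrey coefficient (Hz g^-1 cm^2)
resolution = df_noise / k                    # mass resolution (g/cm^2)

# With the 2 Hz detection limit reported in the text, df_noise / k gives
# ~3.5e-10 g/cm^2, i.e. a few hundred pg/cm^2, consistent with the static case.
print(f"sigma_y = {sigma_y:.2e}, detection limit = {df_noise:.2f} Hz")
print(f"mass resolution = {resolution * 1e12:.0f} pg/cm^2")
```

The same conversion turns the 40 Hz detection limit measured in dynamic conditions into roughly 7 ng/cm², matching the value quoted above.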
An Artificial Management Platform Based on Deep Learning Using Cloud Computing for Smart Cities

In today's world, smart city management uses internet-connected sensors, cameras and mobile devices. These devices, called the Internet of Things (IoTs), generate large amounts of data. A big-data-based approach is needed to store these data appropriately and to provide real-time access to them. In smart city management applications that use many different sources of information, traditional machine learning methods can be inadequate for classifying large data and extracting meaning. The deep learning approach is widely used today to solve such problems. In this study, a cloud-computing-based architectural approach is proposed that enables data mining using deep learning on the big data produced by IoTs.

Introduction

A smart city is a concept based on the principle of using technological infrastructure in city management for the purposes of administration, planning, analysis, improving the quality of service and offering new services [1]. In this sense, smart cities include smart management, smart transportation, smart technologies, smart economy, smart health and so on, as in "Fig. 1". In today's world, many countries are producing new projects for smart cities in metropolitan areas and allocating huge budgets for this purpose. These large budgets are the biggest disadvantage of smart cities, alongside their advantages [2]. In smart city applications, it is necessary to collect, evaluate and analyze the data obtained from many sources of information and sensors, to interpret the results, and to carry out the resulting actions [3]. The action to be carried out can be cost reduction, taking new managerial decisions, improving the quality of service and so on. For instance, a smart traffic application gives real-time road information to drivers by means of mobile applications. This information may indicate real-time situations such as instant traffic density on the route and, if any, maintenance work or accidents. The input data of the system created for this purpose consist of traffic lights, city surveillance and cameras. The system has outputs such as informing drivers, dynamic operation of traffic lights and redirections [4].

Fig. 1. Smart city applications [5]

Smart city applications, as in this example, generally require algorithms with different types of data inputs and outputs that will perform real-time learning on this big data. Traditional machine learning methods are not sufficient for smart city applications in terms of processing power and memory consumption. Deep Learning (DP) is successfully applied in image recognition, object tracking, analysis and interpretation applications on big and multilayer data [5,6]. Deep Belief Networks, Convolutional Neural Networks (CNNs) and Deep Boltzmann Machines are widely used for DP applications in the literature. However, CNNs are more commonly used owing to their easier training, need for fewer parameters, ease of implementation and higher success rates compared to the other methods.
The biggest reasons for the popularity of deep learning networks after 2011 are that the learning process is simpler and that the costs of graphics processing units (GPUs) are at very reasonable levels compared to other machine learning approaches. Deep learning algorithms running on GPUs provide speed-ups of tens of times compared to traditional central processing units (CPUs), depending on the GPU properties [8,9]. Apart from PCs and mobile devices, various sensors and electronic appliances are connected to the internet via Wi-Fi or mobile networks. Nowadays, the number of these devices, called the Internet of Things (IoT), has exceeded 1.2 billion [10]. IoTs collect information such as moisture, heat, carbon dioxide ratio, motion, speed, etc. through the sensors they carry. Thus, IoTs are widely used in remote and real-time applications such as smart building management, smart parking and smart monitoring. Even giant companies like Google are producing development boards and cloud-based software solutions for IoT, and IoTs are expected to have an even greater importance in our lives in the near future [11]. When the number of IoTs used in distributed architectures for different purposes, the amount of data they produce and the rate of data generation are taken into account, the resulting data is considered big data [12]. Before processing, these data must be stored by modeling them with a suitable architecture, inconsistent and unnecessary data must be cleared, and dimension reduction must be performed. Big data of this kind is usually neither regular nor consistent and therefore cannot be stored in an SQL-based system; it should mainly be stored in a cloud-based, distributed structure with a NoSQL-based database [13].

Big data is described in the literature by the concepts called the 5Vs. These 5V concepts can be associated with smart cities and IoTs as follows.

Variety: the variety of data. Data is neither hierarchical nor uniform when IoTs that include image, video, automation sources and many sensors are taken into account as input data in smart city applications.

Velocity and Volume: the growth rate and size of data. More than 1.2 billion IoTs worldwide and more than 200 million internet-connected cameras record 1.4 trillion hours of video per year [8].

Verification: the data obtained from many different sources do not have a hierarchical and homogeneous structure. It is necessary to filter out the data that are unnecessary or that disturb the hierarchical structure before learning from the data.

Value: the value created by the data, i.e., by the output obtained as a result of data analysis. A new smart city application, cost savings in the management of city resources and improving the quality of services offered to the public are examples.

When the literature on smart city applications is examined, two major technologies, IoTs and big data, come to the forefront. Paganelli et al. defined a web architecture for accessing remote IoTs to be used in smart cities; in their study, each IoT was represented by a unique identity, and along with the RESTful web architecture, the data from these IoTs were accessed via JSON APIs [13]. Dlodlo et al. described a cloud-based data storage architecture, giving information about IoT development environments such as Arduino and Raspberry Pi for applications to be used in smart cities [15].
Shah and Misra implemented a mobile application that performs remote monitoring by reading environmental variables such as moisture, temperature and CO2 through IoTs, for a smart environmental monitoring application aimed at reducing air pollution [16]. Sakhardande et al. proposed a structure that uses more than one IoT network, with power supply and Wi-Fi network, in order to perform monitoring even in disaster situations in smart cities [17]. Costa and Santos defined an architecture for the use of big data in smart cities; in their study, called Basis, they presented an architecture that uses HDFS, in which big data is stored with cloud-based Hadoop technology [14]. Horban proposed an architecture that performs data mining on big data for smart energy management, detailing the relationship between the concept of big data and smart cities [18]. Alshawish et al. proposed an architecture for the use of big data in smart city applications; the prominent suggestion of their study is a 6-step, reusable big data pipeline architecture for smart management [19].

Table 1. Literature review
Technology | Study
IoTs | RESTful framework for IoTs to be used in smart cities [13]
IoTs | Research on IoT platforms that can be used in smart cities [15]
IoTs | IoT-based approach for smart city condition monitoring [16]
IoTs | IoT-based framework for smart city disaster management [17]
Big Data | Big data structure for smart cities [14]
Big Data | Data mining analysis on big data for smart energy use [18]
Big Data | Big data applications in smart cities [19]

Two major technologies, big data and IoT, come to the forefront in the smart city literature summarized in Table 1. However, when today's needs for smart city applications are taken into account, deep learning appears to be the most appropriate machine learning approach for big data from IoTs, for real-time action decisions, analysis and the acquisition of valuable information. In addition to these two major technologies, the proposed system needs a cloud-based distributed architecture to handle the large data and produce results. In this study, an architecture using deep learning is proposed for mining valuable knowledge in smart city applications.

Proposed Approach

In this study, an approach based on deep learning that uses a CNN on the big data coming from all IoTs is proposed for smart city management. The study involves the steps given in "Fig. 2". A unique ID is assigned to each IoT; this is necessary to obtain the type, location, sensors and values of the relevant IoT in a RESTful architecture, as in [13].

The collected data should be stored in a distributed architecture, in accordance with the big data architecture, on a NoSQL-based cloud server. In the big data architecture, data should be stored on distributed server clusters and subjected to the map-reduce process without further preprocessing [20]. The outputs obtained after the map-reduce processing create the inputs for the deep learning training process.
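To make the map-reduce step of the pipeline concrete, here is a minimal, self-contained sketch in plain Python of how raw IoT records could be mapped to key-value pairs and reduced to per-sensor summaries. The record format and the aggregation function are assumptions for illustration; in the proposed architecture this would run distributed on a Hadoop/HDFS cluster rather than in a single process.

```python
from collections import defaultdict

# Hypothetical raw IoT records: (device_id, sensor_type, reading)
records = [
    ("iot-001", "temperature", 21.4),
    ("iot-002", "temperature", 23.1),
    ("iot-001", "co2", 412.0),
    ("iot-003", "co2", 398.5),
]

def map_phase(record):
    """Map: split each record into a (key, value) pair keyed by sensor type."""
    _device_id, sensor_type, reading = record
    yield sensor_type, reading

def reduce_phase(key, values):
    """Reduce: collapse all values for one key into a concrete summary."""
    return key, sum(values) / len(values)

# Shuffle: group mapped pairs by key (handled by the framework in Hadoop)
grouped = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        grouped[key].append(value)

summaries = dict(reduce_phase(k, v) for k, v in grouped.items())
print(summaries)   # {'temperature': 22.25, 'co2': 405.25}
```

Consistent per-key summaries of this kind are what would feed the CNN training stage described next.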
Internet of Things (IoTs)

Electronic devices such as cameras, sensors and measuring devices are regarded as Things provided that they have at least one sensor, a unique identity and an internet connection. A device that fulfills these three conditions is accessible from all over the world, and it is therefore manageable. Nowadays, the IoT platform is supported by many corporate development boards and hardware and software offerings. An IoT developer kit for the Google cloud-supported platform is shown in "Fig. 3" [21]. The developer kit offers Wi-Fi connectivity and sensors such as accelerometer, temperature, light, rotary and distance, and is ready for use via the Google Cloud platform.

Big Data

Given that there are 1.2 billion IoTs around the world, the data they generate qualifies as big data in terms of its variety, rate and size, and in terms of the value it creates. The basic properties of big data are explained below, following the big data life cycle shown in "Fig. 4".

Hadoop Distributed File System (HDFS): the file system, consisting of distributed server clusters, for big data.

Map: map-reduce processing was announced by Google in 2004 [23]. In the map phase, the data received from the host node is divided into smaller segments and distributed to the child nodes.

Reduce: in this phase, concrete analysis results are obtained from the data produced in the map phase.

Hadoop: a project that performs the map-reduce process on the distributed file system [24].

Deep Learning

Deep learning, a machine learning method and a special form of Artificial Neural Networks (ANNs), is successfully applied in applications such as information retrieval [25], image recognition, object tracking and language processing [26,27]. Among deep learning algorithms, CNNs are preferred for reasons such as the simplicity of the training process, fast execution in the test phase and ease of application. In general, CNNs consist of five stages.

Input Layer: it creates the input data of the system. In the proposed method, the outputs of the big data map-reduce process form the CNN input for smart city management. Using the outputs of the map-reduce process for deep learning training ensures that the training data contain no inconsistencies and that a model characterizing the learning problem very well will emerge.

Convolutional Layer: also known as subsampling. It is the process of subsampling by sliding a smaller kernel matrix over the input data, as given in (1).

Pooling Layer: the feature extraction step. Feature selection is performed on the outputs of the previous step, again using a kernel matrix. For instance, if a 2x2 kernel matrix is used, the max, min or mean value is selected as the feature from a total of 4 values. Thus, a feature vector is obtained, as given in (2) and (3).
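The subsampling and pooling operations referenced above as (1)-(3) can be sketched in a few lines of NumPy. The input, kernel and pooling sizes below are toy assumptions for illustration, not the configuration of the proposed platform.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution: slide the kernel over the input and sum products."""
    kh, kw = kernel.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Max pooling: keep the maximum of each non-overlapping size x size block."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[: h * size, : w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.arange(36, dtype=float).reshape(6, 6)    # toy single-channel input
kernel = np.array([[1.0, 0.0], [0.0, 1.0]])     # toy 2x2 kernel
features = max_pool(np.maximum(conv2d(x, kernel), 0.0))   # conv -> ReLU -> pool
print(features)   # 2x2 feature map extracted from the 6x6 input
```

In a complete CNN, these stages would be followed by flattening and fully connected layers to produce the classification output.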
Cloud Computing

Cloud service providers underpin today's big-data-processing web applications. Cloud-based technology is used to cover both scenarios such as disaster recovery and the simultaneous serving of all clients. The major advantages of cloud-based technology, shown in "Fig. 5", are listed below [17,21,24].

Disaster Recovery: one or more of the cloud services provide continuous protection, even if a node becomes inoperable [17].

Load Balancing: load balancing is provided for all clients (monitoring) or IoTs interacting with the system.

Distributed Computing: the data-processing load is distributed over the appropriate services; each incoming request is met by the most appropriate service.

Backup: an affordable solution for the backup of big data.

Performance: new nodes can easily be added to or removed from the network, depending on the performance required.

Conclusions

Smart cities comprise a set of concepts in which smart applications are used for managing city resources, improving the quality of service, obtaining valuable information that affects managerial decisions, and reducing costs. They consist of sub-elements such as smart traffic, smart health, smart building, smart monitoring and smart infrastructure. IoTs are the source of information in smart city applications. Because they are manageable, thanks to their sensors and internet connections, they provide an unmanned source of information in smart city applications. However, the number of IoTs, the amount of data produced and the rate of data generation all require the use of a big-data-based structure for IoTs.

Traditional machine learning algorithms are inadequate, in terms of workload and speed, for data mining and supervised/unsupervised learning in such applications, where there are many data inputs and the data type is big data. Especially after 2011, deep learning has been successfully applied in many areas such as object recognition, scene interpretation, language processing and driverless vehicles, owing to reduced GPU costs, working speeds and ease of implementation.

In this study, a deep learning based approach was proposed with the purpose of performing artificial management in smart cities where IoTs constitute the information sources. The proposed approach has four steps: the raw data received from IoTs are stored in a distributed architecture in accordance with the big data architecture in the first step; a consistent and well-modeled data set to be used in the training process is obtained by performing the map and reduce processes on the big data in the second step; and CNN-based deep learning training is performed in the third step. The final step is that the whole system works in a distributed cloud architecture: incoming requests are met by the load balancer and forwarded to the most appropriate node for processing. This approach provides expandable, modular and parallel operation. Thus, in this study, an approach using deep learning was proposed with the aim of obtaining valuable information by performing data mining in smart city management.

Fig. 2. Block diagram of the proposed approach
Fig. 4. Big data map-reduce process [22]
ErMiao San Inhibits Angiogenesis in Rheumatoid Arthritis by Suppressing JAK/STAT Signaling Pathways

ErMiao San (EMS) is composed of the Cortex Phellodendri chinensis and Atractylodes lancea, and in traditional Chinese medicine it has the function of eliminating heat and excreting dampness in damp-heat syndrome. Previous reports indicate that EMS possesses anti-inflammatory activity; however, its action on angiogenesis in rheumatoid arthritis (RA) has not been clarified. The present study aims to determine the antiangiogenic activity of EMS in collagen-induced arthritis (CIA) mice and in various angiogenesis models. Our data showed that EMS (5 g/kg) markedly reduced the immature blood vessels in synovial membrane tissues of inflamed joints from CIA mice. It also inhibited vascular endothelial growth factor (VEGF)-induced microvessel sprout formation ex vivo. Meanwhile, EMS suppressed VEGF-induced migration, invasion, adhesion, and tube formation of human umbilical vein endothelial cells (HUVECs). Moreover, EMS significantly reduced the expression of angiogenic activators including interleukin (IL)-1β, IL-6, and tumor necrosis factor-alpha (TNF-α) in the synovium of CIA mice. More interestingly, EMS blocked the VEGF-induced autophosphorylation of JAK1, STAT1, and STAT6 in CIA mice and VEGF-induced HUVECs. These findings suggest for the first time that EMS exerts an antiangiogenic effect in RA in vivo, ex vivo, and in vitro by interrupting JAK/STAT activation.

Introduction

Rheumatoid arthritis (RA) is a progressive, systemic, autoimmune disease characterized by synovial inflammation, hyperplasia, pannus formation, and cartilage and bone destruction [1-3], with the incidence in men and women rising steeply after the age of 45 years [4]; it seriously compromises human health. Pannus is the major cause and basic pathology of joint destruction in RA, and angiogenesis is a key factor in generating and maintaining pannus [5]. Angiogenesis, the formation of new capillaries from preexisting vasculature, plays a critical role in the pathology of RA [6]. Approaches that target angiogenesis are part of a promising new era in the treatment of several conditions characterized by pathological angiogenesis, most importantly tumor growth in cancer and chronic inflammatory diseases such as RA [7]. Therefore, finding an effective method to inhibit angiogenesis could provide an effective treatment for RA.

Angiogenesis is a complex process regulated by angiogenic mediators, including growth factors, primarily vascular endothelial growth factor (VEGF) and hypoxia-inducible factors (HIFs), as well as proinflammatory cytokines, proteases, and others [6]. Among them, VEGF is the key regulator of angiogenesis and has been implicated in various biological activities, such as stimulating endothelial cell proliferation, migration, and formation of blood vessels [8]. The Janus kinases (JAKs) are part of an important signaling pathway that influences cellular responses to inflammation. JAK1 is a member of the JAK family, which constitutes multiple signal transduction pathways together with multiple members of the STAT family. JAK/STAT signaling is activated by ligands binding to receptors on the cell surface, inducing receptor dimerization and mutual phosphorylation. Processes such as angiogenesis, invasion, and metastasis involve the activation of JAK/STAT signaling pathways.
Activation of JAK/STAT induces angiogenesis by activating the transcription of various angiogenesis-associated factors such as VEGF.

RA, as a common inflammatory disease of the immune system, belongs to the category of "Bi syndrome" in traditional Chinese medicine (TCM). TCM has long been used in treating RA and has attracted much attention for its potential curative effect and few side effects. ErMiao San (EMS), formerly known as CangZhu San, was first described in the book Effective Formulae Handed Down for Generations. In the Jin and Yuan dynasties, it was renamed EMS for the first time in the "Danxi Xinfa". EMS is made up of two Chinese herbs, Cortex Phellodendri chinensis and Atractylodes lancea, and has the function of eliminating heat and excreting dampness. It is commonly used in the treatment of RA, scrotal eczema, vaginitis, and other conditions. Previous studies found that EMS has anti-inflammatory activity, inhibiting tumor necrosis factor alpha (TNF-α) [9] and interleukin (IL)-1β [10]. Therefore, we investigated the antiangiogenic activity of EMS in collagen-induced arthritis (CIA) mice and in various angiogenesis models, and we also explored its possible mechanism of action in relation to the VEGF-induced JAK/STAT-mediated signaling pathway.

Mouse IL-1β, mouse IL-6, and mouse TNF-α enzyme-linked immunosorbent assay (ELISA) kits were obtained from ABclonal (Boston, USA). Phosphorylated (p-) JAK1 antibody was bought from Cell Signaling Technology (Danvers, MA, USA); p-STAT6 antibody was purchased from LifeSpan BioSciences (California, USA); p-STAT1 antibody, JAK1 antibody, anti-STAT1, anti-STAT6, and anti-GAPDH antibodies were purchased from Abcam (Cambridge, UK).

Animals. Six-to-eight-week-old DBA/1 mice were obtained from Shanghai SLAC Laboratory Animal Co. Ltd (production license No: SCXK 2017-0005), and 130 g to 150 g SD rats (for the rat aortic ring assay) were obtained from Guangdong Medical Laboratory Animal Center (production license No: SCXK 2018-0002). All experimental protocols were approved by the Research Ethics Committee of Shenzhen Peking University–The Hong Kong University of Science and Technology Medical Center, in accordance with the National Institutes of Health Guidelines for the Care and Use of Laboratory Animals. All animals were treated in accordance with the guidelines and regulations for the use and care of animals of the Center for Laboratory Animal Care, Shenzhen Peking University–The Hong Kong University of Science and Technology Medical Center.

Preparation of EMS. An extract of EMS was prepared by decocting the dried prescription of herbs (15 g of Cortex Phellodendri chinensis and 15 g of Atractylodes lancea) with boiling water for 1 h; the extraction was repeated three times. The obtained suspension was separated by filtration, condensed to a concentration of 1.0 g/mL, and stored at 4°C before administration. EMS was given to mice at 5 g/kg/d.

Induction and Evaluation of CIA. We followed the methods of Liu et al. [11]. Eighteen male DBA/1 mice were randomly divided into 3 groups of equal size (n = 6): the normal control group (Control), the CIA group (CIA), and CIA mice treated with EMS at a 5 g/kg dosage. Except for the control group, all mice were intradermally injected with 100 μg bovine type II collagen in 0.05 M acetic acid emulsified in complete Freund's adjuvant (CFA) at the base of the tail to induce arthritis. On day 21, mice were boosted intraperitoneally with 100 μg type II collagen in incomplete Freund's adjuvant (IFA).
Mice were observed once every 1-2 days after primary immunization. Arthritis severity was evaluated by arthritis scoring, performed by two independent, blinded observers. All 4 limbs of each mouse were evaluated by visual assessment of inflammation or swelling. To determine disease severity, a macroscopic scale ranging from 0 to 4 was used per paw, where 0 = normal, 1 = detectable arthritis with erythema, 2 = substantial swelling and redness, 3 = severe swelling and redness from joint to digit, and 4 = maximal swelling and deformity with ankylosis. This resulted in a maximum possible score of 4 per limb. The disease score was expressed as the cumulative value for all paws: the arthritis score was the total of the scores for all 4 limbs, with a maximum possible arthritis score of 16 per mouse. Arthritis incidence values are the number of positives/total number in the group.

2.5. Treatment of CIA with EMS. Daily EMS (5 g/kg) treatment was started on day 21 and continued to day 40 after the first immunization. The treatment lasted 20 days with a frequency of once a day. The agents were orally administered in a volume of 10 mL/kg. The mice in the control and CIA groups were administered the same volume of saline.

Histology and Histologic Scoring. Mice were sacrificed by cervical dislocation on day 40 after the first immunization. Both hind knees were dissected and prepared into sections for staining with hematoxylin and eosin (H&E). All sections were randomized and evaluated by two trained observers; minor differences between observers were resolved by mutual agreement. The data were expressed as mean synovial vascularity (angiogenesis). The score was based on a scale of 0-3, as previously described [11].

Histochemical and Immunohistochemical Analysis. This assay was carried out as previously described [12]. In order to measure blood vessels in synovial membrane tissues of inflamed joints, a polyclonal antibody (rat antibody, dilution 1:50, Abcam, Cambridge, MA, USA) recognizing the CD31 panendothelial antigen was used for microvessel and single endothelial cell staining on 5 μm thick paraffin-embedded sections of knee joints by immunohistochemical analysis, as previously described [11]. For CD31 and alpha smooth muscle actin (αSMA, dilution 1:300) immunofluorescence studies, the sections were incubated overnight at 4°C. Then the sections were incubated for 1 h at room temperature with goat anti-rat secondary antibody (dilution 1:400) and goat anti-rabbit secondary antibody (dilution 1:400). The results are expressed as the mean region of interest, representing the percentage of area covered with positively stained cells per image at a magnification of 400x. The vessels were quantified using Image-Pro Plus 7 after dual staining for CD31 and αSMA.

Paraffin sections of joints were mounted on poly-L-lysine-coated slides. The paraffin sections were dewaxed by the routine method and incubated for 10 min with 3% H₂O₂. The sections were placed in 0.1% trypsinase at 37°C for 5-30 min for antigen retrieval. Each section was incubated with normal goat serum for 20 min at room temperature, and then with primary antibody against VEGF (dilution 1:50) or CD31 (dilution 1:50) overnight at 4°C. After incubation with Polymer Helper for 20 min at 37°C, sections were reacted with poly-HRP anti-rabbit IgG for 20 min at 37°C. The sections were then stained with 3,3′-diaminobenzidine and counterstained with hematoxylin.
Specimens were examined using a Leica image analyzer and analyzed by computer image analysis (Leica Microsystems Wetzlar GmbH, Wetzlar, Germany) in a blinded manner. To localize and identify areas with positively stained cells, 10 random digital images of the synovial membrane tissues were recorded per specimen, and quantitative analysis was performed according to color cell separation. The results are expressed as the mean region of interest, representing the percentage of area covered with positively stained cells per image at a magnification of 400x.

Ex Vivo Rat Aortic Ring Assay. This assay was performed as previously described [12]. Forty-eight-well plates were covered with 100 μl of Matrigel (5 mg/mL) and left to polymerize for 45 min at 37°C. Aortic rings were prepared from SD rats. Aortas were cut into sections 1-1.5 mm long, rinsed several times with PBS, placed on the Matrigel in the wells, and covered with an additional 50 μl of Matrigel for 45 min. The rings were cultured in 1 mL of H-DMEM medium with 10% fetal bovine serum (FBS), with or without VEGF (20 ng/mL), plus various concentrations of EMS (0.2, 0.4, and 0.8 mg/mL). The medium was changed every 3 days. After 9 days of incubation, the rings were fixed with 4% paraformaldehyde. The microvessel growth was photographed using phase contrast microscopy. The numbers and lengths of the vascular branches in the aortic rings were measured in Image-Pro Plus 7. All experiments were done in triplicate.

2.9. Cell Culture. We followed the methods of Liu et al. [11], carried out as previously described [12]. Human umbilical vein endothelial cells (HUVECs) were purchased from ScienCell Inc. (Carlsbad, CA, USA). The cells were cultured in sterile H-DMEM supplemented with 10% FBS, 100 U/mL penicillin, and 80 U/mL streptomycin and were maintained at 37°C in a humidified 5% CO₂ incubator. HUVECs were used at passage numbers 4 to 6 in this study.

Cell Viability Assay. We followed the methods of Liu et al. [11], carried out as previously described [12]. HUVECs (5 × 10⁴ cells/mL) were seeded in 96-well plates and incubated in sterile H-DMEM supplemented with 5% FBS, 100 U/mL penicillin, and 80 U/mL streptomycin for 24 h. Cells were then incubated with or without VEGF (20 ng/mL) and, 1 h later, with different concentrations of EMS (0.2, 0.4, and 0.8 mg/mL) for 24 h. Cell viability was determined by the MTT method according to the manufacturer's instructions. The experiments were carried out 3 times in triplicate measurements.

2.11. Scratch Healing Assay. We followed the methods of Liu et al. [11], carried out as previously described [12]. HUVECs were seeded in a 48-well plate (5 × 10⁴ cells per well). Cells were incubated overnight, yielding confluent monolayers for wounding. Wounds were made using a pipette tip, and photographs were taken immediately (time zero). After washing with PBS, 100 μl of sterile H-DMEM medium (supplemented with 5% FBS, 100 U/mL penicillin, and 80 U/mL streptomycin), with or without VEGF (20 ng/mL) and with or without different concentrations of EMS (0.2, 0.4, and 0.8 mg/mL), was added to the wells. 12 h later, photographs were taken again. The distance migrated by the cell monolayer to close the wounded area during this time was measured. Results were expressed as a migration index, that is, the distance migrated by EMS-treated cells relative to the distance migrated by control cells.
Experiments were carried out in triplicate and repeated at least three times.

Transwell Migration Assay. We followed the methods of Liu et al. [11], carried out as previously described [12]. The transwell migration assay was performed using a transwell chamber. HUVECs were seeded in the upper chambers (5 × 10⁴ cells/mL, 200 μl/well) in H-DMEM media containing EMS (0.2, 0.4, and 0.8 mg/mL). The bottom chamber of the apparatus contained 600 μl of culture medium with or without VEGF (20 ng/mL for HUVECs). The chamber was incubated at 37°C for 6 h. Nonmigrating cells in the upper chamber were carefully removed with cotton swabs, and cells on the lower surface of the membrane were fixed with 4% paraformaldehyde and stained with crystal violet solution. The total numbers of migrated cells were then counted in five randomly selected fields for each insert (magnification ×400) using optical microscopy. All experiments were done in triplicate.

Cell Invasion Assay. We followed the methods of Liu et al. [11], carried out as previously described [12]. The upper surfaces of the transwell inserts were precoated with Matrigel (1.25 mg/mL, 20 μl/well) for 45 min at 37°C. The bottom chamber of the apparatus contained 600 μl of culture medium with or without VEGF (20 ng/mL for HUVECs). HUVECs (1 × 10⁴ cells/well) were added to the upper chamber and incubated in normal growth medium with or without various concentrations of EMS (0.2, 0.4, and 0.8 mg/mL). After 14 h of incubation at 37°C and 5% CO₂, noninvasive cells on the upper membrane surfaces were removed by wiping with cotton swabs. The cells were fixed and stained with crystal violet solution. Cell invasion was quantified by counting the cells on the lower surface under a phase contrast microscope at 400x magnification. The average number of migrating cells was counted in five random fields. Three independent assays were performed.

2.14. Cell Adhesion Assay. We followed the methods of Liu et al. [11], carried out as previously described [12]. HUVECs (5 × 10⁴ cells/mL) were seeded in 96-well plates coated with fibronectin (FN, 20 mg/L) or bovine serum albumin (10 mg/mL, used as a negative control) and incubated in sterile H-DMEM medium (supplemented with 5% FBS, 100 U/mL penicillin, and 80 U/mL streptomycin) for 24 h. Cells were then incubated with or without VEGF (20 ng/mL) plus different concentrations of EMS (0.2, 0.4, and 0.8 mg/mL) for 24 h. After treatment, cells were washed twice with PBS, and 200 μl of sterile H-DMEM medium containing 5% FBS and 10% (v/v) MTT reagent was added to the cells. Absorbances at 490 nm were measured using a microplate reader. Results were expressed as cell adhesiveness. All experiments were done in triplicate.

Tube Formation Assay. To examine the inhibitory effect of EMS on HUVEC tube formation, we followed the methods of Liu et al. [11], carried out as previously described [12]. Matrigel (10 mg/mL) was plated in 48-well culture plates and allowed to polymerize at 37°C in humidified 5% CO₂ for 30 min. HUVECs were removed from culture, trypsinized, and resuspended in sterile H-DMEM medium (supplemented with 10% FBS, 100 U/mL penicillin, and 80 U/mL streptomycin). HUVECs (6 × 10⁴ cells/mL) were added to each chamber, followed by the addition of various concentrations of EMS (0.2, 0.4, and 0.8 mg/mL) with or without VEGF (20 ng/mL), and then incubated for 6 h at 37°C in 5% CO₂.
After incubation, the capillary-like tube formation in each well of the culture plates was photographed using phase contrast microscopy. The antiangiogenic activity of EMS on tube formation was quantified by counting the number of branch points. All experiments were done in triplicate.

Enzyme-Linked Immunosorbent Assay (ELISA). Sera from normal control, CIA, and EMS-treated CIA mice were collected. The expression levels of TNF-α, IL-1β, and IL-6 in sera were detected by ELISA according to the manufacturer's protocol. All experiments were done in triplicate.

Statistical Analysis. The SPSS version 11.0 software for Windows (SPSS Inc., IL, USA) was used for statistical analysis. Continuous variables were expressed as X ± s. Pathological scores were analyzed by nonparametric Kruskal-Wallis tests. Other data were analyzed using ANOVA followed by a post hoc test or Student's t-test. Differences were considered statistically significant when P was less than 0.05 (a minimal sketch of this analysis workflow is given at the end of this article).

EMS Prevents Arthritis Progression and Decreases Disease Severity of Arthritis in CIA Mice. The CIA model in DBA/1 mice was used to investigate the effect of EMS on arthritis. Daily EMS treatment was started on day 21 and continued to day 40 after the first immunization. Macroscopic evidence of arthritis such as erythema or swelling was marked in mice of the CIA group, while EMS significantly attenuated the clinical symptoms of arthritis in CIA mice (Figure 1(a)). Beyond arthritis incidence and clinical symptoms, histopathological evaluation of joint destruction also showed EMS to be highly effective. Inflammation, pannus, cartilage damage, and bone erosion in the groups receiving EMS (5 g/kg) were markedly reduced (Figures 1(b) and 1(e)). As shown in Figures 1(c) and 1(d), EMS attenuated the increase in arthritis score and arthritis incidence from day 23 after the first immunization in CIA mice. Taken together, these results indicate that systemic administration of EMS in mice suppresses the clinical and pathologic severity of CIA.

EMS Inhibits Angiogenesis in Synovium Tissue of Joints in CIA Mice. Angiogenesis has been considered a critical step in the progression of chronic arthritis, as well as an early determinant in the development of RA. To assess the potential mechanism through which EMS exerts its antiarthritic action, H&E staining was used to examine the presence of vascular structures. As shown in Figures 2(a) and 2(b), the extent of vascular formation was indeed inhibited by the 5 g/kg dose of EMS, as the mean synovial vascularity score inside the construct decreased compared with that of the CIA group. CD31, a marker of blood vessels, was stained in synovium tissue of joints by immunohistochemistry. As shown in Figures 2(c) and 2(d), a significant amount of CD31 staining was present in synovium tissue of inflamed joints from model mice, and this was markedly attenuated in EMS-treated mice. In order to further elucidate the mechanism of EMS action on angiogenesis in RA, the protein levels of angiogenic activators including VEGF were detected in joint synovia of mice by immunohistochemistry. EMS strongly reduced VEGF expression in the synovia of CIA mice (Figures 2(e) and 2(f)). In addition, EMS affected the morphology of the newly formed immature microvessels, identified by staining for CD31, which were not covered by αSMA-positive perivascular cells (Figures 2(g)-2(i)).
Quantitative evaluation further revealed that the 5 g/kg dose of EMS significantly reduced the number of immature CD31+/αSMA− vessels (Figure 1) but not mature CD31+/αSMA+ vessels, and decreased the total number of blood vessels (Figures 2(i) and 2(j)) in synovium tissues of inflamed joints of CIA mice, by immunofluorescence analysis. These results suggest that EMS has potent antiangiogenic activity in vivo.

EMS Suppresses VEGF-Induced Microvessel Sprout Formation Ex Vivo. The aortic ring assay, an ex vivo assay, mimics several stages of angiogenesis, including endothelial cell proliferation, migration, tube formation, microvessel branching, and perivascular recruitment. As shown in Figures 3(a)-3(d), VEGF significantly triggered endothelial cell migration and microvessel sprouting, leading to the formation of a complex microvessel network emerging from the aortic rings and growing outward; by contrast, EMS inhibited microvessel sprouting in a dose-dependent manner, suggesting that EMS suppressed VEGF-induced microvessel sprout formation ex vivo.

EMS Inhibited the Migration, Invasion, Adhesion, and Tube Formation of HUVECs. In angiogenesis, ECs migrate in response to several chemotactic factors [13]. Therefore, we attempted to elucidate whether EMS affects VEGF-induced EC migration and chemotaxis. As shown in Figures 4(a) and 4(e), EMS suppressed VEGF-induced wounding migration of HUVECs. Moreover, EMS strongly inhibited the VEGF-induced chemotaxis of HUVECs, determined using a transwell chamber (Figures 4(b) and 4(f)). To determine the effect of EMS on endothelial cell invasion, transwell chamber (precoated with Matrigel) experiments were also performed. EMS dose-dependently reduced the number of invasive cells migrating to the underside of the filters in the transwell chambers after VEGF stimulation (Figures 4(c) and 4(g)), indicating a potent inhibitory effect of EMS on VEGF-induced endothelial cell invasiveness. In addition, the inhibitory effect of EMS on the adhesiveness of HUVECs was determined by adhesion assay. EMS at concentrations ranging from 0.2 to 0.8 mg/mL significantly suppressed the VEGF-induced adhesiveness of HUVECs (data not shown). Moreover, a tube formation assay, which mimics angiogenesis, was performed to explore the effect of EMS on HUVEC tube formation. Robust and complete tubular-like structures of HUVECs were formed in the presence of VEGF; EMS disrupted this tube formation in a concentration-dependent manner compared to the VEGF-induced group (Figures 4(d) and 4(h)). We also examined whether the above suppressive effect of EMS was due to cytotoxicity. Confluent HUVECs were treated with EMS and/or VEGF for 24 h, and cytotoxicity was monitored by MTT assay. Our results showed that EMS did not exert any cytotoxic effects on HUVECs over 24 h under the experimental conditions used in the present study (Figure 4(i)), suggesting that EMS specifically suppresses the above functions of HUVECs.

EMS Reduces the Expression Levels of Proangiogenic Mediators. In order to investigate the mechanisms by which EMS suppresses angiogenesis in RA, we detected the protein expression levels of angiogenic activators including TNF-α, IL-1β, and IL-6 in sera of mice by ELISA. EMS significantly inhibited the expression of IL-1β (Figure 5(a)), IL-6 (Figure 5(b)), and TNF-α (Figure 5(c)) in sera of CIA mice.
Since activation of the JAK/STAT pathway plays a critical role in RA, we explored whether EMS could also impair angiogenesis by inhibiting the activation of JAK/STAT signaling pathways. EMS significantly decreased the levels of phosphorylation of JAK1, STAT1, and STAT6 (Figures 5(d)-5(i)) in CIA mice and VEGF-induced HUVECs.

Discussion

Angiogenesis refers to the formation of new blood vessels from existing capillaries or postcapillary venules, and it plays an important role in the pathological process of RA. Previous reports indicate that EMS possesses anti-inflammatory activity. It is of great importance to understand its actions and potential drug targets in order to effectively use EMS for the therapy of RA. To further clarify the mechanisms of EMS acting on RA, we here discovered and demonstrated the antiangiogenic effect of EMS in vivo, ex vivo, and in vitro.

Angiogenesis, as a critical component of disease progression in RA, is involved in pannus formation and in maintaining the infiltration of the synovial membrane. RA synovium contains a significant fraction of immature blood vessels [14]. Progression of the disease increases the presence and density of immature but not mature vessels, and only immature vessels are depleted in response to anti-TNF-α therapy [14,15]. In this study, our results indicated that the immature blood vessels, but not the mature vessels, in synovial membrane tissues of arthritic joints of CIA mice treated with EMS were significantly reduced. In addition, EMS demonstrated a potent inhibitory effect on the sprouting of microvessels from rat aorta. Therefore, these results indicate that EMS has potent antiangiogenic activity both in vivo and ex vivo.

New vessel formation (angiogenesis) involves multiple steps including endothelial cell migration, invasion, adhesiveness, tube assembly, and remodeling [16]. In this study, we systematically investigated the potential effects of EMS on these key processes in VEGF-induced endothelial cells. These results indicate that EMS inhibits angiogenesis by inhibiting the key angiogenic processes in vitro.

[Figure 3 legend, fragment: ... microvessel length (c), and microvessel area (d) were counted, respectively; three independent experiments were performed; data are represented as means ± SEM (n = 3); ###P < 0.001 vs. the control group; *P < 0.05, **P < 0.01, ***P < 0.001 vs. the VEGF-treated group.]

Further investigation revealed that the antiangiogenic efficacy of EMS is mediated via interference with endothelial cell function. A great number of proangiogenic factors, including VEGF, TNF-α, IL-1β, and IL-6, govern angiogenesis in RA [17]. Among these, VEGF is the most potent angiogenic regulator; it is produced in the synovium in response to proinflammatory cytokines such as TNF-α and IL-1β. VEGF acts in angiogenesis by inducing EC proliferation, migration, and tube formation [18-21]. In the present study, to probe the mechanism of the antiangiogenic effect of EMS, these angiogenic mediators were investigated. Our results demonstrated that EMS significantly reduces the expression level of VEGF in the synovium of CIA mice. Taken together, EMS inhibits angiogenesis by downregulating proangiogenic factors including VEGF, TNF-α, IL-1β, and IL-6.
The JAK/STAT signaling pathway is an important multifunctional cytokine transduction pathway involved in the regulation of various pathophysiological processes in vivo, such as cell proliferation and differentiation, immune regulation, inflammatory response, and tumorigenesis and development. Inflammatory factors can activate JAK kinases and promote STAT phosphorylation, which can drive inflammatory factor expression and cell injury, apoptosis, or proliferation. JAK and STAT1 were rarely expressed in the synovium of OA patients or general arthritis patients, but highly expressed in the synovium of RA patients, and their expression decreased significantly after treatment [22]. STAT3 activation was found only in the synovium of RA patients, not in the synovium of general arthritis patients [23]. De Hooge et al. [24] showed that granuloma formation and progressive arthritis occurred in yeast polysaccharide-induced arthritis in STAT1-deficient mice, indicating that the anti-inflammatory effect mediated by STAT1 may be lost in its absence. Thus, the JAK/STAT pathway plays an important role in the pathogenesis of RA.

The JAK/STAT pathway is recognized as one of the major oncogenic signaling pathways activated in a variety of human malignancies [25]. STAT proteins not only play a crucial role in tumor cell proliferation, survival, and invasion, but also significantly contribute to the formation of a unique tumor microenvironment [26]. RA resembles tumors in many respects. Furthermore, a link between STAT activation in endothelial cells and angiogenesis has been described in several studies [27-30]. To explore the molecular mechanism of the antiangiogenic effect of EMS, we assessed the effect of EMS on the expression and activation of JAK/STAT. Our data showed that EMS significantly blocked the kinase activity of JAK/STAT by downregulating the VEGF-induced autophosphorylation of JAK1, STAT1, and STAT6 in CIA mice and HUVECs. These results suggest an inhibitory effect of EMS on angiogenesis by targeting JAK/STAT.

In conclusion, our results indicate that EMS significantly reduced synovial angiogenesis in CIA mice and inhibited the sprouting of microvessels from rat aorta. This reduction may be attributable to the inhibition of endothelial cell migration, adhesion, invasion, and tube formation. Furthermore, EMS exerts antiangiogenic effects by downregulating angiogenic activators and suppressing the JAK/STAT signaling pathway, which plays multiple roles in the regulation of angiogenesis. These findings suggest for the first time that EMS possesses an antiangiogenic effect in RA in vivo, ex vivo, and in vitro by interrupting JAK/STAT activation. Therefore, EMS may act as a potential therapeutic agent for RA treatment through its antiangiogenic effects in the synovium.

[Figure 5 legend: CIA mice were orally administered EMS (5 g/kg) for 20 days from the day of the second immunization. The levels of IL-1β (a), IL-6 (b), and TNF-α (c) in sera of mice were measured by enzyme-linked immunosorbent assay (ELISA). The phosphorylation of JAK1 (d), STAT1 (e), and STAT6 (f) in joint synovia of CIA mice was detected by Western blot. HUVECs were starved in 10% FBS medium for 48 h, pretreated with EMS for 2 h, and then stimulated with VEGF (20 ng/mL) for 15 min before being collected. The levels of phosphorylation of JAK1 (g), STAT1 (h), and STAT6 (i) were analyzed by Western blot as well. All experiments were done in triplicate; mean ± SEM (n = 3) was calculated from independent experiments. ##P < 0.01, ###P < 0.001 vs. the control group; *P < 0.05, **P < 0.01, ***P < 0.001 vs. the CIA/VEGF-induced group.]
Data Availability

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
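As a companion to the Statistical Analysis subsection above, the following minimal sketch shows the described test sequence (a global nonparametric Kruskal-Wallis test followed by a pairwise Mann-Whitney comparison) on hypothetical group data. The score values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical synovial vascularity scores, one value per mouse (n = 6 per group)
control = np.array([0.5, 0.7, 0.6, 0.8, 0.5, 0.6])
cia     = np.array([2.5, 2.8, 2.2, 2.6, 2.9, 2.4])
cia_ems = np.array([1.4, 1.6, 1.2, 1.8, 1.5, 1.3])

# Global comparison of the three treatment modalities (nonparametric)
h_stat, p_global = stats.kruskal(control, cia, cia_ems)

# Pairwise follow-up comparison, as in the analysis plan above
u_stat, p_pair = stats.mannwhitneyu(cia, cia_ems, alternative="two-sided")

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.4f}")
print(f"CIA vs CIA+EMS (Mann-Whitney U): U = {u_stat:.1f}, p = {p_pair:.4f}")
# Differences would be considered significant at p < 0.05, as in the text.
```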
Lattice Microarchitecture for Bone Tissue Engineering from Calcium Phosphate Compared to Titanium

Additive manufacturing of bone tissue engineering scaffolds will become a key element of personalized bone tissue engineering in the near future. Several additive manufacturing processes are based on extrusion, where the deposition of the filament results in a three-dimensional lattice structure. Recently, we studied diverse lattice structures for bone tissue engineering realized by laser sintering of titanium. In this work, we used lithography-based ceramic manufacturing of lattice structures to produce scaffolds from tricalcium phosphate (TCP) and compared them in vivo to congruent titanium scaffolds manufactured from the identical computer-aided design data, in order to look for material-based differences in bony healing. The results show that, during a 4-week period in a noncritical-size defect in a rabbit calvarium, both scaffolds with the identical microarchitecture performed equally well in terms of bony regeneration and bony bridging of the defect. A significant increase in both parameters could only be achieved when the TCP-based scaffolds were doped with bone morphogenetic protein-2. In a critical-size defect in the calvarial bone of rabbits, however, the titanium scaffold performed significantly better than the TCP-based scaffold, most likely due to its higher mechanical stability. We conclude that titanium and TCP-based scaffolds of the same microarchitecture perform equally well in terms of bone regeneration, provided the microarchitecture meets the mechanical demand at the site of implantation.

Introduction

Autologous bone is still the gold-standard bone substitute and, after blood, the most frequently transplanted material in clinics worldwide [1]. For many years, bone tissue engineering has focused on substituting autografts. However, due to the limited supply of autologous bone, additional pain at the second operating site, and donor site morbidity, smarter grafting methods are needed that would overcome these issues [2]. In recent years, special emphasis has been given to the use of mesenchymal stem cells [3-6] and bone morphogenetic proteins (BMPs) [7-9] to further improve the performance of synthetic bone substitute materials. The first, however, is costly and time-consuming, and the second has caused well-documented side effects reported over the last decade [10]. Therefore, the search for a bone substitute that can compete with autologous bone is still ongoing.

From the material point of view, bone is a composite of collagen and calcium phosphate (CaP) deposited as hydroxyapatite. Therefore, CaP-based ceramics are the most widely applied synthetic biomaterials for repair and regeneration of damaged and diseased bone [11]. CaP not only resembles natural bone mineral but is also bioactive in terms of biocompatibility, osteoconductivity, and osteoinductivity [12]. Its biocompatibility is evident in its direct bone-bonding capacity, its osteoconduction in the advancement of bone deposition on its surface, and its osteoinduction in a limited capacity to induce de novo bone formation close to CaP implants. All these bioactivities of CaP are attributed to its chemical composition, surface topography, macroporosity/microporosity, and the dissolution kinetics of the calcium (Ca²⁺) and phosphate (PO₄³⁻) ions, as reviewed in [12]. To mimic cancellous autologous bone and allow bone ingrowth, CaP-based scaffolds are produced with a certain porosity, as reviewed in [13].
The ideal pore size was found to be between 0.3 and 0.5 mm. A more recent article reported that no difference in bone regeneration was seen up to a pore size of 1.2 mm [14]. In all the porous CaP-based scaffolds used to identify the ideal pore dimension, the pores were introduced by porogens, causing a random distribution of the pores and offering no direct control over the size of the interconnections between the pores. To overcome these limitations, additive manufacturing produces scaffolds with defined pore sizes, pore locations, microarchitecture, and defined connections between pores. The process of additive manufacturing creates the scaffold layer upon layer, for example by stereolithography or selective laser sintering [15]. In three-dimensional printing, another additive manufacturing methodology, the object is formed in a powder bed by the deposition of a liquid binder through inkjet heads [16]. This methodology has already been applied to ceramics [17], but certain limitations on design freedom exist so that powder entrapped inside the scaffold during production in the powder bed can be released [18].

In recent years, we have aimed to study the ideal microarchitecture of bone substitutes by selective laser sintering, in which a high-intensity laser beam is used to build scaffolds in a titanium powder bed [19,20]. We found open titanium lattice structures to allow fast defect bridging, which could not be accelerated by the double delivery of BMPs and their enhancer [21]. The disadvantage of a titanium-based bone substitute is its nondegradability. Therefore, we looked for an additive manufacturing system to produce filigree structures from ceramics. In this study, we applied the CeraFab 7500 (Lithoz, Vienna, Austria) for the production of lattice structures from tricalcium phosphate (TCP), in which fine structures are produced from a TCP slurry by a lithography-based printing process in an upside-down arrangement. Here we report on the accuracy of the system and the morphology of the sintered material. Moreover, we studied its performance in vivo in comparison to titanium scaffolds produced by selective laser melting from the same digital information file data (STL: standard triangulation language), in a noncritical calvarial defect and a critical calvarial defect model in rabbits. By doing so, we were able to compare both widely accepted materials in bone tissue engineering, titanium and TCP, for osteoconduction, quantified by bony bridging and bony regeneration of defects treated with scaffolds of identical microarchitecture.

The aims of this study were to characterize a lithography-based, additively manufactured TCP scaffold and compare it to a titanium-based scaffold with identical microarchitecture in the context of bone tissue engineering. Moreover, we wanted to compare the osteoconductivity of a lattice microarchitecture realized with two widely accepted but very distinct materials, to test for material-dependent effects on osteoconduction.

Implant production

Titanium implants were produced as previously reported [19,20]. For critical-size defects, lattice implants with an outer diameter of ø15 mm were produced; for noncritical-size defects, ø6 mm scaffolds were produced. The microarchitecture of all implants consists of orthogonal struts of ca. 300 μm diameter separated by ca. 500 μm wide interconnected channels.
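For orientation, the open porosity implied by these strut and channel dimensions can be estimated with a simple idealization: three orthogonal families of square struts of width w, repeating with period p = w + channel width, combined by inclusion-exclusion over one unit cell. This is a rough geometric sketch under stated assumptions, not a measured value from the study.

```python
w = 0.300          # strut diameter, mm (~300 um, as stated above)
c = 0.500          # channel width, mm (~500 um)
a = w / (w + c)    # strut width as a fraction of the lattice period

# Union of three orthogonal square rods in a unit cell, by inclusion-exclusion:
# 3*a^2 (three rods) - 3*a^3 (pairwise overlaps) + a^3 (triple overlap)
solid_fraction = 3 * a**2 - 2 * a**3
print(f"solid fraction ~ {solid_fraction:.2f}, porosity ~ {1 - solid_fraction:.2f}")
# -> solid fraction ~ 0.32, porosity ~ 0.68 for the idealized square-strut lattice
```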
The TCP scaffolds were produced from the TCP slurry LithaBone™ TCP 200 (Lithoz, Vienna, Austria), solidified in the CeraFab 7500 system (Lithoz) by exposing the photoactive polymer in the slurry to blue LED light in a layer-by-layer manner [22]. The TCP green parts so formed were composed of layers of 25 μm thickness, with a resolution of 50 μm in the x/y-plane. After cleaning of the green parts with LithaSol 20™ (Lithoz) and pressurized air, they underwent a thermal treatment to decompose the polymeric binder and densify them by sintering. The final stage of the sintering procedure comprised a dwelling time of 3 h at 1100°C, as recommended by the manufacturer of the slurry. The sintered scaffolds were transferred to a sterile bench, packed for transportation to the operating theater, and used as bone substitute implants without further sterilization.

Scanning electron microscopy

The scaffolds were examined using a Zeiss Supra V50 scanning electron microscope (SEM) (Carl Zeiss, Oberkochen, Germany). Scanning was performed at an acceleration voltage of 12 kV with a sample-to-detector distance of 9.5 cm.

Surgical procedure

Twenty-four adult (12 months old) New Zealand White rabbits were used in this study. The animals weighed between 3.5 and 4.0 kg and were fed a standard laboratory diet. The procedures were evaluated and approved by the local authorities (108/2012 and 114/2015). To initiate the operation, the animals were anesthetized by an injection of 65 mg/kg ketamine and 4 mg/kg xylazine and maintained under anesthesia with isoflurane/O₂. After disinfection, an incision from the nasal bone to the mid-sagittal crest was made, the soft tissue deflected, and the periosteum removed. Next, for noncritical-size defects, four evenly distributed 6-mm-diameter craniotomy defects were prepared with a trephine bur under copious irrigation with sterile saline in the operation field. All defects were then completed with a rose burr (1 mm) to preserve the dura. Next, all defects were flushed with saline solution to remove remaining debris, and the implants were applied by gentle press fitting. Each animal received four different treatment modalities. The treatment modalities were assigned at random for the first animal and thereafter cyclically permuted clockwise. The treatments were grouped as titanium, TCP, and TCP/BMP. For the critical-size defects, a central 15-mm-diameter defect was generated as reported recently [21]. After the completion of implant placement, the soft tissues were closed with interrupted sutures. Four weeks after the operation for the 6 mm defects, and 16 weeks for the critical-size defects, the rabbits were placed under general anesthesia and sacrificed by an overdose of pentobarbital. The cranium containing all four craniotomy sites was removed and placed in 40% ethanol. Embedding was performed as previously reported [20].

Histomorphometry

All implants were evaluated from the middle section using image analysis software (Image-Pro Plus®; Media Cybernetics, Silver Spring, MD). The area of interest (AOI) was defined by the 6 mm (respectively 15 mm) defect dimension and the height of the implant. We determined the area of new bone in the AOI as the percentage of bone plus bony integrated scaffold in the AOI (bony area, %). For the empty control value, the average area occupied by all scaffolds was taken into account, since the height of the implants exceeds the thickness of the calvarial bone.
Bone bridging
The determination of bony bridging was performed as reported earlier. 23,24 In brief, areas with bone tissue within the defect margin were projected onto the x-axis. The stretches of the x-axis with projected bone tissue were then summed and related to the defect width (6 mm for noncritical-size and 15 mm for critical-size defects). Bone bridging is given as the percentage of the defect width over which bone formation had occurred.
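As a concrete illustration of this projection metric, the sketch below computes the bridging percentage from projected bone stretches. It assumes the stretches have already been extracted from the histology image as x-axis intervals; the interval values and function name are hypothetical, not taken from the study.

```python
# Illustrative sketch only: bridging percentage from (start, end) intervals of
# projected bone tissue, in millimetres across the defect width.
def bridging_percent(intervals, defect_width_mm):
    """Percentage of the defect width covered by projected bone tissue.

    Overlapping projections are merged so that no stretch is counted twice.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend previous stretch
        else:
            merged.append([start, end])
    covered = sum(end - start for start, end in merged)
    return 100.0 * covered / defect_width_mm

# Two bony islands (one split across overlapping projections) in a 6 mm defect:
print(bridging_percent([(0.0, 1.5), (1.0, 2.0), (4.0, 5.0)], 6.0))  # 50.0
```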
Statistics
The primary analysis unit was the animal. For all parameters tested, the treatment modalities were compared with a Kruskal-Wallis test, followed by pairwise comparison of treatment modalities with the Mann-Whitney test for dependent data (IBM SPSS v. 19). p-Values are displayed in the graphs, and significance was set at a limit of p < 0.05. Data from seven to nine different rabbits are presented for each group for the noncritical-size defects, and from six rabbits for the critical-size defects. Values are reported in the text as mean ± standard deviation or displayed in graphs as median with lower/upper quartiles.

After sintering, the scaffolds appeared white, suggesting that all the binder material was removed during the sintering process, which included the 3-h sintering step at 1100°C (Fig. 1a, b). Next, we compared the produced scaffolds with the digital information laid down in the STL file. To assess precision, we evaluated the height and width of the pores and rods from the outer surface of the scaffold in top view and from the cross-section of the inner part, as illustrated (Fig. 1d, f). The comparison of these dimensions, taken from the outside and the inner portion of the scaffold, with the STL dimensions revealed deviations in the building direction (see arrow in Fig. 1a, f), listed as pore height and rod height (Fig. 1e). These deviations arose because, in these initial studies, no compensation parameters were used during the printing process; hence, a higher cure depth in the z-direction was obtained than targeted in the CAD. SEM (Fig. 2) revealed that, upon sintering, the TCP particles had partially fused, and the scaffold appeared highly microporous, with pores between 2 and 4 µm in diameter.

In vivo evaluation and comparison of TCP to titanium
One aim of this study was to compare the effect of the bone substitute material (TCP or titanium) on bony bridging and bony regenerated area when the bone substitutes had identical lattice microarchitecture. In a first series of experiments, we compared bony bridging and bone formation between empty defects, titanium, TCP, and TCP scaffolds loaded with 10 µg BMP-2 in noncritical-size defects. Since our model consists of four defects, the titanium implant with BMP-2 was omitted and the new TCP implant with BMP-2 was tested. The architecture of all scaffolds was identical because they were built from the same STL file. The histologies of the middle section (Fig. 3) revealed that bone formation had occurred during the 4-week period throughout all implants, irrespective of the material. For quantification, the extent of bony bridging and bone formation in the defect was determined (Fig. 4). In empty defects, 29.28% ± 19.16% of the middle section was bridged, vs. 71.29% ± 23.95% for titanium, 89.28% ± 18.45% for TCP, and 96.87% ± 6.20% for TCP plus BMP-2, respectively. In the AOI, the percentage of bony regeneration in the middle section was 16.40% ± 6.79% for empty defects, 39.00% ± 9.14% for titanium, 56.94% ± 19.08% for TCP, and 73.13% ± 13.94% for TCP plus BMP-2, respectively. For both measures, all scaffolds performed better than the empty defects. No significant difference existed between titanium and TCP scaffolds. Only when doped with 10 µg BMP-2 did the TCP scaffold perform significantly better than the titanium scaffold, both in terms of bony bridging (p = 0.008) and bony regenerated area (p = 0.001). This result also suggests that our additively manufactured TCP scaffolds could serve as a BMP-2 delivery vehicle and thereby combine osteoconduction with osteoinduction.

Critical-size defect
To compare identical lattice-based microarchitectures from titanium and TCP in more detail, we next moved to a critical-size defect in the calvarial bone of rabbits. The scaffolds for this experiment were produced as described above (Fig. 5a), with the diameter of the lower part adjusted to 15 mm to fit the critical-size defects 21 (Fig. 5b). The samples were harvested 16 weeks after implantation. Histology revealed that the majority of the defect was bridged, although the scaffolds appeared partially disintegrated (Fig. 5c). The reason for this behavior is most likely the high microporosity of the TCP samples. This microporosity enables rapid resorption of the material on the one hand, but also increases the brittleness of the resulting scaffolds on the other. This brittle behavior, in combination with the very delicate struts and the stresses arising from press-fit application of the implants, can lead to partial disintegration of the TCP scaffold already during placement. Another important factor is that materials such as TCP inherently have different material properties than metals. For future studies, this needs to be accounted for by choosing a scaffold design compliant with the mechanical properties of TCP, in particular if applied in critical-size defects. In empty critical 15 mm defects, 40.13% ± 16.39% of the middle section was bony bridged after 16 weeks in vivo, vs. 92.13% ± 11.45% for titanium and 66.93% ± 14.19% for TCP, respectively. In the AOI, the percentage of bony regeneration in the middle section was 23.72% ± 10.02% for empty defects, 42.67% ± 7.38% for titanium, and 31.51% ± 7.54% for TCP, respectively. Compared to the untreated defect, bony regenerated area was significantly improved for titanium implants only (Fig. 5d). Bony bridging, however, was significantly improved for both biomaterials, titanium and TCP (Fig. 5e). In direct comparison, the percentage of bony bridging of the defect was significantly higher for titanium scaffolds than for TCP scaffolds. This is a clear indication that, in the case of a critical-size defect, the mechanical strength of the scaffolds has to be ensured. Since the applied microarchitecture of the titanium scaffold was sufficient to heal the critical-size defect, additional stimulation by BMP-2 was not tested in this model system.

Discussion
Cranial defect models are widely used for the testing of bone substitute materials. [25][26][27] Less well known is the fact that cranial defects are clinically highly relevant in congenital anomalies, trauma, stroke, aneurysms, and cancer. 28
For preclinical testing, the low mechanical challenge posed in the cranium is an advantage, since it allows the testing of diverse materials without the costly fracture fixation devices needed in long-bone defect treatments. Additive manufacturing facilitates the realization of 3D objects in microarchitectural designs coded in computer files, predominantly in STL format (STereoLithography or Surface TrianguLated). In this study, we manufactured bone substitute scaffolds from TCP by a lithography-based methodology, characterized them, and compared their performance in vivo with titanium-based scaffolds produced from the identical STL file by selective laser melting. The results showed that in a noncritical defect, scaffolds with identical microarchitecture but made from distinct biomaterials, with distinct surfaces, performed equally well. In a critical-size defect, where the mechanical demand is higher, the titanium-based scaffold was superior, most likely due to the higher mechanical strength of titanium compared to TCP. Thus, for critical-size defects, the brittleness of TCP-based scaffolds will have to be compensated by design guidelines different from those suitable for titanium. Creeping substitution of the implant over time by bone tissue is the ideal endpoint of a bone regeneration procedure. Therefore, we looked into additive manufacturing of TCP, since it remains biodegradable even after undergoing high-temperature sintering. 29,30 For the production of personalized bone substitutes, the produced implant has to match the digital information deduced from computed tomography. The lithography-based production of TCP scaffolds by the CeraFab 7500 system matches the lateral dimensions of the planned STL file. However, the heights of the pores and rods along the building direction appear compressed (Fig. 1e). The deviation in the z-direction derives from a curing depth that is higher than the thickness of an individual layer. This increased cure depth is necessary to ensure good adhesion between adjacent layers. The viscosity of the slurry could also play a minor role in this respect. Deviation in the z-axis can be corrected before production by software compensation algorithms that adjust the detailed geometry of the STL file. Moreover, for scaffolds meant for bone tissue engineering, a high precision of the microarchitecture might not be the decisive factor, as long as the structure is widely open, porous, accessible for proteins, blood, and cells, and optimized for bone ingrowth by osteoconduction. This has been shown when a library of diverse open porous titanium scaffolds was tested in vivo in the same model system. 20 The outer dimension, however, should perfectly match the defect in order to produce personalized bone substitutes, which is the main advantage of applying additive manufacturing to bone substitute production, especially if complex shapes have to be realized. 31 Titanium and TCP are both known for their suitability as bone substitute biomaterials. 32 In this study, we show that if scaffolds from titanium and TCP are produced with identical microarchitecture as wide-open porous lattice structures, both materials support osteoconduction, 20 defined in this study as bone ingrowth into porous structures (Fig. 4a) and as a guiding cue to achieve defect bridging (Fig. 4b).
Therefore, with the lithography-based additive manufacturing procedure used in this study, personalized osteoconductive scaffolds can be produced from TCP and other permanent or biodegradable ceramics like hydroxyapatite or bioglass. Since these ceramics vary in their mechanical characteristics, bone substitute scaffolds can be adjusted to the mechanical needs of the operation site through the right choice of ceramic, without changing the microarchitecture or macroarchitecture. Recent studies have reported on bone ingrowth and the presence of cells also in micropores well below 0.1 mm in diameter. 33,34 The positive effect of microporosity on bone formation was suggested to reflect better attachment of proteins to the surface, increased degradation products, and capillary forces (as reviewed in Ref. 35). In our study, we compared a microporous TCP-based scaffold (Fig. 2) to a material without pores (titanium) and could not detect a significant difference in defect bridging or regenerated bony area for wide-open porous lattice structures. This is in line with other in vivo studies in sheep, where different levels of microporosity in TCP-based scaffolds had no effect on bony healing. 36 To our knowledge, this is the first direct comparison of a metal and a ceramic with identical microarchitecture in vivo, pointing to the importance of the microarchitecture for osteoconduction and bone regeneration. When using scaffold designs of the exact same geometry, significant differences in mechanical integrity became visible in the critical-size defects (Fig. 5). These shortcomings of the brittle TCP scaffolds will have to be compensated by tailoring and improving the scaffold designs. This verified that a geometry working well for dense and ductile materials such as titanium is not necessarily the best fit for other types of materials, especially outside the material class of metals. For small noncritical defects, TCP-based scaffolds performed excellently. The anticipated difference in degradation capability will have long-term effects and was not the subject of this study. By selecting the right ceramic in combination with the right scaffold microarchitecture, mechanical demands or degradation characteristics can be tuned to the needs of individual patients in cranio-maxillofacial surgery, orthopedics, trauma, or dentistry. Especially in terms of developing optimized scaffold designs compliant with the property profile of ceramics such as TCP, additional research is necessary to improve the performance of such implants.

[Fig. 5 caption (beginning truncated): ... is significantly elevated in defects treated with titanium scaffolds compared to empty defects. For TCP scaffolds with identical architecture, only defect bridging was significantly improved compared to empty, untreated defects. While no significant difference was evident for bony area between both materials, for bony bridging, titanium performed significantly better than TCP. Values in (d, e) are displayed as box plots ranging from the 25th (lower quartile) to the 75th (upper quartile) percentile, with the median as a solid black line and whiskers showing the minimum and maximum values; values outside the range of the box plot are shown as individual points. p-Values are provided in the graphs.]
Outcomes of patients supported by mechanical ventilation and their families two months after discharge from pediatric intensive care unit

Introduction: The outcomes of children undergoing mechanical ventilation (MV) in a Pediatric Intensive Care Unit (PICU) remain poorly characterized, and increasing knowledge in this area may lead to strategies that improve care. In this study, we report the outcomes of children receiving invasive mechanical ventilation (IMV) and/or non-invasive ventilation (NIV) 2 months after PICU discharge.

Methods: This is a post-hoc analysis of a single-center prospective study of PICU children followed at the PICU follow-up clinic at CHU Sainte-Justine. Eligible children were admitted to the PICU with ≥2 days of IMV or ≥4 days of NIV. Two months after PICU discharge, patients and families were evaluated by physicians and filled out questionnaires assessing quality of life (Pediatric Quality of Life Inventory™), developmental milestones (Ages and Stages Questionnaire), and parental anxiety and depression (Hospital Anxiety and Depression Scale).

Results: One hundred and fifty patients were included from October 2018 to December 2021; 106 patients received IMV (±NIV), and 44 patients received NIV exclusively. Admission diagnoses differed between groups, with 30.2% of patients in the IMV group admitted for a respiratory illness vs. 79.5% in the NIV group. For the entire cohort, QoL scores were 78.1% for the physical domain and 80.1% for the psychological domain, and were similar between groups. Children with a respiratory illness exhibited similar symptoms at follow-up whether they were supported by IMV or NIV. For developmental outcomes, only 22.2% of pre-school children had normal scores in all ASQ domains. In the entire cohort, symptoms of anxiety were reported in 29.9% of parents and symptoms of depression in 24.6%.

Conclusions: PICU survivors undergoing mechanical ventilation, and their families, experienced significant morbidities 2 months after their critical illness, whether they received IMV or NIV. Children with respiratory illness exhibited a higher prevalence of persistent respiratory difficulties post-PICU, whether they underwent IMV or NIV. Patients' quality of life and parental symptoms of anxiety and depression did not differ according to the type of respiratory support. These findings justify the inclusion of patients receiving NIV in the PICU in follow-up assessments, as well as those receiving IMV.

Introduction
One third of children admitted to a Pediatric Intensive Care Unit (PICU) require invasive mechanical ventilation (IMV) (1). Ventilation aims to support the respiratory muscles, allow better gas exchange, and reduce oxygen consumption while awaiting recovery from critical illness. Despite its benefits, this life-saving therapy is also associated with several short-term complications, including ventilator-induced lung injury, ventilator-acquired pneumonia, and respiratory muscle atrophy (2). Patients requiring IMV often have a high acuity of illness and are at increased risk of complications such as delirium and withdrawal syndrome, which may impact their recovery trajectory. Non-invasive ventilation (NIV), or mechanical ventilation without intubation or a tracheotomy tube, is a method of respiratory support used increasingly as an alternative to IMV, with the goal of minimizing complications (3).
Few studies have assessed the long-term respiratory outcomes of children supported by MV, using either IMV or NIV, during their PICU stay. Two studies evaluated respiratory function in children requiring IMV for acute respiratory distress syndrome (ARDS) three months after their hospitalization. Both found that approximately one-third have persistent respiratory symptoms, and up to 80% have abnormal pulmonary function tests (4,5). Requiring IMV for a respiratory illness was also associated with a greater risk of subsequently requiring medical care for respiratory issues, with a quarter of patients needing care for another episode of respiratory distress or asthma exacerbation in the 12 months following discharge (6). Long-term data on outcomes of patients requiring IMV for a non-respiratory critical illness are missing. Moreover, data on long-term outcomes other than the respiratory status are lacking. Finally, outcomes of children after NIV are poorly described.

The pediatric post-intensive care syndrome (PICS-p) framework was developed in 2018 to better recognize and assess the new onset or the worsening of impairments arising and persisting after a PICU stay (7,8). Data on PICS-p are still limited, but a growing literature suggests that significant issues can be appreciated after PICU hospitalization, including developmental delays, post-traumatic stress disorder (PTSD), and a decreased quality of life (QoL) (9)(10)(11)(12). The outcomes of mechanically ventilated children who survive a critical illness, regarding the different domains of PICS-p, are still poorly characterized. The objective of this study was to report the outcomes of PICU survivors treated with mechanical ventilation 2 months after PICU discharge and to compare outcomes between patients receiving IMV vs. NIV during their PICU stay, with particular emphasis on quality of life.

Methods
We performed a post-hoc analysis of a single-center prospective study of PICU children followed at the PICU follow-up clinic at CHU Sainte-Justine, a Canadian university-affiliated hospital in Montréal, from October 2018 to December 2021. The local Institutional Review Board approved this study (2019-2261).

Participants
Patients were identified through the institutional database of the CHU Sainte-Justine (CHUSJ) PICU follow-up clinic. Critically ill patients were eligible for the PICU follow-up clinic if they were less than 18 years old at admission and underwent either IMV for at least 2 calendar days or NIV for at least 4 days. Patients with congenital heart diseases or active oncologic diseases were not included in the PICU follow-up clinic, as they benefit from comprehensive follow-up in other dedicated outpatient clinics. This clinic has been following PICU survivors since fall 2018, and the inclusion criteria were chosen arbitrarily prior to the establishment of the clinic to target a population at risk of PICS-p. In this study, we described this entire cohort of patients with MV, and reported the outcomes of patients who received IMV with or without NIV and of patients who exclusively received NIV. Assessment at the 2-month follow-up visit included vital signs and anthropometric measurements, a standardized clinical evaluation by a pediatric critical care physician, and the completion of questionnaires by parents and/or patients, as described below.
Outcomes
Our primary outcome measure was quality of life (QoL). QoL was assessed with the PedsQL 4.0 Generic Core Scales (≥24 months) and PedsQL Infant Scales (1-24 months) (13,14). The PedsQL Generic Core Scales is a 23-item measure that evaluates 4 domains (physical, emotional, social, and school) using a 5-point Likert scale (0 = never a problem to 4 = almost always a problem). Total scores range from 0 to 100, with higher scores representing better QoL. Normal values in healthy children [mean ± standard deviation (SD) = 84.1 ± 12.6] have been published (13). The PedsQL Infant Scales assesses 5 domains classified into two categories: (1) physical function and symptoms, and (2) emotional, social, and cognitive symptoms. The format and scoring of the Infant Scales are identical to the Generic Core Scales. These two health-related QoL questionnaires have strong reliability and validity in general and specialized pediatrics (15). They are also reliable when filled out by proxy across the entire spectrum of pediatric ages. The PedsQL Infant Scales are exclusively completed by parents, and the Generic Core Scales are available in both versions (patient and parents).

Secondary outcomes focused on the four domains of PICS-p: physical, cognitive, emotive and social health. Physical health included symptoms of dyspnea, voice change, oral aversion, fatigue, weakness, and sleep disorder reported during the physician interview. Cognitive health and developmental delay in preschool children were documented with the Ages and Stages Questionnaires (ASQ-3), a developmental screening tool that assesses developmental stages in children from 1 month to 5 years old through 21 age-specific questions covering five domains (16,17). Developmental delays were detected by comparing individual scores to determined cut-off scores (18,19). Obtained scores were then classified into three categories: typical development, mild delay, and moderate-severe delay. For school-aged children, school delay was defined by a change in baseline school performance, assessed during the medical interview. Emotive health was assessed with the Young Child PTSD Checklist (YCPC) in children 1-6 years old (20) and the Child PTSD Checklist (CPC) in children 7-18 years old (21). These tools contain a first section assessing PTSD-related symptoms (avoidance behavior, impaired cognition and mood, and neurovegetative overactivation) and a second section assessing functional impairment. Patients and their parents answer each question on a Likert scale of 0 (never) to 4 (daily). Cutoff values are available for PTSD-related symptoms and functional impairment (21). Finally, parental psychosocial health was assessed through the Hospital Anxiety and Depression Scale (HADS). It is recommended by the National Institute for Health and Care Excellence (NICE) to diagnose anxiety and depression and can also be used to monitor disease progression (22). It includes 7 questions on anxiety and 7 questions on depression (23). Scoring for both categories can point to the absence, probable presence, or definite presence of symptoms of anxiety and depression. All patients and their families received all the questionnaires, with variable response rates.
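As an illustration of how such Likert-based instruments map onto a 0-100 scale, the sketch below applies the standard PedsQL-style transform (each 0-4 item reverse-scored and linearly rescaled to 0-100, then averaged over answered items). This is a simplified illustration under that assumption, not an official scoring implementation.

```python
# Illustrative sketch of PedsQL-style scoring: 0 ("never a problem") -> 100,
# 4 ("almost always a problem") -> 0; missing items are skipped (assumption).
from statistics import mean

def pedsql_scale_score(items):
    """items: list of 0-4 Likert responses (None for missing answers).
    Returns a 0-100 score; higher = better quality of life."""
    answered = [i for i in items if i is not None]
    if not answered:
        return None
    return mean(100 - 25 * i for i in answered)

# A child reporting mostly "sometimes a problem" on a 5-item subscale,
# with one item unanswered:
print(pedsql_scale_score([2, 2, 1, None, 3]))  # 50.0
```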
Data collection
Data from the PICU hospitalization and follow-up were entered in a case report form, including demographics, pre-PICU comorbidities, and hospitalization-related and follow-up data. Demographic data included age, sex, and weight. Pre-PICU comorbidities potentially associated with QoL or respiratory status included prematurity, airway anomalies, chronic pulmonary disease, and congenital or acquired heart disease. PICU admission data included primary diagnosis, severity of illness as measured by the worst daily Pediatric Logistic Organ Dysfunction Score-2 (PELOD-2) (24), length of IMV, length of NIV, use of vasopressors, and PICU length of stay (LOS). Respiratory disease as the admission diagnosis included children with bronchiolitis, pneumonia, empyema, acute respiratory distress syndrome (ARDS) and bronchospasm. Upper airway diseases included children with laryngitis, tracheitis and subglottic stenosis. All PICU data were manually retrieved from chart reviews. PICU follow-up data were retrieved from the PICU follow-up clinic standardized chart and included breathing difficulties, voice change, oral aversion, cyanosis, fatigue, weakness, sleep disorder and school delay (new onset of academic difficulties).

Statistical analysis
Analyses were performed with IBM SPSS Statistics (Version 28.0, Armonk, NY). Categorical variables were reported using numbers and percentages. Continuous variables were reported using median (IQR). Comparisons between the IMV ± NIV and NIV-exclusive groups were performed using the Pearson chi-square test for categorical data and the Wilcoxon-Mann-Whitney test for continuous data with non-normal distributions. The level of significance was set at p < 0.05.
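The group comparisons described above can be illustrated with open-source tools. The sketch below mirrors the chi-square and Mann-Whitney steps using SciPy in place of SPSS, on hypothetical toy data rather than the study's dataset.

```python
# Illustrative sketch of the two comparison types described above (toy data).
import numpy as np
from scipy import stats

# Categorical outcome (e.g., a reported symptom) as a 2x2 contingency table:
#                 symptom   no symptom
table = np.array([[11,       95],    # IMV group
                  [12,       32]])   # NIV group
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
print(f"chi-square p = {p_cat:.3f}")

# Continuous, non-normally distributed outcome (e.g., PICU length of stay):
los_imv = [6, 9, 12, 15, 21, 30]
los_niv = [4, 5, 5, 7, 8, 10]
u_stat, p_cont = stats.mannwhitneyu(los_imv, los_niv, alternative="two-sided")
print(f"Mann-Whitney p = {p_cont:.3f}")
```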
Results
We included 150 patients, of which 106 (71%) received IMV ± NIV (45 IMV only and 61 IMV + NIV) and 44 (29%) exclusively received NIV. Demographic and PICU-related data are summarized in Table 1. In this cohort, the median age was 1.0 year (0.4-3.1), most patients were male (58.7%), and most had no previous medical illness (68.0%). The most common admission diagnoses were respiratory illness (44.7%) and septic shock (9.3%). Admission diagnoses differed between groups, with 30.2% of patients in the IMV group admitted for a respiratory illness vs. 79.5% in the NIV group. More patients in the IMV group required the use of vasopressors (39.6% vs. 9.1%).

Quality of life
Results from the QoL questionnaires are presented in Table 2. A total of 113 families (113/150; 77.4% in the IMV group and 73.8% in the NIV group) completed the PedsQL scale, with the majority completed by proxy (92.0%). For the entire cohort, patients reported their QoL at 78.1% for the physical domain and 80.1% for the psychological domain. Overall physical (77.6% vs. 79.7%, p = 0.79) and psychosocial (80.0% vs. 80.3%, p = 0.72) scores were similar between groups. A similar proportion of children in both groups had a score more than 1 SD below the mean of the corresponding validated population for the physical (30/82 patients vs. 9/31 patients, p = 0.45) and psychological (19/82 patients vs. 7/31 patients, p = 0.95) domains.

PICS-p domains
Reported symptoms are shown in Table 3, and data were available for all patients. Regarding the physical domain, fewer patients in the IMV group reported dyspnea (10.4% vs. 27.3%, p < 0.01). Fatigue was more common in the IMV group than in the NIV group (18.9% vs. 6.8%, p = 0.01). There was no difference in voice changes and oral aversion between the two groups. Developmental outcomes were assessed in 99/118 preschool children (83.9%). Only 22.2% of children had normal scores in all ASQ domains, but there was no difference between groups (Figure 1). A change in baseline school performance was reported in 13/28 (46.4%) and 0/4 (0%) of school-aged children in the IMV and NIV groups, respectively (p = 0.07). Emotional domains of PICS-p were assessed in 38 of 75 eligible patients (50.7% of those ≥1 year old). In the IMV group, 3/28 (10.7%) of patients reported symptoms of PTSD, and 4/28 (14.3%) reported functional impairments. No patient in the NIV group reported symptoms of PTSD or functional impairment (0/10). Finally, parental psychosocial outcomes assessed through the HADS questionnaire were available for 110 of 150 (73.3%) families. In the entire cohort, symptoms of anxiety were reported in 29.9% and symptoms of depression in 24.6%. These outcomes were similar between the two groups (Figure 2).

Discussion
This study reports PICS-p-related outcomes of a cohort of children undergoing mechanical ventilation in a level-3 PICU, 2 months after discharge. Outcomes of children treated with IMV and of those exclusively treated with NIV are also described. Overall, QoL scores of children in our cohort were lower than those of healthy children, and developmental delays were common across ventilation groups. PTSD symptoms and functional impairments were present in 10% and 14% of children with IMV, respectively, and were not seen in children with NIV. Parental anxiety and depression were reported in 29.9% and 24.6% of the entire cohort, and were present in similar proportions in both groups.

QoL scores were lower than those described in healthy populations (13-15), with 36.0% of the patients studied having scores 1 SD below the mean. These results are consistent with a recent study by Watson et al. (25). A prospective study also reported a persistent decrease in QoL from baseline in 31% of children with ARDS at 9 months post-discharge (26). Our study is, however, the first to report the outcomes of children receiving MV for a broader variety of indications than respiratory disease. Persistent dyspnea at follow-up was reported in 15.3% of all patients, and in 31.4% of children with NIV. When comparing children admitted for a respiratory illness in the two groups (IMV ± NIV vs. NIV), there was no difference in respiratory difficulties at follow-up. Our findings suggest that children admitted to the PICU for a respiratory illness, irrespective of the mode of ventilation required, are more likely to have residual respiratory symptoms post-PICU and would benefit from medical follow-up. These respiratory symptoms have also been shown to persist over time. For example, a study that investigated the long-term pulmonary outcomes of children less than 2 years old receiving IMV for bronchiolitis reported that a quarter of those children continued to exhibit respiratory symptoms such as asthma once school-aged (27). In terms of other physical symptoms, our study is also the first to report increased fatigue post-PICU in children with IMV. Furthermore, cognitive morbidity was frequent in our cohort, as 83.9% of preschool children exhibited developmental delay and 36.1% of school-aged children reported a change in baseline school performance after their PICU stay.
A recent study of a small cohort of children with normal development at the time of PICU admission for bronchiolitis, supported by high-flow nasal cannula and/or mechanical ventilation, reported that 44% had cognitive disability at 1 or 2 years after PICU discharge (28). Finally, functional decline has been described in 12% of children 6 months after IMV for bronchiolitis (29). Functionality was not assessed in the current study.

Critical illness and PICU admission both have an impact on the family of a critically ill child. We detected probable or definite symptoms of anxiety and depression in 29.9% and 24.6% of respondent parents, respectively. The incidence was equally high in parents of children receiving only NIV vs. children with IMV, suggesting that having a child undergoing non-invasive support may be as distressing to caregivers as having a child who is intubated. This may be due to the level of agitation and sedation of the patient, the underlying diagnosis, patient age, or a variety of other factors that were not explored in this study.

Our study contributes to the growing body of literature highlighting the extensive range of adverse mid- and long-term outcomes experienced by children admitted to the PICU. It reinforces the impetus for the development of robust and systematic post-PICU follow-up programs. Notably, our research is the first to show that all children, even those requiring NIV without IMV, also experience long-term complications. This underscores the necessity of providing support and follow-up for this group as well. While high illness severity is a recognized risk factor for adverse outcomes, our findings illuminate our incomplete understanding of the complex factors influencing the recovery of both families and their children, encompassing both psychosocial and physical dimensions.
Our study does have some limitations. First, it is a single-center study, and it involved a heterogeneous patient population requiring mechanical ventilation. This heterogeneity may have contributed to a blurring of differences between the two groups, potentially resulting in non-significant findings for some of the outcomes compared. The descriptive nature of this retrospective study also prevents us from establishing associations between outcomes and mechanical ventilation, as the post-PICU morbidities described in this study may be secondary to other factors such as pre-existing comorbidities. This study also has a patient selection bias, as families and patients voluntarily engage in follow-up at our clinic and may not be representative of the entire cohort of patients under mechanical ventilation. Furthermore, it must be noted that only patients undergoing NIV exclusively for at least 4 days were included in this study, and children with shorter durations of NIV were excluded. Consequently, the cohort of patients receiving NIV in this study may experience more severe morbidities than the comprehensive population of patients receiving NIV, as they might suffer from more severe illness and may be exposed for a longer period of time to PICU therapies. Lastly, our follow-up was limited to a relatively short period following the PICU stay and did not include any objective measurement of pulmonary function. Extending the follow-up duration could provide a more comprehensive understanding of the issues and the required post-PICU care for this specific cohort. This is especially important considering that certain deficits might improve over time, as noted in previous studies assessing functional impairments (9,10). Therefore, conducting long-term outcome studies would offer a more nuanced perspective.

Conclusion
PICU survivors and their families experienced significant morbidities 2 months after their critical illness, whether they received IMV or NIV. Children with respiratory illness exhibited a higher prevalence of persistent respiratory difficulties post-PICU, whether they underwent IMV or NIV. Children's QoL and parental anxiety and depression scores were similar irrespective of the type of respiratory support received. These results underscore the importance of extending post-PICU follow-up to include children who received NIV, as they too may benefit from ongoing care and support.

FIGURE 1: Ages and stages questionnaire score. Results of the ages and stages questionnaire (ASQ) in 16 children, 1-60 months old, evaluating pre-school children's development in five domains, here presented as typical development (normal) vs. any delay. IMV, invasive mechanical ventilation; NIV, non-invasive ventilation.

FIGURE 2: Hospital Anxiety and Depression Scale score. Results of the Hospital Anxiety and Depression Scale, completed by 110 parents. In the IMV group, 16.9% (13/77) reported probable symptoms of depression and 7.8% (6/77) reported definite symptoms of depression, whereas 14.3% (11/77) of parents reported probable symptoms of anxiety and 14.3% (11/77) reported definite symptoms of anxiety. In the NIV group, 18.2% (6/33) reported probable symptoms of depression and 6.1% (2/33) reported definite symptoms of depression, whereas 15.2% (5/33) of parents reported probable symptoms of anxiety and 18.2% (6/33) reported definite symptoms of anxiety. There was no significant difference in depression or anxiety scores between groups (p = 0.12 and p = 0.31, respectively). IMV, invasive mechanical ventilation; NIV, non-invasive ventilation.
TABLE 1: Characteristics of PICU patients and ventilation data.
TABLE 2: Quality of life score 2 months after discharge from PICU.
TABLE 3: Patients/parents-reported symptoms at 2-months follow-up.
TABLE 4: Patients/parents-reported symptoms at 2-months follow-up in patients admitted with a diagnosis of respiratory illness.
TET family dioxygenases and DNA demethylation in stem cells and cancers

The methylation of cytosine and its subsequent oxidation constitute fundamental epigenetic modifications in mammalian genomes, and their abnormalities are intimately coupled to various pathogenic processes, including cancer development. Enzymes of the Ten–eleven translocation (TET) family catalyze the stepwise oxidation of 5-methylcytosine in DNA to 5-hydroxymethylcytosine and further oxidation products. These oxidized 5-methylcytosine derivatives represent intermediates in the reversal of cytosine methylation, and also serve as stable epigenetic modifications that exert distinctive regulatory roles. It is becoming increasingly obvious that TET proteins and their catalytic products are key regulators of embryonic development, stem cell functions and lineage specification. Over the past several years, the function of TET proteins as a barrier between normal and malignant states has been extensively investigated. Dysregulation of TET protein expression or function is commonly observed in a wide range of cancers. Notably, TET loss-of-function is causally related to the onset and progression of hematologic malignancy in vivo. In this review, we focus on recent advances in the mechanistic understanding of DNA methylation–demethylation dynamics, and their potential regulatory functions in cellular differentiation and oncogenic transformation.

INTRODUCTION
Eukaryotic DNA is tightly packaged into a highly organized chromatin structure with the assistance of special proteins called histones. 1 Approximately 146 base pairs of DNA are wrapped around a histone octamer that consists of two copies of four core histones (H2A, H2B, H3 and H4) to form the nucleosome, the smallest unit of chromatin. Nucleosomes are then linked by another histone protein called histone H1, followed by further compaction into a higher-order structure that makes up chromosomes. The amino-terminal tails of the core histone proteins are frequently subject to multivalent posttranslational modifications, such as acetylation, phosphorylation, methylation, sumoylation and ubiquitylation, altering the degree of local chromatin condensation and the accessibility of genetic loci to the cellular machinery that dynamically modulates chromatin architecture and gene expression. In addition to these histone modifications, a methyl group can be covalently attached to the carbon-5 position of a cytosine (C) in DNA to form 5-methylcytosine (5mC). This process, called 'DNA methylation', is a type of epigenetic mechanism that influences transcription, X-chromosome inactivation, suppression of mobile genetic elements and genomic imprinting. 2 Recent studies have demonstrated that adenines in the mammalian genome are also methylated to produce N6-methyladenine, but in this review, DNA methylation refers only to cytosine methylation. 3

In most mammalian genomes, cytosine methylation occurs almost exclusively in the context of palindromic CpG dinucleotides. 4,5 Typically, cytosines in both strands of a DNA duplex are methylated symmetrically. CpG methylation is catalyzed by a family of DNA methyltransferases (DNMTs), which are classified into two large categories. 6 During early embryogenesis, DNMT3A and DNMT3B initially deposit methylation marks on unmethylated CpGs, and thus are classified as de novo methyltransferases. Then, DNMT1, a maintenance methyltransferase, is largely responsible for the post-replicative inheritance of pre-existing methylation marks.
During semi-conservative DNA replication, the ubiquitin-like plant homeodomain and RING finger domain 1 (UHRF1) protein preferentially recognizes CpGs in the hemi-methylated DNA via its SET and RING-associated (SRA) domain, and recruits DNMT1 to restore parental methylation patterns on the nascent strand. [7][8][9][10][11] Therefore, the absence of DNMT1/UHRF1 can lead to the progressive dilution of cytosine methylation during successive rounds of DNA replication, a process called 'passive demethylation'. In addition, DNA demethylation can also take place in a replication-independent manner via the combined action of various enzymes, as described later.

It has long been considered that 5mC is a stably inherited epigenetic modification. However, a subset of 5mCs in the genome are epigenetically unstable and can be further modified enzymatically. Analyses of TET enzyme function have revealed that cytosine in DNA does not exist in a binary modification status (C versus 5mC) as previously believed, but can adopt one of five different states. 12 In the early 2000s, the TET1 gene was first cloned as a fusion partner of the mixed-lineage leukemia (MLL) H3K4 methyltransferase (also known as KMT2A) in a handful of acute myeloid and lymphocytic leukemia patients harboring the chromosomal rearrangement t(10;11)(q22;q23). 13,14 By a homology search, additional TET genes, TET2 and TET3, were also identified. However, TET protein function has only recently been determined. TET1 was identified in a search for mammalian homologs of J-binding protein (JBP) 1 and 2, the Fe(II) and 2-oxoglutarate (2OG)-dependent dioxygenases in Trypanosoma brucei that oxidize thymine in DNA to 5-hydroxymethyluracil (5hmU) during the synthesis of base J. [15][16][17] TET1 was shown to oxidize 5mC to 5hmC in cells and in vitro. The two cofactors, Fe(II) and 2OG, are indispensable for TET-mediated 5mC oxidation. Subsequent studies have shown that all three TET proteins belong to a family of dioxygenase enzymes and share identical catalytic activity to successively oxidize the methyl group of 5mC, yielding three distinct forms of oxidized methylcytosines (termed 'oxi-mCs'): 5-hydroxymethylcytosine (5hmC), 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). [18][19][20][21]

Dysregulation of DNA methylation is a prominent feature of cancers. 22 Recent studies have clearly established that 5mC oxidation is also highly disrupted in most cancer types. [23][24][25][26][27] Numerous studies point to the fundamental roles of the key epigenetic regulators such as DNMTs, TETs and isocitrate dehydrogenase (IDH) enzymes in gene expression, development, cellular differentiation and transformation. 28 Despite strenuous efforts over the last decade, the exact mechanism underlying enhanced malignant transformation upon the dysregulation of these factors remains poorly understood. Hematopoietic differentiation and transformation is one of the most extensively studied systems in this regard. Thus, in this review, we focus on the current mechanistic understanding of DNA methylation and demethylation pathways in mammals and their functional implications in cell development and transformation, focusing on the hematopoietic system.

STRUCTURAL BASIS FOR SUBSTRATE RECOGNITION AND ITERATIVE OXIDATION BY TET PROTEINS
TET proteins contain a carboxyl-terminal core catalytic domain that comprises a conserved cysteine-rich domain and a double-stranded β-helix domain (DSBH, also referred to as a 'jelly-roll fold') (Figure 1). 16,17
Within the DSBH domain, there are key catalytic residues that interact with Fe(II) and 2OG. Upon cofactor binding, molecular oxygen oxidizes Fe(II) in the catalytic pocket, thereby inducing the oxidative decarboxylation of 2OG and substrate oxidation. 29 A large low-complexity insert is found within the DSBH domain and is located at the exterior surface of the catalytic domain (Figure 1). Although the precise function of this insertion remains to be determined, it may have regulatory roles via post-translational modifications, such as glycosylation and phosphorylation. 30,31 A study has shown that the deletion of this insert markedly increases 5hmC production by the TET2 catalytic domain. 32

TET proteins also have an additional domain that potentially regulates their chromatin targeting. At the amino-terminal region, TET1 and TET3 have a DNA-binding domain called the CXXC domain, which is composed of two Cys4-type zinc finger motifs. 16,17,33 Interestingly, the ancestral TET2 gene underwent a chromosomal inversion during evolution; as a result, the segment encoding its CXXC domain was separated from the region encoding the catalytic domain. 34 Thus, the ancestral CXXC domain of TET2 is now encoded separately by a neighboring gene, IDAX (also called CXXC4). The CXXC domain of TET proteins (the IDAX CXXC domain in the case of TET2) is highly conserved and preferentially associates with unmethylated CpG-containing sequences. [34][35][36] The presence or absence of the CXXC domain may affect the genomic distribution of TET proteins; Tet1 is preferentially detected at promoter CpG islands (CGIs) or enhancers in mouse embryonic stem cells (ESCs), particularly at the former, whereas Tet2 is mostly enriched in gene bodies or enhancer regions. [37][38][39]

[Figure 1 caption: Domain structure of TET proteins. The carboxyl-terminal core catalytic domain is highly conserved among all TET family members and consists of a DSBH domain and a cysteine (Cys)-rich domain. The Cys-rich domain comprises two subdomains and modulates the chromatin targeting of TET proteins. The DSBH domain harbors key catalytic motifs, including the HxD motif, which interacts with Fe(II) and 2OG. A large low-complexity insert is found within the DSBH domain, but its function remains to be defined.]

Structural analyses of TET proteins provide significant insights into how TET enzymes recognize their substrates and catalyze iterative oxidation reactions. 32,[40][41][42] The crystal structure of the TET2 catalytic core domain revealed that two subdomains of the Cys-rich domain wrap around the DSBH domain on which the DNA is located. 32 Interestingly, two out of three zinc fingers, coordinated by several residues from the Cys-rich and DSBH domains, bring the two domains into close proximity to facilitate the formation of a compact globular structure, creating a unique architecture for DNA substrate recognition. 32 TET2 specifically recognizes 5mCpG-containing DNAs with no preference for the flanking sequences, consistent with the fact that 5hmC is almost exclusively located in the CpG context throughout the genome. 43 This interaction is stabilized by extensive intermolecular hydrogen bonds between key residues of TET2 and the phosphates flanking the 5mCpG in the DNA backbone. Hydrophobic interactions resulting from base-stacking interactions also contribute to the overall stability of the structure.
Interestingly, CpG recognition does not depend on the methyl group of 5mC; accordingly, TET proteins can accommodate the formyl and carboxyl groups of highly oxidized 5mC derivatives at the active site. 32,42 Unlike 5mC, the majority (>80%) of oxi-mCs are deposited asymmetrically on a specific CpG site. 43,44 What is the molecular basis for this strand asymmetry? As observed for 5mC recognition by the SRA domain of UHRF1, TET2 also recognizes oxi-mCs using a base-flipping mechanism. Upon TET2 binding to the symmetrically methylated palindromic CpG DNA, only a single oxidized base in one strand is flipped out of the DNA duplex and incorporated into the active site. 32,40 A similar base-flipping mechanism has also been observed in the structure of the Naegleria Tet-like dioxygenase (NgTet1). 42

In mouse ESCs, TET enzymes convert ~10% of 5mCs to 5hmCs, and only a subset (1-10%) of 5hmCs are further oxidized to 5fC/5caC. Therefore, 5hmC is about 10- to 100-fold more prevalent than the more oxidized bases in the genome. 17,20,[45][46][47] This unequal genomic distribution of oxi-mCs might be attributable, at least in part, to TDG/BER-mediated active demethylation, because 5fC and 5caC, but not 5hmC, are reverted to unmethylated cytosines (Figure 2). In addition, a fraction of oxi-mCs, mostly 5hmC, may not undergo the entire series of oxidation reactions because TET enzymes differentiate among their substrates. Indeed, TET proteins are less active on 5hmC and 5fC than on 5mC in vitro, indicating a substrate preference. 20,40,42 TET-mediated oxidation tends to occur preferentially in regions with higher chromatin accessibility. What determines whether oxi-mCs are committed to undergoing further oxidation? Notably, all three oxi-mCs are similarly recognized by TET proteins with comparable binding affinity, and adopt almost identical conformations within active sites. 40 However, the hydroxymethyl group of 5hmC and the formyl group of 5fC adopt a more restrained conformation within active sites by forming hydrogen bonds with N-oxalylglycine (NOG, 2OG under physiological conditions) as well as with polar groups of the cytosine ring. This structural restriction prevents hydrogen abstraction, the rate-limiting step for TET-mediated oxidation reactions, with a concomitant decrease in catalytic efficiency. 40 Collectively, the catalytic core of TET proteins has intrinsic properties for efficient CpG recognition, substrate preference and strand biases (Figure 1). Thus, a fraction of 5hmC is less prone to further oxidation and remains as a stable epigenetic mark.

[Figure 2 caption: Function of TET proteins in passive and active DNA demethylation. TET proteins iteratively oxidize 5mC to produce oxidized methylcytosines (oxi-mCs), of which 5fC and 5caC are directly excised by the DNA repair enzyme TDG (thymine DNA glycosylase). The resulting abasic sites are eventually replaced with unmethylated cytosines by base excision repair (BER). No mammalian 5mC glycosylases that directly excise 5mC have been reported to date. TET proteins also promote the oxidative demethylation of 5mC in a replication-dependent manner because oxi-mCs tend to interfere with the methylase activity of DNMT1. TET proteins have a distinct preference for their substrates, so many oxi-mCs, mostly 5hmC, are not committed to demethylation pathways and are stable epigenetic modifications.]
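One way to see why a substrate preference leaves 5hmC as the dominant intermediate is a toy rate model of the oxidation cascade. The sketch below integrates first-order kinetics for 5mC → 5hmC → 5fC → 5caC with purely illustrative rate constants (an assumption: the later steps are simply taken to be ten-fold slower, mirroring the qualitative in vitro observation; none of these values are measured).

```python
# Toy kinetic sketch of stepwise oxidation 5mC -> 5hmC -> 5fC -> 5caC
# (assumption: first-order kinetics with illustrative rate constants only).
def oxidize(k1=1.0, k2=0.1, k3=0.1, dt=0.01, t_end=10.0):
    mC, hmC, fC, caC = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):   # simple forward-Euler integration
        d_mC = -k1 * mC
        d_hmC = k1 * mC - k2 * hmC
        d_fC = k2 * hmC - k3 * fC
        mC += d_mC * dt
        hmC += d_hmC * dt
        fC += d_fC * dt
        caC = 1.0 - mC - hmC - fC      # conservation of total cytosine
    return mC, hmC, fC, caC

# With k2, k3 << k1, 5hmC is the most abundant species at intermediate times,
# echoing its genomic excess over 5fC/5caC.
print(oxidize())
```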
Considering the capability of TET enzymes to oxidize their substrates in a stepwise manner, the differential genomic levels of oxi-mCs also suggest that TET-catalyzed oxidation is not processive, and frequently stalls at the intermediate stages, most likely at 5hmC. TET proteins may associate transiently with specific substrates and detach before completing oxidation to the end product 5caC. Furthermore, there may be a division of labor among distinct TET enzymes. In fact, a recent study has shown that collaborative interplay among TET proteins and transcription factors is required to complete active DNA demethylation in enhancers. 48 In mouse ESCs, Tet1 recruits Sall4, which is a strong 5hmC-interacting protein in vitro, to enhancers. Unexpectedly, the Sall4-bound enhancers are substantially depleted of 5hmC, but significantly enriched for 5caC. Deletion of Sall4 increases 5hmC levels in these regions in a Tet1-dependent manner, suggesting that Tet1 is mainly responsible for the initial oxidation of 5mC to 5hmC. In contrast, Sall4 loss leads to a reduction in 5caC levels and Tet2 occupancy at the Sall4-bound enhancers. Furthermore, depletion of Tet2, but not Tet1, increases 5hmC levels at Sall4-bound sites. These observations suggest that cooperative interactions between Tet1 and Tet2 are coordinated by an oxi-mC-sensing transcription factor to complete stepwise 5mC oxidation at enhancer regions.

IMPACT OF OXIDIZED METHYLCYTOSINES IN DNA METHYLATION AND DEMETHYLATION
DNA methylation is a highly dynamic process. Therefore, it is important to precisely control the generation and erasure of methylation marks to ensure the long-term inheritance of cell type-specific epigenomic memory across generations. 49,50 As mentioned earlier, following DNA replication, hemi-methylated CpG DNAs are transiently formed with only the parental strand containing 5mC, and the original modification patterns are restored by re-methylating cytosines in the newly synthesized DNA strands (by DNMT1) and consecutively re-oxidizing the resulting 5mCs (by TET proteins). If the methylation maintenance machinery becomes non-functional or chromatin accessibility becomes restricted under certain conditions, 5mCs would be passively diluted as cells divide, either globally or locally. TET proteins can also promote this process, but they first oxidize 5mCs to oxi-mCs, which are subsequently diluted to regenerate unmethylated cytosines in a replication-dependent manner. Compared to maintenance methylation, whose molecular mechanism is relatively well defined, it is not clear how 5mC oxidation patterns are restored and faithfully inherited by daughter cells. It has been shown that maintenance methylation re-establishes methylation patterns immediately after DNA replication, but subsequent TET-mediated oxidation occurs relatively slowly, at a later time point. 51 TET proteins may not simply catalyze the successive oxidation of 5mCs once they are generated by DNMT1/UHRF1, and different mechanisms might be employed to restore the patterns of DNA methylation and oxi-mCs during cell division. How might the oxidized 5mC bases affect passive demethylation? Given that oxi-mCs in CpG-containing DNA interfere with the ability of DNMT1 to methylate CpG sites in vitro, [52][53][54] TET proteins were proposed to promote replication-dependent passive demethylation (Figure 2).
If this is the case, TET proteins might be able to induce progressive DNA demethylation even in the presence of active DNMT1/UHRF1, as observed in normal erythropoiesis (Figure 3). [55][56][57] Although the result is controversial, the SRA domain of UHRF1 has been shown to recognize 5hmC and 5mC with similar affinity. 58 The UHRF2 SRA domain also preferentially recognizes 5hmC. 59,60 As UHRF1 is an obligate partner protein of DNMT1, these results suggest that 5hmC could promote methylation maintenance by facilitating the recruitment of DNMT1 to hemi-hydroxymethylated DNA. Moreover, DNMT3A and DNMT3B, originally known as de novo DNA methyltransferases, are also required for DNA methylation maintenance in somatic cells, 61 and they display comparable methylase activity on 5mC- and oxi-mC-containing DNA in vitro, with 5fC increasing methylation efficiency most markedly. 53,54,62 Thus, further studies are required to elucidate the precise roles of oxi-mCs in the maintenance of DNA methylation.

In addition to passive dilution, 5mCs can also be removed enzymatically by a replication-independent mechanism, a process called 'active DNA demethylation' (Figure 2). 12,25,26,49 In plants, active demethylation depends on DEMETER and REPRESSOR OF SILENCING 1, which are well-characterized 5mC DNA glycosylases that directly excise 5mC to initiate base excision repair (BER). However, no orthologs with similar activities have been identified in mammals. The DNA repair protein thymine DNA glycosylase (TDG), which belongs to the uracil DNA glycosylase superfamily, was a strong candidate owing to its ability to remove the pyrimidine base from a T:G mismatch that arises from the deamination of 5mC. 63 However, given the preference of activation-induced deaminase (AID)/APOBEC deaminases for single-stranded DNA and unmethylated cytosine over modified bases, this pathway may play a marginal role. 64 Notably, TDG specifically recognizes 5fC and 5caC, but not 5mC and 5hmC, which normally base-pair with guanine, and it shows robust in vitro base excision activity. 18,60,65,66 TDG harbors a binding pocket that specifically accommodates these oxidized bases. 66 Mechanistically, 5fC and 5caC were shown to destabilize the covalent bond that links them to the sugar, making the glycosidic bond more susceptible to cleavage by TDG. 67,68

It is now clear that 5mC in mammalian genomes can be removed by a two-step process (Figure 2). TET proteins first oxidize 5mCs to form oxi-mCs, and TDG subsequently excises the highly oxidized bases 5fC and 5caC. 18,65 This excision reaction results in abasic sites that are eventually repaired by the BER pathway to restore unmodified cytosines. In line with this, the knockdown of Tdg in mouse ESCs leads to a 5- to 10-fold increase in the levels of genomic 5fC/5caC, whereas its overexpression in HEK293T cells markedly diminishes the levels of TET-generated 5fC/5caC. 18,44,64,[69][70][71][72][73] Interestingly, vitamin C treatment leads to a significant increase in the levels of 5fC and 5caC, consistent with its function in stimulating the catalytic activity of Tet enzymes. 44,[74][75][76][77][78] In line with its profound role in demethylation, Tdg is essential for embryonic development: mice with Tdg deficiency 79,80 or expressing a mutant Tdg lacking glycosylase activity 80 exhibit developmental defects and embryonic lethality, possibly owing to the impaired removal of 5fC and 5caC.
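The replication-coupled dilution discussed in this section can likewise be captured in a few lines. The sketch below implements a deliberately simple model (an assumption, not a measured mechanism) in which each nascent-strand CpG is re-methylated with maintenance efficiency p; setting p = 0 reproduces the halving of 5mC per division expected when maintenance fails or oxi-mCs block DNMT1.

```python
# Toy sketch of replication-coupled ("passive") dilution of 5mC.
def methylation_after_divisions(m0: float, p: float, n: int) -> float:
    """Average strand-level 5mC fraction after n divisions.

    Parental strands keep their marks; nascent strands acquire a mark with
    maintenance efficiency p, so m' = (m + p*m)/2 = m*(1+p)/2 per division.
    """
    m = m0
    for _ in range(n):
        m = m * (1 + p) / 2
    return m

# Perfect maintenance (p=1) preserves methylation; with no maintenance (p=0),
# as when oxi-mCs block DNMT1, 5mC halves with every division:
print(methylation_after_divisions(0.8, 1.0, 5))  # 0.8
print(methylation_after_divisions(0.8, 0.0, 5))  # 0.025
```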
Other studies have shown that 5hmU can be generated as a result of either deamination by AID/APOBEC 80,81 or direct oxidation by TET enzymes, 82 followed by TDG-mediated BER. Furthermore, DNMTs could directly catalyze dehydroxymethylation, 83,84 and the cell lysate of ESCs exhibits 5caC decarboxylase activity. 85 These pathways need to be further characterized in vivo.

GENOMIC LANDSCAPE OF CYTOSINE METHYLATION AND ITS OXIDATION PRODUCTS

In mammalian genomes, ~4-5% of all cytosines in the CpG context are methylated to yield 5mC. The methylation frequency at individual CpG sites typically displays a bimodal distribution. In general, the majority (70-80%) of CpG sites within genic and intergenic regions are highly methylated, whereas a small fraction (<20%) that includes promoter CGIs and distal regulatory elements, such as enhancers, is notably depleted of methylation. 5,86-89 Interestingly, non-CpG methylation is prevalent in ESCs, neuronal precursor cells and ectoderm-derived tissues, such as the cerebellum, cortex and olfactory bulb. 5,89 Cancer cells display highly dysregulated DNA methylation profiles characterized by global hypomethylation, which presumably impairs genome integrity, in conjunction with localized hypermethylation of promoter CGIs associated with aberrant expression of tumor suppressor genes or repair genes. [90][91][92] However, recent technological advances have enabled the precise mapping of individual cytosine derivatives at single-base resolution, and these analyses have suggested that tumorigenesis is more highly associated with the genome-wide loss of 5hmC than of 5mC. 93

Interestingly, the global level of cytosine methylation across various human and murine tissues is remarkably similar. 88,89,94 However, some CpG sites (7-20%) in the mouse epigenome are differentially methylated among cell types; they are mostly hypomethylated in a tissue-specific manner. [86][87][88][89] Most of these regions represent small, evolutionarily conserved, distal cis-regulatory elements marked with H3K4me1, H3K27ac and p300 occupancy, and they show significant enrichment of tissue-specific transcription factor binding sites, indicating that they include active enhancers. [86][87][88][89] Intriguingly, transcription factor binding is necessary and sufficient to reduce methylation levels in these regions. In particular, cell type-specific transcription factors could locally modify these regions during differentiation, inducing dynamic changes in the expression of the neighboring genes. 87

[Figure 3: A model of TET-assisted passive DNA demethylation. The parental DNA methylation patterns are faithfully inherited by daughter cells across generations because the methylation maintenance machinery DNMT1/UHRF1 is targeted to the hemi-methylated DNA after DNA replication and re-methylates cytosine in the newly synthesized strand. Upon chromatin reorganization at certain genetic loci, such as enhancers, in response to cellular signals, TET proteins and BER components might become more accessible. As a result, a fraction of 5mC may undergo stepwise oxidation. After replication, the resulting DNA contains oxi-mCs on only one strand, which impairs maintenance methylation. Therefore, 5mCs would be passively diluted upon successive cell divisions, even in the presence of functional DNMT1/UHRF1. The impact of de novo DNA methyltransferases on DNA methylation maintenance is not considered here.]
Genome-wide mapping analyses have shown that 5hmC is also strongly enriched in hypomethylated distal regulatory elements, such as enhancers. 39,87,95 Base-resolution DNA methylome mapping has revealed that Tet deficiency leads to more hypomethylated sites than hypermethylated sites in ESCs. 43 Extensive DNA hypermethylation typically occurs in distal enhancer regions that are associated with enhancer-related histone modifications (H3K4me1 and H3K27ac), increased DNase I hypersensitivity, and occupancy by transcription factors and a histone-modifying complex. On the other hand, hypomethylated regions are randomly distributed throughout the genome. Notably, the majority of hypermethylated regions overlap significantly with regions enriched for 5fC and 5caC in Tdg-knockdown ESCs, suggesting that Tet-mediated demethylation mainly occurs in these regions. Changes in DNA methylation levels differentially influence the transcription of neighboring genes. 95 For example, Tet loss inhibits the recruitment of Kap1 to chromatin and induces derepression of most two-cell embryo (2C)-specific genes, such as Zscan4. As expected based on the known function of Zscan4 in telomerase-independent telomere elongation, telomeres are elongated in Tet-deficient ESCs.

On the basis of genome-wide profiling, 5fC and 5caC mostly reside in distal regulatory elements, including active/poised enhancers, CTCF-bound insulators, active/poised promoters, and the gene bodies of actively transcribed genes. 44,[69][70][71][72]96 Combined with Tdg depletion, these studies have enabled assessments of the dynamics and regulatory mechanisms of active DNA demethylation pathways. Interestingly, 5fC/5caC and 5hmC largely exist at distinct CpGs, and 5fC and 5caC frequently do not overlap at individual CpGs. There are about three times more CpGs modified with 5hmC alone than in association with 5fC/5caC, 44 indicating that TET/TDG-mediated active demethylation preferentially stops at the 5hmC step and, accordingly, that the majority of 5hmCs could exist as stable marks. Furthermore, a considerable fraction of 5fC/5caC peaks are found in distal regulatory elements with relatively higher chromatin accessibility, suggesting that the catalytic processivity of TET enzymes is regulated by the local chromatin environment. Interestingly, like 5hmC, most 5fC/5caC marks are asymmetrically modified, 47 demonstrating that active DNA demethylation activity targets palindromic CpGs asymmetrically, consistent with the asymmetric base-flipping model.

OXIDIZED 5-METHYLCYTOSINE DERIVATIVES AS DISTINCT EPIGENETIC MARKS

Oxi-mCs are detectable in most tissues, but their levels are very low compared to those of other bases and highly variable across cell types. 5hmC is most prevalent in ESCs, Purkinje neurons and the brain. 45,94,[97][98][99] As discussed, a significant amount of 5hmC is maintained as stable, demethylation-independent bases that can exert independent epigenetic roles. 25,26,47,51 The presence of oxi-mCs in DNA influences its physical properties. For example, 5hmC increases the thermodynamic stability of the DNA double helix. 100 When 5fC is incorporated into DNA, it induces alterations in the local DNA structure and influences the accessibility of DNA-binding proteins, presumably by altering the degree of DNA supercoiling and packaging. Furthermore, RNA polymerase II specifically recognizes 5caC and 5fC and forms hydrogen bonds with the 5-carboxyl or 5-carbonyl groups of 5caC or 5fC, respectively.
As a result, RNA polymerase II is transiently stalled, thereby delaying transcription elongation along gene bodies. 101,102 Moreover, individual oxi-mCs were shown to be specifically recognized by numerous cellular proteins, called 'oxi-mC readers', which can differentiate the distinct chemical modification statuses of oxi-mCs. 60,103-106 By altering the modification status of different cytosine derivatives, cells might be able to selectively control the chromatin association and dissociation of these cellular proteins. For instance, the transcription factor Wilms tumor 1 binds preferentially to unmethylated or methylated DNA, but binds less efficiently when its cognate binding site contains oxi-mCs. In addition, TET proteins interact with diverse cellular proteins that potentially affect their chromatin targeting and steady-state levels, as reviewed elsewhere. 25,29

TET PROTEINS IN HEMATOLOGIC CANCERS

TET2 is frequently mutated in a wide spectrum of myeloid malignancies, including ~20% of myelodysplastic syndrome (MDS), 20% of myeloproliferative neoplasm (MPN), 50% of chronic myelomonocytic leukemia (CMML), and 20% of acute myeloid leukemia (AML) cases, as reviewed elsewhere. 23,25,27 TET2 mutations are associated with aberrant DNA methylation patterns in myeloid malignancies. TET2 deletions and mutations are mostly heterozygous and are considered an early event in the pathogenesis of myeloid malignancies. Most of the leukemia-associated TET2 missense mutations are inactivating mutations that inhibit or abolish the catalytic activity of TET2 in vitro and in vivo. 21 These mutations may impair the interaction of Fe(II) and 2OG at the active site or affect the structural integrity of the catalytic core domain. Furthermore, TET2 was shown to be monoubiquitylated by the CRL4-VprBP E3 ligase, which promotes the chromatin binding of TET2. 107 Interestingly, leukemia-associated TET2 mutations frequently target the residues that are directly ubiquitylated or required for association with the E3 ligase.

Early studies using hematopoietic stem/progenitor cells (HSPCs) from MPN patients bearing TET2 mutations 108 or HSPCs in which Tet2 expression was knocked down 109,110 have shown that Tet2 inactivation induces a developmental bias toward myeloid lineages at the expense of other lineages. Overall, various Tet2 loss-of-function mouse models exhibit very similar phenotypes, including augmented HSC expansion, increased repopulating capacity of HSCs, and skewed differentiation toward the myeloid lineage. [23][24][25][26] Some strains of Tet2-deficient mice, including those containing a homozygous or heterozygous deletion of Tet2, developed myeloid malignancies, indicating a causal relationship between Tet2 loss-of-function and myeloid transformation. Notably, Tet2 deletion in more highly differentiated myeloid cells, in contrast to HSPCs, is not capable of inducing leukemogenesis, and only wild-type, but not catalytically inactive, Tet2 could rescue the leukemogenic phenotypes in Tet2-deficient mice, suggesting that the catalytic activity of Tet2 is required to suppress myeloid transformation. 111 Consistent with recurrent TET2 mutations in a subset of lymphoid malignancies, T-cell lymphoma with follicular helper T-cell-like phenotypes has also been observed in some Tet2-deficient mice. 112 These results collectively suggest that TET2 functions as a bona fide tumor suppressor in hematological malignancies. However, it appears that Tet2 deletion/mutation alone is not enough to drive full-blown leukemia.
Thus, TET2 dysregulation may contribute to the induction of a pre-leukemic condition, and the acquisition of additional mutations may then drive the development of full-blown malignancy. Supporting this hypothesis, Tet2 deficiency has synergistic effects with various leukemia-related mutations that commonly co-exist with TET2 mutations in patients. Depending on the type of second mutation, the fate of leukemic cells can diversify, and the disease latency is markedly shortened. 113

TET1 also has a regulatory role in hematopoietic transformation. Interestingly, TET1 seems to exert context-dependent effects. TET1 is a direct transcriptional target of MLL fusion proteins and activates the expression of its downstream oncogenic targets to promote leukemogenesis, suggesting an oncogenic role in MLL-rearranged leukemia. 114 In contrast, the loss of Tet1 in mice promotes the development of B-cell lymphomas resembling follicular lymphoma and diffuse large B-cell lymphoma, albeit with a long latency, 115 suggesting a tumor suppressor function in lymphomagenesis. In non-Hodgkin B-cell lymphoma (B-NHL), TET1 expression is suppressed at the transcriptional level via promoter CpG methylation. Tet1 deficiency leads to an enhanced serial replating capacity of HSPCs, augmented HSC self-renewal and repopulating capacity, and the accumulation of DNA damage. Tet1 loss also induces a developmental bias toward the B-cell lineage.

Tet3 deficiency in mouse HSCs does not produce any overt hematopoietic phenotypes, except for the expansion of HSPCs. 25 However, both Tet2 and Tet3 are highly expressed in the hematopoietic system, suggesting that they play redundant roles in the regulation of normal hematopoiesis and oncogenesis. 116 As expected, the combined loss of Tet2 and Tet3 markedly impairs 5hmC production in hematopoietic cells, suggesting that they are the major 5mC oxidases in the hematopoietic system. Remarkably, the dual loss of Tet2 and Tet3 rapidly induces the development of highly aggressive, fully penetrant and cell-autonomous myeloid leukemia in mice. In Tet2/Tet3 double-deficient HSPCs, myeloid lineage genes are significantly upregulated, whereas lymphoid and erythroid lineage genes are strongly downregulated. These altered gene expression patterns are associated with myeloid skewing. The double deficiency leads to a mild but consistent increase in DNA methylation, but this altered DNA methylation has only a mild relationship to gene expression levels. Furthermore, upon the loss of Tet2 and Tet3, DNA damage progressively accumulates, suggesting that TET proteins also play significant roles in maintaining genomic integrity.

In addition to myeloid cancers, TET2 mutations are also found in lymphoid cancers, including ~2% of Hodgkin's lymphoma and 10% of T-cell lymphoma cases. Furthermore, TET1 expression is significantly downregulated in acute B-lymphocytic leukemia. Because both TET1 and TET2 are frequently downregulated in acute B-lymphocytic leukemia, the impact of the simultaneous deletion of both genes on hematopoietic development has been tested. 117 Surprisingly, Tet1/Tet2 double-knockout mice show a significant decrease in the frequency of myeloid malignancies and have a strikingly improved survival rate compared to that of Tet2-deficient mice. Even haplo-insufficiency of Tet1 is sufficient to induce these phenotypes in Tet2-deficient mice.
Furthermore, the double-knockout mice mainly develop transplantable, lethal B-acute lymphoblastic leukemia-like malignancies associated with the clonal expansion of B cells, extensive lymphocyte infiltration into the bone marrow, spleen and liver, spleno-hepatomegaly, and enlarged lymph nodes.

ADDITIONAL MAJOR EPIGENETIC FACTORS IN HEMATOPOIETIC CANCERS

IDH enzymes

Recent studies suggest that an altered metabolic status is closely linked to cellular transformation because many key enzymes implicated in tumor suppression consume various metabolites as cofactors. TET proteins require 2OG to catalyze 5mC oxidation. 2OG is mainly produced in the TCA cycle by IDH enzymes, which catalyze the oxidative decarboxylation of isocitrate (Figure 4). Interestingly, recurrent heterozygous mutations in the IDH1 and IDH2 genes have been detected in a majority of glioblastomas and in various hematopoietic malignancies, including MDS, MPN and AML. 25 IDH mutations are almost exclusively targeted to specific mutational hotspots (R132 in IDH1 and R140 and R172 in IDH2) and confer a neomorphic ability to reduce 2OG to 2-hydroxyglutarate (2HG) (Figure 4). 118 Thus, patients with IDH mutations show elevated levels of 2HG. In addition, inactivating mutations frequently arise in other genes that encode additional metabolic enzymes. For example, mutations in succinate dehydrogenase (SDH) and fumarate hydratase (FH) lead to the accumulation of succinate and fumarate. Interestingly, the structures of 2HG, succinate and fumarate are very similar to that of 2OG. Accordingly, they can compete with 2OG to inhibit 2OG-dependent dioxygenases, including TETs and JmjC-domain-containing histone demethylases, causing increases in histone and DNA methylation (Figure 4). As a way of targeting mutant IDH enzymes to treat cancers, specific inhibitors that interfere with 2HG production by mutant IDH enzymes have been developed, and they have shown efficacy against gliomas in vitro and in vivo. 119

To characterize the in vivo function of IDH mutations, several mouse models, including those expressing mutant IDH1 or IDH2, have been generated. 120,121 Although the expression of mutant IDH in mice leads to abnormal hematopoietic phenotypes, these mice are not exact phenocopies of those with Tet2 deficiency. For example, IDH mutations do not significantly affect myeloid differentiation or the repopulating capacity of HSCs, effects that are consistently observed in various Tet2-deficient mouse models. Furthermore, no leukemogenesis has been observed in any of these mouse models. Thus, these results suggest that IDH mutations alone contribute to pre-leukemic conditions, and full-blown leukemia develops via the gain of additional mutations. Interestingly, genetic or pharmacological suppression of mutant IDH proteins could promote the differentiation of leukemic cells and significantly ameliorate the pathogenic features, suggesting a requirement for 2HG in the maintenance of leukemic cells. 121,122

DNMT3A

During hematopoiesis, DNA methylation patterns are dynamically regulated. 123,124 Individual DNMTs have been shown to be critical for HSC self-renewal, normal hematopoietic differentiation, lineage specification and the suppression of malignant transformation. 24 Among them, the de novo DNA methyltransferase DNMT3A has gained much attention. In mice, the loss of Dnmt3a in HSCs augments HSC self-renewal and impairs differentiation over serial transplantation, 125 effects that are further enhanced by the additional loss of Dnmt3b. 126
Dnmt3a-deficient HSCs show aberrant DNA methylation patterns, but the changes in DNA methylation are not strongly correlated with alterations in gene expression levels. HSCs doubly deficient in Dnmt3a and Dnmt3b have large hypomethylated regions in the CGI shore of the β-catenin (Ctnnb1) promoter, which transcriptionally upregulates β-catenin and its downstream target genes to block HSC differentiation. DNMT3A is also frequently mutated in a wide range of hematopoietic malignancies, including AML (20-30%), MDS (10-15%), and MPN (~8%), and DNMT3A mutations are generally correlated with poor prognosis. 127 These mutations are typically heterozygous and target a specific residue, arginine 882, in the catalytic domain. The DNMT3A R882H mutant has a dominant negative effect. The expression of the DNMT3A R882H mutant or the deletion of Dnmt3a in mice leads to the development of a wide spectrum of myeloid and lymphoid malignancies resembling MDS, MPN, CMML, AML and acute lymphoblastic leukemia, although the disease latency is very long. Similar to TET2 mutations, DNMT3A mutations are considered an early event introduced in HSCs, inducing a pre-leukemic condition, and Dnmt3a deficiency cooperates with MLL-AF9, Flt3-ITD, and other mutations, such as c-Kit, Kras and Npm1 mutations, to promote oncogenic transformation toward a diverse spectrum of malignancies.

DNMT3A mutations frequently co-exist with TET2 mutations in lymphoma and leukemia. Mutations in the two genes are expected to modulate DNA methylation patterns in opposite directions: the former generally leads to global hypomethylation, whereas the latter leads to hypermethylation. However, in studies involving Dnmt3a- or Tet2-deficient mice, the pathological outcomes are very similar. Because TET2 consumes the 5mC generated by DNMT3A, both mutations would produce the same end result at the molecular level, that is, a loss of oxi-mCs. A recent study has shown that, compared to single deletions, the combined deletion of Dnmt3a and Tet2 in mice further augments the accumulation and repopulating capacity of HSPCs and accelerates the development of hematologic malignancies, including B-cell and T-cell lymphomas, 128 similar to DNMT3A R882H in the Tet2-deficient background. 129 The dual loss of both enzymes results in the downregulation of HSC-specific genes and the derepression of lineage-specific genes. For example, Dnmt3a and Tet2 collaborate to prevent the activation of Klf1 and Epor. These genes are known as erythroid lineage genes, but erythropoiesis is paradoxically blocked in the double-knockout mice, resulting in anemia. Interestingly, these genes promote the self-renewal of double-knockout HSPCs in vitro. Further studies are required to precisely assess whether the loss of 5mC, oxi-mCs or both contributes to malignant hematopoiesis.

[Figure 4: TET protein as a linker between metabolism and epigenetic regulation. During the tricarboxylic acid (TCA) cycle, IDH enzymes catalyze the oxidative decarboxylation of isocitrate to generate 2-oxoglutarate (2OG), an essential co-substrate that TET enzymes require to oxidize their substrates. Mutations in the IDH1 gene increase the binding affinity for NADPH relative to isocitrate and NADP+; thus, the resulting mutant enzymes acquire neomorphic activity to reduce 2OG to 2-hydroxyglutarate (2HG). Owing to the structural similarity, 2HG can function as a competitive inhibitor of TET enzymes.]
CONCLUSIONS AND PERSPECTIVE

DNA methylation plays pivotal regulatory roles in diverse cellular processes, such as transcription and the maintenance of genome integrity, and its aberrations influence mammalian development and cancer development. TET proteins directly modulate the DNA methylation landscape by successively oxidizing 5mCs. TET loss-of-function is commonly observed in various cancers, both hematopoietic and non-hematopoietic, and studies of various mouse models have clearly shown that it is causally related to the pathogenesis of hematologic cancers. Notably, the re-introduction of wild-type Tet activity into Tet-deficient HSPCs fully rescues the leukemogenic phenotypes in mice. Similar tumor-suppressor functions are anticipated in a wide spectrum of solid cancers. Therefore, the restoration of TET expression or function in cancers could have an immense clinical impact. In this regard, it is noteworthy that combined treatment with DNMT inhibitors and vitamin C shows a marked effect in restoring TET activity in cancers. Despite the vast information on the regulatory functions of TET proteins in stem cell maintenance, lineage specification, gene transcription, genomic integrity and oncogenesis, it is still unclear how TETs control normal cell differentiation and malignant transformation. Further studies are required to uncover the exact molecular mechanisms underlying accelerated oncogenesis upon TET loss-of-function. Furthermore, it is also necessary to develop tools to precisely manipulate TET function in cancer cells and to identify targets for therapeutic intervention and/or preventive measures.
Project description and crowdfunding success: an exploratory study

Existing research on antecedents of funding success mainly focuses on basic project properties such as funding goal, duration, and project category. In this study, we view the process by which project owners raise funds from backers as a persuasion process through project descriptions. Guided by the unimodel theory of persuasion, this study identifies three exemplary antecedents (length, readability, and tone) from the content of project descriptions and two antecedents (past experience and past expertise) from the trustworthiness cues of project descriptions. We then investigate their impacts on funding success. Using data collected from Kickstarter, a popular crowdfunding platform, we find that these antecedents are significantly associated with funding success. Empirical results show that the proposed model incorporating these antecedents can achieve an accuracy of 73 % (70 % in F-measure). This represents an improvement of roughly 14 percentage points over the baseline model based on informed guessing and 4 percentage points over the mainstream model based on basic project properties (or 44 % of the mainstream model's improvement over informed guessing). The proposed model also has superior true positive and true negative rates. We also investigate the timeliness of project data and find that old project data is gradually becoming less relevant and losing predictive power for newly created projects. Overall, this study provides evidence that antecedents identified from project descriptions have incremental predictive power and can help project owners evaluate and improve the likelihood of funding success.

Introduction

In recent years, crowdfunding has emerged as a revolutionary financing model that allows small entrepreneurs to raise funding in the early stages of their projects, particularly those that may otherwise struggle to obtain capital (Kuppuswamy and Bayus 2013; Belleflamme et al. 2014). Today, there are approximately 1250 active crowdfunding platforms across the world, which together channeled $16.2 billion in 2014, representing a 167 % increase from $6.1 billion in 2013 (Massolution 2015). Having their project successfully funded is crucial to project creators, as it provides not only initial funds for project development but also access to valuable future resources, eventually turning their projects into successful entrepreneurial organizations (Mollick 2014). Previous research shows that only 45 % of the projects on these platforms are successfully funded (Greenberg et al. 2013; Mollick 2014). As a result, identifying general antecedents of funding success (i.e., of being successfully funded) has been of great interest to researchers because it can provide insights to project creators to maximize their funding success (Greenberg et al. 2013; Xu et al. 2014). It is natural to believe that one of the important antecedents of funding success is the quality of the project, and previous research on crowdfunding has suggested that project quality is positively associated with the likelihood of funding success (Mollick 2014). However, project quality is a latent construct measured from different aspects, such as innovation, market conditions, and the management team. This measurement requires a high level of expertise and experience in venture investment, and it is usually done case by case.
Consequently, existing studies on the antecedents of funding success mainly focus on project properties that may directly or indirectly impact funding success. For example, research has found that project properties 1 such as the funding goal, campaign duration, and the number of Facebook friends of the project creator are associated with funding success (Agrawal et al. 2011; Greenberg et al. 2013; Z. Li and Duan 2014; Mollick 2014; Xu et al. 2014; Kuppuswamy and Bayus 2015). Although existing research has identified an impressive list of antecedents associated with funding success, our primary criticism is that it focuses only on basic project properties and largely ignores the information in project descriptions. This paper tries to fill this gap by highlighting the importance of project descriptions and identifying influential antecedents of funding success under theoretical guidance.

Similar to traditional business plans, project descriptions are highly recommended to include the following information: what you are trying to do, how you will do it, how the funds will be used, qualifications to complete the project, people on the team, and how far along the project is (Kickstarter 2016). The previous entrepreneurship literature has evidenced that nascent entrepreneurs manage impressions and persuade business angels by manipulating the language use of business plans (e.g., tone and style), hoping to increase the likelihood of being selected for further consideration or getting funded (Chan and Park 2015; Parhankangas and Ehrlich 2014). Owners of crowdfunding projects are essentially entrepreneurs and have similar funding needs. We conjecture that they have the propensity to use project descriptions to promote their projects (products) and persuade backers to make a financial contribution. Following previous research on traditional business plans (Chen et al. 2009), we view the process by which project owners secure funding from backers as a "persuasion process" and introduce the unimodel of persuasion into the crowdfunding domain. However, the primary interest of this paper is not to test the unimodel of persuasion, but to utilize its theoretical guidance and explore potential antecedents of funding success. Although there are other persuasion theories, such as the Elaboration Likelihood Model (ELM), we choose the unimodel because it clearly indicates the sources of influential factors and has been successfully applied in the entrepreneurship literature to study the persuasion process of venture capitalists' funding decisions (Chen et al. 2009). The unimodel of persuasion classifies persuasive information into issue-relevant (the content of a message) and issue-irrelevant (cues other than the message itself) information, and it argues that these two types of information are functionally equivalent in persuasion, though they may be quantitatively different (Kruglanski 1989; Kruglanski et al. 2006; Chen et al. 2009). Guided by the unimodel of persuasion, we identify five potential antecedents of funding success. Three of them (length, readability, and tone) are identified from the content of the project description (issue-relevant) and two of them (past experience and past expertise) from the trustworthiness cues (issue-irrelevant) of project descriptions. We then study whether these five newly identified antecedents are statistically influential on funding success and whether such influence is practically meaningful.
Our logistic regression results show that each of these antecedents is significantly associated with funding success. When these five antecedents are incorporated into a predictive (logistic) model, the results of N-fold cross-validation tests indicate that the proposed model can predict funding success with an accuracy rate (F-measure) of 73 % (70 %). The average accuracy rate (F-measure) of the mainstream model is around 69 % (66 %), and that of the baseline model around 59 % (57 %). This indicates that the proposed model has an improvement of roughly 14 percentage points (rounded) over the baseline model based on informed guessing and a 4 percentage point improvement over the mainstream model based on basic project properties. The differences among these three models are statistically significant under the t-test. More importantly, considering that the mainstream model only beats the baseline model by 9 percentage points (57 % to 66 %), the 4 additional percentage points (66 % to 70 %) gained by our proposed model are fairly significant, representing 44 % (i.e., 4 divided by 9) of the mainstream model's improvement over informed guessing. Together, these results show that our newly introduced variables have significant and practical impacts on the funding success of projects.

Additionally, the crowdfunding environment has experienced tremendous changes since its inception, in platform functions, users, policies, and so on. For example, the numbers of users and projects have grown drastically (Kickstarter 2014b), changing the competitive environment of crowdfunding. Additionally, both backers and owners are likely to change their behaviors through their use of crowdfunding platforms. These changes make us wonder 1) whether project data from earlier years have become "out of date" and have less power to predict the funding success of future projects, and 2) whether the sub-sample of project data immediately preceding the projects being predicted contains the most relevant information and has higher predictive power. To answer these questions, we investigate the timeliness of project data and provide evidence that old project data is gradually becoming less relevant and losing predictive power for newly created projects. Overall, our results provide insights for researchers, project owners, and backers to better study and use crowdfunding platforms.

The rest of this paper is organized as follows. We first review literature related to the antecedents of crowdfunding success and the unimodel of persuasion. We then propose a new method based on the unimodel to quantify the influence of project descriptions based on content analysis. We present and discuss our empirical results using a data sample collected from a popular crowdfunding site, Kickstarter. Finally, we provide conclusions and discuss opportunities for future research.

2 Background and literature review

2.1 Crowdfunding models and platforms

According to the context and nature of the funding effort, there are mainly four models of crowdfunding (Belleflamme et al. 2014). The first model is patronage-based, where supporters expect no direct return from their contributions or donations. The second one is lending-based, where supporters expect some rate of return on the capital invested. The third one is reward-based, where supporters receive a reward for backing a project. The reward can simply be a mention/credit in a movie or a prototype (earlier version) of a product.
The last one is equity-based, where supporters are treated as investors and are given certain shares of the future profit of the project (Mollick 2014). This study focuses on reward-based crowdfunding, in which there are two dominant models regarding how funds are raised and distributed to project owners, represented respectively by two popular crowdfunding platforms, Kickstarter and IndieGoGo. Fundraising on Kickstarter follows a rule called all-or-nothing, which means no one is charged for a pledge towards a project unless the project reaches its funding goal (Kickstarter 2014a). In contrast, IndieGoGo allows creators to keep the money pledged even if the project fails to meet its goal (IndieGoGo 2014). While the all-or-nothing policy leads to greater motivation on Kickstarter, on IndieGoGo it is at least possible to get some money as opposed to none. Kickstarter charges a 5 % commission fee on the funds raised for each project, and IndieGoGo charges between 4 and 9 % (the higher rate applying when the goal is not met).

Antecedents of funding success

Existing research has suggested that crowdfunding projects mostly succeed by narrow margins, or else fail by large amounts, and that crowdfunding success appears to be linked to project quality, i.e., projects of a higher quality level are more likely to be funded (Mollick 2014). However, the quality of a crowdfunding project is not easy to measure because individual backers generally lack the relevant expertise of venture capitalists (VCs), and their contribution decisions are usually based on factors such as feeling and preference which, because of the limited backer data on the crowdfunding platform, 2 are difficult to evaluate and quantify. As an alternative, researchers turn to other factors that may directly or indirectly influence the funding success of a project. Some researchers find that project properties, such as project category, funding goal, and campaign duration, are associated with funding success. Others show that the existence of images or videos in the project introduction is associated with funding success (Greenberg et al. 2013; Mollick 2014). Studies have also shown that a project owner's social influence, proxied by the number of friends on social networks such as Facebook, has an impact on funding success (Mollick 2014). Furthermore, researchers find a strong geographic influence on crowdfunding projects: project owners are more likely to propose projects reflecting the underlying cultural products of their geographic areas (e.g., a project related to country music in Nashville, Tennessee) (Agrawal et al. 2011; Mollick 2014). They suggest that the nature of the population in which founders operate is related to funding success (Kuppuswamy and Bayus 2013; Z. Li and Duan 2014). More recently, by studying the reciprocity effect in crowdfunding, Zvilichovsky et al. (2015) provide evidence that project owners' backing history has a significant effect on financing success: projects initiated by owners who have previously supported others have higher success rates, attract more backers and collect more funds.

Project description and persuasion theory

Although existing research on antecedents and funding success contributes greatly to our understanding of crowdfunding, few studies have focused on project descriptions. Research in the venture literature has evidenced that a "business plan serves as an important indicator of a venture's potential for success" (Chen et al. 2009, p. 202).
Despite the difference between the funding environments, project descriptions are similar to traditional business plans in terms of both content and function (Kickstarter 2016). On the one hand, the project description is one of the most important information sources for backers to evaluate a project and make their funding decisions. Early-stage investments typically involve unproven technologies and unfinished products and services. Thus, factual evidence pertaining to the new venture and its quality is often unavailable (Parhankangas and Ehrlich 2014). On crowdfunding platforms, backers "pre-order" products before they exist, and these products are "promised" to be delivered at a future date. Backers usually have no control over the project development, and there is little external information, such as customer reviews, for backers to evaluate a product or an owner. On the other hand, the project description is one of the few available tools for project owners to communicate with potential backers and promote their projects. This is especially true before the project is launched. Given that the number of crowdfunding projects has increased dramatically in recent years, the competition for backers' attention is becoming increasingly fierce (Mollick 2014). This highlights the importance of project descriptions for both project owners and backers in the crowdfunding domain. We conjecture that project owners have the propensity to use project descriptions as marketing tools to influence potential backers' contribution decisions.

There are only a few studies that have examined the information content of project descriptions in the context of crowdfunding. These studies, however, either use a case study approach relying on small samples (Ordanini et al. 2011) or simply include all phrases as predictive variables (Mitra and Gilbert 2014). To identify potential influential antecedents from project descriptions, we need to understand how information is processed by backers to form their funding decisions. On this point, social judgment and persuasion research offer potential insights. For example, Parhankangas and Ehrlich (2014) find that business angels' funding decisions are influenced by the language use of business proposals (plans). In another study, Chen et al. (2009) use a persuasion theory, the unimodel, and investigate the extent to which venture capitalists' perceptions of "entrepreneurial passion" from business plans influence their investment decisions. Following their approaches, we conceptualize the process by which project owners secure funding from backers through project descriptions as a persuasion process, and we employ the unimodel of persuasion to identify potential antecedents of funding success. Although the unimodel differs from other established paradigms of persuasion, such as the dual-process model of the ELM (Petty and Cacioppo 1986; Petty et al. 2002; Rucker and Petty 2006), it has received greater recognition and acceptance in the literature in recent years (Chen et al. 2009; Catellani and Alberici 2012; Suárez-Vázquez and Quevedo 2015). Dual-process models suggest that influence is formed through two routes, namely, the central route and the peripheral route, and that the influence of the two routes is both qualitatively and quantitatively different. In other words, individuals with higher motivation or cognitive ability tend to rely more on the central route, and the influence of the central route is more enduring than that of the peripheral route.
The unimodel of persuasion also classifies information into two types: issue-relevant (the content of a message) and issue-irrelevant (cues other than the message itself) (Chen et al. 2009). However, the unimodel suggests only a quantitative difference, not a qualitative difference, in the influence of different information. In other words, the unimodel assumes that the processing of issue-relevant and issue-irrelevant information shares the same route (individuals subjectively decide which information qualifies as the basis for their persuasion-based decisions), and both have the same enduring effect on an individual's decision. Consistent with previous research on the influence of business plans on venture capitalists' funding decisions (Chen et al. 2009), we believe that the unimodel better explains backers' decision-making processes because it emphasizes the subjectivity and equality of the information basis and parsimoniously captures the persuasion process. In the context of crowdfunding, a backer's funding decision is determined by what the backer believes to be the basis for his/her judgment. For example, if a backer personally knows the project owner, he/she may rely less on the project description itself and use his/her personal experience as the basis for a funding decision; otherwise, the backer may be more likely to use the project description as the basis for the decision. In addition, both persuasion and funding decisions in crowdfunding are not a "one-time" thing. Backers usually get to know a project at different times; Kickstarter provides backers with tools (web pages) to monitor projects they have backed (Kickstarter 2014a), and backers can re-visit the project descriptions and get "influenced" throughout the campaign. More importantly, Kickstarter allows backers to make and change their decisions (contribute, cancel, or re-contribute) anytime before the campaign ends (Kickstarter 2014a). In other words, the project description is accessed by backers at different time points (or multiple times by a backer), and their funding decisions can be formed at any time point before the campaign ends. However, the ending time of a campaign is fixed and is the same for every backer; this setting makes the enduring effect of influence less meaningful in the context of crowdfunding. In summary, the unimodel of persuasion provides us with theoretical guidance regarding the information sources of potential antecedents of funding decisions, without dealing with the subtle details of the influencing process.

In this study, the potential antecedents of funding success are investigated at the aggregated level (the backer population), not at the individual backer level. The unimodel of persuasion is an individual-level theory and mainly links influential antecedents to an individual backer's funding decision. Following the previous literature (Baum et al. 2001; Baron 2008; Chen et al. 2009), we extend this link to include funding success at the aggregated level. The unimodel suggests that an antecedent can have a strong influence on a backer's funding decision. We argue, however, that if an antecedent has an influence on enough backers' funding decisions, 3 then, at the aggregated level, these funding decisions will lead to a higher likelihood of funding success. Previous persuasion and venture literature have evidenced a link between entrepreneurs' traits and venture success and growth (Baum et al. 2001; Baron 2008; Chen et al. 2009). In particular, using the unimodel of persuasion, Chen et al.
(2009) find that the affective and cognitive passion revealed in traditional business plans is positively associated with venture success. They further explain that these traits can make "entrepreneurs more persuasive," and thus "these entrepreneurs had a higher probability of achieving success in new ventures" (Chen et al. 2009, p. 201). The argument we use to extend the link is also consistent with the marketing and advertising literature, in which persuasion (or response) occurs at the individual level but overall success (of marketing and advertising) is evaluated at the aggregated level (e.g., market-level sales) (Sun et al. 2010; Venkatraman et al. 2014).

Methodology

Existing research on crowdfunding success is generally interested in evaluating the performance (predictive accuracy) of different models, assuming each model uses the same set of antecedents. For example, Greenberg et al. (2013) evaluate the performance of various decision tree algorithms and support vector machines with different kernel functions. Specifically, they evaluate the performance of radial basis, polynomial and sigmoid kernel functions with varying costs for support vector machines. For decision trees, they further evaluate different learning algorithm variations, such as J48 Trees, Logistic Model Trees, Random Forests, Random Trees and REPTree. They then choose the highest-performing set of algorithms and boost them using the AdaBoost algorithm to see if accuracy is improved. In other words, existing research focuses on the model level, trying to find the best models with optimized parameters to achieve the highest performance. Our study, on the other hand, focuses on the antecedent level: it tries to identify additional antecedents from a theoretical perspective (i.e., the unimodel of persuasion) and evaluates whether they have incremental power to predict funding success. These additional contributing antecedents can then be applied to different predictive models.

Identifying exemplary antecedents of funding success

This study introduces the unimodel of persuasion into the crowdfunding domain. However, the primary purpose of this study is not to test the unimodel theory, but to use the theory as guidance and identify potential antecedents of funding success beyond basic project properties. In addition, this study does not aim to identify a complete set of antecedents from project descriptions; rather, we use exemplary antecedents to demonstrate that the unimodel can facilitate our understanding of the persuasion process and uncover potential antecedents. We choose exemplary antecedents based on the following criteria: 1) the antecedent must be closely related to the crowdfunding context; 2) the antecedent must be aligned with the unimodel of persuasion; 3) the antecedent can be reliably extracted or calculated automatically; and 4) the antecedent must be widely used in the literature. The unimodel of persuasion suggests that backers' funding decisions are influenced by the content of the project description (the issue-relevant message) and by cues other than the project description itself (issue-irrelevant messages) (Chen et al. 2009). Since the project description of a crowdfunding project shares similar content and roles with traditional business plans, we identify potential antecedents based on previous research on traditional business plans (or investment proposals).
For the content of the project description, research on traditional investment proposals finds that language use can positively influence business angels' decisions and increase the likelihood of being funded (Parhankangas and Ehrlich 2014). So we first identify three exemplary antecedents based on the language use of the project description through a content analysis. Similarly, for the cues other than the project description itself, research on traditional business plans finds that entrepreneurs' traits such as tenacity, proactivity and passion are associated with venture success and growth (Baum et al. 2001; Chen et al. 2009). So we identify two exemplary antecedents based on project owners' traits. These exemplary antecedents are discussed in more detail below.

The three exemplary antecedents identified from the language use of the project description are length, readability, and tone. Length captures the amount of information the project owner provides. Since crowdfunding projects typically involve unproven technologies and unfinished products and services, there is little external information regarding the factual evidence pertaining to their final products and quality. Project owners are thus encouraged to provide sufficient information for backers to evaluate the project, increase backers' confidence, and earn their trust (Kickstarter 2016). So we posit that length is a potential antecedent of funding success. Readability (not legibility) captures the ease of understanding of the project description. Because of its importance for effective communication, readability has been advocated by government, business, and other organizations. For example, the U.S. Department of Defense uses the Reading Ease test as the standard test of readability for its documents and forms, and Florida requires that the readability of life insurance policies be greater than a set score (Wikipedia 2015). So we posit that readability is also a potential antecedent of funding success. Lastly, tone captures the general attitude used by project owners to describe their products or services. Research on business plans has evidenced that entrepreneurs use moderately positive language in order to attract investors (Parhankangas and Ehrlich 2014). So we believe that tone could also be a potential antecedent of funding success.

The two potential antecedents identified from project owners' traits are past expertise and past experience. These two traits demonstrate the competence and trustworthiness of the project owner and increase the likelihood of funding success. Because of potential costs, backers very often do not want to support a project that is unlikely to succeed (Li and Duan 2014). These costs can be associated with a backer's disutility from having his or her money locked in the project or the potential risk that rewards cannot be delivered as promised. Past expertise captures the achievements and successes of previous project-creating activities. Higher expertise generally leads to a higher likelihood of funding success. Past experience captures project owners' previous backing and creating activities on the crowdfunding platform. On the one hand, backing a project will make a project owner think from a backer's perspective, and thus gain a better understanding of the information needs of backers. This experience may increase his/her competence to create a better project description.
On the other hand, research has evidenced a reciprocity effect in crowdfunding: project owners with more backing activities are more likely to receive reciprocal backing from the project owners they backed (Zvilichovsky et al. 2015). In other words, project owners can accumulate social capital by backing projects and at least partially cash it out when they raise funds for their own projects. In either case, past backing activities are likely to increase funding success. Additionally, project owners can learn from both the successful and the failed projects they created in the past by enhancing strengths and improving weaknesses. As a result, past creating activities are also likely to impact funding success. In summary, both past expertise and past experience are potential antecedents of funding success.

Predictive model and measures

Because each project can be classified as either a success (reaching the funding goal when the campaign is completed) or a failure (not reaching the funding goal when the campaign is completed), we build a logistic regression model to study the influence of project descriptions on the success of a funding project. We use a logistic regression model, instead of other binary classification models, for the following reasons. First, as discussed above, our primary purpose is to identify additional contributing factors, not to evaluate model performance, so the model is selected for its capability to evaluate individual factors. Second, we want to analyze each newly introduced factor quantitatively and see whether it has significant and incremental predictive power for funding success. The significance tests of the coefficients of the logistic regression model provide us with such a capability. Third, the logistic regression model has been widely used in binary classification and offers performance competitive with other traditional classification models such as Support Vector Machines (Hua et al. 2007; Abrahams et al. 2014).

All variables or antecedents used in this study are organized and described in Table 1.

Table 1 Variables used in this study

Newly introduced antecedents
  Content of project description:
    length          The amount of information in the project description (number of words)
    readability     The ease of understanding of the project description, measured using a readability index
    tone            The ratio of positive and negative words in the project description
  Trustworthy cues (project owner traits):
    pastExperience  The number of projects previously created and backed by the owner
    pastExpertise   The rate of projects successfully funded

Previously identified antecedents (control variables)
    goal            Project goal, the amount the owner seeks to raise using crowdfunding
    duration        The number of days for which a project accepts funds
    FBConnected     Whether the project owner has linked or created a Facebook page for the project
    FBFriends       The number of Facebook friends the project owner has
    numImages       The number of images embedded in the project page
    numVideos       The number of videos embedded in the project page
    rewards         The number of pledge levels
    year            The year a project launched, not the year the project completed
    category        The category of the project

As shown in Table 1, the potential antecedents that influence funding success are arranged into two categories, namely, previously identified and newly introduced antecedents. The newly introduced antecedents are further organized into the content and trustworthiness cues of project descriptions, according to the unimodel of persuasion. In order to evaluate the incremental contribution of the newly introduced antecedents in determining funding success, we control for other major antecedents of funding success identified in previous research, such as project category, goal and duration (Greenberg et al. 2013; Mollick 2014). Our model is shown below:

logit(Success) = β0 + β1·length + β2·readability + β3·tone + β4·tone^2 + β5·pastExperience + β6·pastExpertise + β7·goal + β8·duration + β9·rewards + β10·numImages + β11·numVideos + ε

where Success takes a value of either 0 or 1, indicating whether a project is successfully funded.
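To make the specification concrete, the following is a minimal sketch under stated assumptions rather than the paper's actual pipeline: it assumes a pandas DataFrame loaded from a hypothetical file, with one row per completed project, a binary success label, and columns named as in Table 1 (the text-based measures are constructed as described next). Note that scikit-learn's LogisticRegression is regularized by default and is used here only for prediction; the coefficient significance tests reported in the paper would come from a conventional statistics package such as statsmodels.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per completed project; 'success' is the binary label and the
# remaining columns are constructed as described in the text.
df = pd.read_csv('kickstarter_projects.csv')  # hypothetical file name
df['tone2'] = df['tone'] ** 2                 # quadratic term from the model

features = ['length', 'readability', 'tone', 'tone2', 'pastExperience',
            'pastExpertise', 'goal', 'duration', 'rewards',
            'numImages', 'numVideos']

# 10-fold cross-validated accuracy and F-measure, as one instance of the
# N-fold evaluation the paper reports.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_validate(model, df[features], df['success'],
                        cv=10, scoring=('accuracy', 'f1'))
print('accuracy:  %.3f' % scores['test_accuracy'].mean())
print('F-measure: %.3f' % scores['test_f1'].mean())
```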
Following the previous literature, we measure the amount of information (length) by the number of words in the project description (Wang et al. 2011; Zhou et al. 2015). We measure the ease of understanding (readability) by calculating a readability score for the project description. Specifically, we use the Gunning fog index (hereafter Fog index) to measure the readability of the project description. The Fog index was developed by Robert Gunning ("Gunning fog index," 2015) and has been widely used in the literature (F. Li 2008; Wang et al. 2011; Zhou et al. 2015). The Fog index proposes that, everything else being equal, more syllables per word or more words per sentence make a document harder to read. For example, texts such as formal financial reports generally have a Fog index greater than 16 (F. Li 2008), and documents for a wide audience generally need a Fog index of less than 12 ("Gunning fog index," 2015). Note that a higher value of the Fog index corresponds to a lower level of readability, so we intentionally reverse the sign of the Fog index to reflect the direction. Specifically, readability is calculated as follows:

readability = -Fog index = -0.4 × [(number of words / number of sentences) + 100 × (number of complex words / number of words)],

where complex words are words with three or more syllables. Following the tone management literature (F. Li 2008; Davis et al. 2012; Huang et al. 2013), we measure tone as the percentage difference of positive and negative words in the project description. Specifically, it is calculated as follows:

tone = (number of positive words - number of negative words) / (number of positive words + number of negative words).

The positive and negative words used in the formula are defined in the Harvard-IV dictionary, which has been widely used to measure the tone reflected in textual content (Davis et al. 2012; Huang et al. 2013). Because of the nature of marketing and persuasion, we expect project descriptions to usually have a net positive tone; in other words, more positive words than negative words are used in project descriptions. However, previous research has found that venture capitalists prefer a "moderate use of positive language" and evidenced a curvilinear relationship between positive language and funding success, so we add a quadratic term of tone to the model. As discussed above, since project owners can learn and benefit from both their backing and creating activities, we measure past experience (pastExperience) by the number of projects previously either created or backed by the owner before the one being investigated. Finally, we measure past expertise (pastExpertise) by the ratio of the total funds raised to the total goals required across all previous projects that ended before the one being investigated. For example, assume an owner has created three projects; when we investigate his past expertise at the time he is creating the third project, we should only consider the total funds raised and the total funds required for the first two projects, assuming both have been completed.
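The text measures can be computed directly from a project description. The sketch below is illustrative, not the authors' implementation: the sentence splitter and syllable counter are rough heuristics, the Harvard-IV positive/negative word lists must be supplied by the caller as Python sets, and the tone normalization reflects our reading of "percentage difference" above.

```python
import re

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def sentences(text):
    # Naive split on terminal punctuation; a production implementation
    # would use a proper sentence tokenizer.
    return [s for s in re.split(r'[.!?]+', text) if s.strip()]

def syllables(word):
    # Crude vowel-group heuristic, adequate for a Fog-index approximation.
    return max(1, len(re.findall(r'[aeiouy]+', word)))

def length(text):
    return len(words(text))

def readability(text):
    ws, ss = words(text), sentences(text)
    complex_words = [w for w in ws if syllables(w) >= 3]
    fog = 0.4 * (len(ws) / max(1, len(ss))
                 + 100.0 * len(complex_words) / max(1, len(ws)))
    return -fog  # sign reversed so that higher values mean easier to read

def tone(text, positive_words, negative_words):
    # positive_words / negative_words: Harvard-IV word lists loaded as sets.
    ws = words(text)
    pos = sum(w in positive_words for w in ws)
    neg = sum(w in negative_words for w in ws)
    return (pos - neg) / max(1, pos + neg)  # assumed normalization

def past_expertise(prior_projects):
    # prior_projects: (funds_raised, goal) pairs for campaigns already ended.
    raised = sum(r for r, g in prior_projects)
    goals = sum(g for r, g in prior_projects)
    return raised / goals if goals else 0.0
```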
Data sample

We collected real crowdfunding project data from Kickstarter.com to carry out our empirical analysis. We use Kickstarter mainly for two reasons. First, Kickstarter is a popular and prevalent crowdfunding platform. Founded in 2009, Kickstarter has become one of the largest crowdfunding platforms in the world. It has more than nine million backers, and three million of them are repeat backers. As of today, more than 93,000 projects have been successfully funded, and more than two billion dollars have been raised. Second, the majority of research on crowdfunding uses Kickstarter data (Greenberg et al. 2013; Li and Duan 2014). This makes the comparison of our results against those previously reported more meaningful and reliable.

Kickstarter does not provide a public API (Application Program Interface), and non-live projects (e.g., completed, canceled, etc.) are not directly searchable. However, live projects are organized by category and are convenient to navigate. Our data collection mainly consists of two steps. First, starting in late 2012, we scraped the "live" projects from the Kickstarter website by using a specially developed crawler. The crawler visited the website every other day and captured all newly launched live projects. Second, we scraped project data from earlier years based on the live projects already collected. A project's profile page contains the historical projects created and backed by its owner, and the comments and updates contain backers' information, which leads to other projects they backed. Similar to the approach used by Zvilichovsky et al. (2015), we used the "live" projects as seeds and recursively iterated from projects to backers and from backers to projects until the number of newly discovered projects per iteration converged. This step was performed only occasionally, once a sufficiently large number of new projects had been scraped.

Our data sample covers all the projects from 2009 to November 2014. We excluded those funding projects that were still ongoing (6559 projects). In addition, we excluded projects that were canceled (15,116 projects), purged (36 projects), or suspended (584 projects). Purged and suspended projects were usually handled by Kickstarter according to its policy or terms of use. Projects were canceled by owners for a variety of reasons. It is possible that the majority of projects were canceled because they were unlikely to reach their funding goals and project owners wanted to avoid a dismal end (stonemaiergames.com 2013). A brief examination also finds that many projects were canceled because they were simply test projects, with unreasonably low funding goals (e.g., $1, $2) or durations (e.g., 1 day). Some projects were canceled because project owners wanted to make necessary improvements and re-launch the project, or because "amazing partners reached out" during the campaign (needwant.com 2016). It is also interesting that some projects were canceled even after being successfully funded, because of unforeseen changes on the side of either project owners or backers (themarysue.com 2016). We did not treat canceled projects as failed projects because they are not typical failures. We also followed a previous study (Mollick 2014) and removed projects with a funding goal below $100 (1982 projects) or above $1,000,000 (294 projects), because such extremely small or large projects may have characteristics very different from the majority of projects. We finally removed projects with fewer than 100 words in their descriptions because, upon inspection, they are either incomplete or represent non-serious efforts to raise funds. Our final data sample consists of 151,752 projects across all 15 funding categories. These steps are summarized in Table 2.
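To make the estimation step concrete, here is a minimal sketch, assuming the data sit in a flat file with one row per completed project and columns named as in Table 1 (the file name is a hypothetical placeholder); category and year controls could be added as C(category) and C(year) terms.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("kickstarter_projects.csv")  # hypothetical file name
df["tone_sq"] = df["tone"] ** 2  # quadratic tone term

model = smf.logit(
    "success ~ length + readability + tone + tone_sq"
    " + pastExperience + pastExpertise + goal + duration"
    " + rewards + numImages + numVideos",
    data=df,
).fit()
print(model.summary())  # coefficient estimates and significance tests
```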
Table 3 shows the descriptive statistics for the variables used in this paper. On average, the projects in our sample have a funding goal of $15,126, with half of them below $5000. The average (median) campaign duration is 34 (30) days; 47% of projects have at least one image, with an average (median) number of 4.67 (1); and 80% of projects have at least one demo video, with an average (median) of 1.18 (1). The results also show that the descriptions have an average (median) length of 646 (482) words, are generally positive (with a net positive tone), and are easy to understand (with a Fog index around 13). In addition, although owners usually have some past experience, with several projects backed or created, their past expertise is very limited. On average, they have raised 22 percent of the required funds on their previous projects, if any (more than 75% of projects are created by first-time owners).

In the following sections, we present our empirical analysis results from three aspects. First, we briefly discuss the current status of crowdfunding on Kickstarter from different angles such as category and year. Second, we evaluate the incremental influence of the newly identified antecedents and report the practical improvement when they are included to predict funding success. Third, we investigate the timeliness of project data and provide evidence that old project data is gradually becoming less relevant and losing predictive power for newly created projects.

Overall funding status by category

Tables 4 and 5 present the status of crowdfunding by category on Kickstarter. Overall, the success rate of our sample projects is 46%, which is comparable to that reported by Kickstarter. Most of the basic project properties vary across the 15 categories. Table 4 shows that, in terms of project numbers, the top three categories are Film & Video, Music, and Publishing; the bottom three are Dance, Crafts, and Journalism. However, a category attractive to owners is not necessarily also attractive to backers. For example, Dance is one of the three categories least attractive to owners but has the highest success rate among all categories, possibly because of the low funds required and less market saturation (competition). On average, a project seeks to raise $14,541. Technology and Games require the highest funds, $39,073 and $27,520, respectively; Music and Crafts require the least, $7289 and $5794, respectively. It is also reasonable to notice that, generally, project categories requiring higher (lower) funds have lower (higher) success rates. Mollick (2014) reports that crowdfunding projects mostly succeed by narrow margins but fail by large amounts. We find this is more noticeable in popular categories with a significant number of projects. For example, untabulated results show that only 3% of the funded projects in the Film & Video category have a funding ratio greater than 2 (two times the required funds), while that percentage is around 25% in the Comics and Design categories. Duration has little variation among categories, ranging from 33 to 36 days, possibly because Kickstarter has a default duration of 30 days and limits it to at most 60 days (it was 90 days before June 2011). Surprisingly, some projects have durations of less than 5 days. Further investigation finds they are mostly small projects, with funding goals of less than $500. Table 5 presents the data related to the new antecedents derived from project descriptions.
Projects in Games have the longest descriptions (1,124 words) and those in Music have the shortest (453 words). It is also worth noting that the overall readability is high (with a Fog index around 13). A possible reason is that, compared with traditional business plans, project descriptions are more likely to be written in informal language. Both past experience and past expertise vary greatly across categories, reflecting the different popularity of, and competition among, categories.

Overall funding status by year

Tables 6 and 7 present the status of crowdfunding by year on Kickstarter. We find the project properties are relatively more stable over time than across categories. The results show a clear trend that the number of projects, the number of backers, and the funding goals are all increasing over time, though the growth rate is decreasing. This is reasonable because, as more users join Kickstarter and become more familiar with the platform, more projects are created. In addition, the mutual trust between project owners and backers increases with the familiarity and maturity of the Kickstarter platform; thus, more expensive projects are likely to be funded in later years. The results also show that durations before 2011 are higher, which is consistent with the fact that Kickstarter allowed a duration of up to 90 days before June 2011 but reduced the limit to 60 days afterward. The results in Table 7 show that, although the readability and tone of project descriptions are relatively stable, project owners are disclosing more information through project descriptions over time, with an exception in 2014. Specifically, the length of project descriptions increases from 405 to 718 words. Another interesting finding is the increasing trend of both past experience and past expertise. As shown in the results, the value of past experience increases consistently from 3.65 in 2009 to 8.15 in 2014, and the value of past expertise also increases consistently from 0.03 in 2009 to 0.32 in 2014. This is reasonable because, over time, project owners gain experience and expertise by backing and creating more projects, making their projects more persuasive and thus more likely to raise funds.

Logistic regression results

In order to investigate whether the newly identified antecedents have an incremental influence on funding success, we run two logistic models. Model A represents the mainstream model, which only includes antecedents identified by previous research from basic project properties (i.e., the control variables); Model B represents the proposed model, which also includes the exemplary antecedents identified by this study. The results are reported in Table 8 below. As shown in Model B (the proposed model), consistent with the unimodel of persuasion, we find that antecedents identified from both the content of project descriptions (length, readability, and tone) and the owner traits behind project descriptions (pastExperience and pastExpertise) are significantly associated with funding success. Specifically, we find that for every 1% increase in length, the log odds of funding success increase by 0.38; the corresponding numbers are 0.68 and 0.58 for a 1% increase in past experience and past expertise, respectively. We find tone is positively associated with funding success. However, the quadratic term of tone has a negative coefficient, which indicates a curvilinear relationship between tone and funding success. In other words, moderate use of positive tone can demonstrate project owners' confidence and optimism and thus increases the likelihood of success.
Excessive use of positive tone, however, may weaken a project's credibility and have an adverse effect. These results are consistent with those reported by Parhankangas and Ehrlich (2014). Finally, we find readability is negatively associated with funding success, with a 1% increase reducing the log odds by 0.05. This is puzzling, because we expected a positive association: a more readable project description is easier for potential backers to understand. However, as discussed in the descriptive statistics section, project descriptions are mainly written in informal language with a low Fog index (easy to understand). Under these circumstances, formally written project descriptions may signal the preparedness and professionalism of project owners (Chen et al. 2009), thus increasing backers' positive perceptions and the likelihood of funding success. Similar results have been reported by a previous study (Luo et al. 2013). The results for the other antecedents are consistent between Models A and B. For example, a higher funding goal and a longer campaign duration are negatively associated with funding success; a higher number of reward levels has a positive influence on funding success; and, as expected, higher numbers of Facebook friends, images, and videos are also positively associated with funding success.

Evaluation of prediction performance

We first compare the prediction performance of our proposed model (Model B) with that of the mainstream model (Model A) discussed above. We use the entire data sample to train (via the logistic models) and test the prediction performance. In addition to the accuracy rate, we also use the F-measure to evaluate prediction performance. The F-measure considers both precision and recall and thus provides a balanced performance evaluation (Ferri et al. 2009; Sokolova and Lapalme 2009; Powers 2011). The results are reported in Table 9. As shown in Table 9 Panel B, the proposed model has a prediction accuracy (F-measure) of 73.09% (70.31%), while the mainstream model has a prediction accuracy (F-measure) of 69.34% (66.20%). This indicates the proposed model performs better at predicting funding success. In addition, the confusion matrices of both models further demonstrate that the proposed model has higher true positive and true negative rates (69.3% and 76.32%) than those of the mainstream model (65.28% and 72.79%). Correspondingly, the proposed model has lower false positive and false negative rates (23.68% and 30.7%) than those of the mainstream model (27.21% and 34.72%). These results show that the newly identified antecedents have incremental predictive power.

However, training and testing a predictive model on the same data sample is not good practice. A recommended practice is to use different datasets to train and test the model (Bengio and Grandvalet 2004). In this step, besides the proposed and mainstream models, we include a third model, called the baseline model. The baseline model is based on informed guessing: we classify each project as a "success" or a "failure" simply according to the overall probability of funding success. For example, if 40% of projects are successfully funded, the overall probability of funding success is 40%; therefore, each project will be classified as a "success" with a probability of 40% and as a "failure" with a probability of 60%. We then calculate the prediction performance by comparing projects' assigned status values (i.e., success or failure) with their true status values.
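A minimal sketch of the informed-guessing baseline and the evaluation metrics, assuming y_train and y_test are 0/1 arrays of true funding outcomes (hypothetical names):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

def informed_guessing(y_train, n_test):
    # Predict 'success' with probability equal to the overall
    # success rate observed in the training data.
    return (rng.random(n_test) < y_train.mean()).astype(int)

y_pred = informed_guessing(y_train, len(y_test))
print("accuracy:", accuracy_score(y_test, y_pred))
print("F-measure:", f1_score(y_test, y_pred))
```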
For each predictive model, we employ an N-fold cross-validation test (with N set to 3, 5, and 10) to evaluate prediction performance. The N-fold cross-validation test has been widely used to validate classification performance (Bengio and Grandvalet 2004; Li 2008). For each N, our data sample is randomly divided into N parts; N experiments are then performed, with N−1 parts used as training data for the predictive model to classify the remaining part. The average prediction performance is reported for the given N. The results of the N-fold cross-validation tests are reported in Table 10. They show that our proposed model achieves the highest performance, with an average accuracy rate (F-measure) around 73% (70%). The average accuracy rate (F-measure) of the mainstream model is around 69% (66%), and that of the baseline model around 59% (57%). This indicates that the proposed model improves on the baseline model based on informed guessing by roughly 14 percentage points, and on the mainstream model based on basic project properties by 4 percentage points. The differences among these three models are statistically significant under t-tests. More importantly, considering that the mainstream model beats the baseline model by only 9 percentage points (66% vs. 57%), the 4 additional percentage points (70% vs. 66%) gained by our proposed model are fairly significant, representing 44% (i.e., 4 divided by 9) of the mainstream model's advantage over informed guessing. Together, these results show that our newly introduced variables have significant and practical impacts on the funding success of projects.

Both accuracy and the F-measure are designed to capture the overall performance of predictive models. Sometimes, however, we need more specific information to make funding decisions. This is especially true when we evaluate prediction performance from the perspective of project owners. Because of limited time and resources, project owners may not be interested in the overall success rate. Instead, they care more about whether their projects, if predicted as successes, will truly be successfully funded. In other words, they want a predictive model with a high true positive rate and a low false positive rate. Although this information is contained in the confusion matrices, a useful visual illustration is the ROC (Receiver Operating Characteristic) curve, which plots the true positive rate against the false positive rate at various threshold settings (Fawcett 2006). The results are illustrated in Fig. 1. As shown in Fig. 1, compared to the mainstream model, our proposed model has an ROC curve that is more convex toward the upper left. This indicates that the proposed model has a higher true positive rate and a lower false positive rate, making it more useful for project owners who want to adjust their project settings and evaluate the likelihood of funding success before launch.
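The cross-validation and ROC comparisons can be sketched as follows, assuming X and y hold the feature matrix and outcomes for a given model specification (hypothetical names); scikit-learn is used here purely for illustration, not because it was the tool used in the study.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate

clf = LogisticRegression(max_iter=1000)
for n in (3, 5, 10):
    cv = StratifiedKFold(n_splits=n, shuffle=True, random_state=0)
    scores = cross_validate(clf, X, y, cv=cv,
                            scoring=("accuracy", "f1", "roc_auc"))
    print(f"{n}-fold: acc={scores['test_accuracy'].mean():.3f} "
          f"F={scores['test_f1'].mean():.3f} "
          f"AUC={scores['test_roc_auc'].mean():.3f}")
```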
Predictive power of project data over time

In the previous subsections, in order to ensure comparability, we followed previous studies and conducted our analysis while ignoring the temporal (i.e., time) information of projects. In other words, all projects were put in a single pool and predictions ran in both directions: older project data were used to predict newer projects and, at the same time, newer projects were used to predict older projects. This can be clearly seen in our N-fold cross-validation test, in which the total sample is randomly divided into N parts without considering the creation time of each project. When we predict the funding success of a project in practice, however, the only project data available are the historical data from before the project being predicted, and it seems unreasonable to use future project data to train the predictive model and predict the funding success of past projects. On the other hand, as a new crowdfunding platform, Kickstarter has experienced great changes since its inception, in aspects such as system functions and platform policy. In addition, both backers and owners have changed their behaviors greatly through their use of the crowdfunding platform. Furthermore, the numbers of users and projects have grown drastically over time, which changes the competitive environment of crowdfunding. These changes make us wonder whether project data from earlier years have become "out of date" and have less predictive power for future project success, and whether the subsample of project data from right before the projects being predicted contains the most relevant information and has higher predictive power.

In order to answer these questions, we slice the whole sample (2009 to 2014) by month, construct narrower but still sufficiently large subsamples (e.g., 6 or 12 months), and analyze the effectiveness of the project data in different subsamples for prediction performance. We construct six subsamples, each consisting of one year's project data, from 2009 to June 2014 (the last subsample contains only six months' data); each of these subsamples is used as training data to predict the funding success of projects between July and November of 2014 (our data sample ends in November 2014). Figure 2 presents the prediction performance (F-measure) obtained by using each year's data from 2009 to 2014. Consistent with our conjectures, we find that, overall, the prediction performance increases over time for both the mainstream and proposed models. We remove the informed-guessing model from this analysis because its performance only depends on the success rate of each year, which increases and then decreases, as evidenced in Table 6. The figure indicates two larger jumps, in 2010 and 2014. Besides the fact that the project data of 2009 are the oldest relative to the projects of 2014, another possible reason is that, since Kickstarter was founded in 2009, the project data from 2009 may contain more noise and inconsistency. Together, these two reasons may explain why the prediction performance using the data of 2009 is much lower than for other years. The performance jump in 2014, on the other hand, may mainly reflect the timeliness of the data, because the first half-year's project data are used as training data to predict the projects of the second half-year. The results also show that, from 2010 to 2013, although the improvement is relatively small, the overall trend of prediction performance is clearly increasing. Together, these results provide evidence that historical project data are gradually becoming less relevant and losing predictive power for newly created projects. We also replaced the F-measure with the accuracy rate and found a similar pattern.
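A sketch of this temporal analysis, assuming df has an ISO-formatted launch_date column, a 0/1 success column, and FEATURES is the list of predictor columns (all hypothetical names):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Hold-out: projects launched July-November 2014.
test = df[(df.launch_date >= "2014-07-01") & (df.launch_date <= "2014-11-30")]
for year in range(2009, 2015):
    end = f"{year}-06-30" if year == 2014 else f"{year}-12-31"
    train = df[(df.launch_date >= f"{year}-01-01") & (df.launch_date <= end)]
    clf = LogisticRegression(max_iter=1000).fit(train[FEATURES], train.success)
    print(year, round(f1_score(test.success, clf.predict(test[FEATURES])), 3))
```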
Conclusions and discussions

The success of crowdfunding warrants research attention, and we expect an increasing use of crowdfunding in future venture investment. Using a large dataset obtained from Kickstarter, a popular crowdfunding platform, we examine the influence of project descriptions on funding success. To do so, we rely on the unimodel of persuasion and identify five exemplary antecedents from project descriptions: three related to the content of project descriptions and two related to the owner traits behind project descriptions. We then investigate the influence of these antecedents using a logistic model. Our results show that the proposed model can predict funding success with an accuracy rate of 73% (or 70% in F-measure), which represents an improvement of roughly 14 percentage points over the baseline model based on informed guessing and 4 percentage points over the mainstream model based on basic project properties (or 44% of the mainstream model's advantage over informed guessing). These results together show that the antecedents identified from project descriptions have significant and practical impacts on the funding success of projects. We also investigate the timeliness of project data and provide evidence that old project data is gradually becoming less relevant and losing predictive power for newly created projects.

This paper contributes to the crowdfunding literature in several ways. First, to the best of our knowledge, this study is among the first to explore crowdfunding with a focus on the information content of project descriptions. Second, this paper is also among the first to introduce communication theory in general, and persuasion theory in particular (i.e., the unimodel), into the crowdfunding context. Using content analysis, we measure properties of project descriptions and investigate their impacts on funding success. The newly identified antecedents from project descriptions can then be used in different predictive models to enhance prediction performance. Third, the results reported in this paper highlight the importance of project descriptions and provide insights for project owners, helping them understand the influence of the antecedents and increase their funding success by properly balancing them. Fourth, existing predictive models of funding success usually employ overall accuracy to measure performance (Etter et al. 2013; Greenberg et al. 2013; Mitra and Gilbert 2014). Our proposed model is also evaluated with more balanced performance measures (i.e., the F-measure and the ROC curve) to better serve backers making funding decisions. Taken together, our results provide meaningful insights that help researchers, project owners, and backers better understand the importance of project descriptions and their influence on funding success.

This study is subject to several limitations. First, we conduct our study based on only one crowdfunding platform. There are other popular platforms (e.g., IndieGoGo) with different rules, which limits the generalizability of our results. Second, we mainly consider the information communicated between owners and backers through the crowdfunding platform (within-platform activities). There are many channels of "offline" communication and interaction between owners and backers (off-platform activities), such as media coverage, which are also critical to funding success. Third, we limit our study to the information content of project descriptions before project launch. The information content of project updates and comments after project launch is also important to funding success.
Fourth, we simply exclude canceled projects from our study, which may bias our results.

This study also provides valuable opportunities for future research. First, some of the limitations can be addressed in future research. Future research can compare different crowdfunding platforms and gain more insights. Kickstarter and IndieGoGo have different rules regarding how funds raised are kept. These differences may lead to different motivations when selecting a platform and different strategies when preparing a campaign. This study can also be extended to the stage after a project is launched, providing real-time monitoring and suggestions to increase funding success for project owners. Second, future studies can examine the influence of project descriptions by using more advanced features, such as linguistic structures, or methodologies such as topic modeling. Third, our study is conducted at a broad level to identify antecedents beyond basic project properties. Future studies can focus on a single antecedent and provide a more thorough understanding. For example, Zvilichovsky et al. (2015) focus on past experience and find that there exists a reciprocity effect in crowdfunding. Similar studies can be conducted on other antecedents such as tone and readability. Fourth, with the fast growth of crowdfunding projects, it is becoming increasingly difficult for a project owner to attract enough backers and for a backer to choose a suitable project. Future research can work on bringing the two types of participants closer, by identifying potential backers for owners or recommending potential projects to backers. Fifth, the main purpose of this study is to highlight the important influence of project descriptions and identify exemplary antecedents. In order to further improve prediction performance, future research can incorporate the new antecedents into other classification models, such as decision trees and Support Vector Machines (SVM), which may provide better model calibration. Sixth, canceled projects are excluded from this study, which may bias our results. Canceled projects account for around 8% of our data sample, and they are mainly projects "perceived as failed" by their owners. Kickstarter allows project owners to cancel a project during, and even after, the campaign. However, we know little about the influence of cancellation on project owners, backers, and general crowdfunding practice. Last but not least, the theory and methodology used in this study can be extended and applied to the equity-based crowdfunding domain. We encourage future research to explore these areas and advance our knowledge of crowdfunding.
Ethnomycological studies on some macro-fungi in Rupandehi District, Nepal: identification and documentation of nutritional potential and indigenous knowledge

The study area occupies 154.75 hectares of land and lies within a narrow altitude range, between 110 m and 165 m above sea level, in tropical deciduous riverine forest. Amanita caesarea, A. chepangiana, A. pantherina, Agaricus augustus, Coprinus comatus, C. plicatilis, Macrolepiota fuliginosa, M. rhacodes, Russula emetica, R. foetens, R. nigricans, Scleroderma bovista, S. citrinum, Termitomyces clypeatus and T. eurhizeus are found to be dominant. The collected samples represented 27 species of Basidiomycetes belonging to 6 orders, 13 families and 18 genera. The dried specimens are housed in the Tribhuvan University Central Herbarium, Kirtipur, Kathmandu, Nepal. The area embraces many mycophagous ethnic communities. The mycoelements prevailing in this area need sustainable conservation and utilization.

The investigation and study of mushrooms in Nepal started in the 19th century (Lloyd, 1808; Berkeley, 1838). Since then, several papers have been published and several botanical investigations have been carried out. Among these, very few cover macrofungi from western Nepal. This is a preliminary report on an ethnomycological investigation carried out at Baunnakoti Community Forest in Rupandehi District. The area has not been investigated previously. This paper highlights the indigenous knowledge of the wild edible mushrooms in the district. Presently, 27 species of Basidiomycetes belonging to 6 orders, 13 families and 18 genera are reported from Baunnakoti Community Forest, which is situated in a tropical climate.

Materials and methods

Altogether, 27 mushroom samples were collected, and local informants were interviewed. An indigenous knowledge survey was conducted from 15 to 31 May 2010, and specimens were collected from 1 June to 31 October 2011. The Participatory Rural Appraisal (PRA) technique was adopted with the local people, aimed at gathering information largely on nutritional aspects. Data were obtained using a combination of semi-structured questionnaires, participatory discussions and field observations. Mushroom samples were photographed in their natural habitat, and their morphological characters were noted. The samples were well dried and packed in wax paper bags with proper tag numbers. The habitat, including ecological parameters viz. altitude, vegetation composition, soil type, soil pH, soil moisture, humidity and temperature, was recorded. The paper bags were brought to the Central Department of Botany, Tribhuvan University, for further microscopic examination.

Results

During the field survey, altogether 27 species of Basidiomycetes from 6 orders, belonging to 13 families and 18 genera, were recorded with their brief descriptions (Annex 1).
Indigenous knowledge and therapeutic use

On the basis of the information collected, 92.5% of the mushrooms were found to be used as food, 5.5% as medicine, 1.5% for taste and flavor, and 0.5% as tonic. The food value of wild edible mushrooms was found to be of considerable significance at the study sites. The consumption data revealed that mushrooms were mostly used as food by women (51%), followed by children (31%) and men (18%). People were found to have used these mushrooms as remedies for different types of diseases and ailments. Out of the 150 respondents, 30% had used them as a remedy for measles. Similarly, 24% had used them for the treatment of yellow fever, 20% for jaundice, 16% for inappetence, 4% for constipation, 4% for mumps, ear pain and cut wounds, 2% for skin diseases, 1.3% for muscular pain, and 0.6% for stomach pain. Their medicinal uses for the treatment of different types of diseases make them all the more significant for the people of the area.

Discussion

Wild edible mushrooms are not only an important source of food for local people but are equally used for medicinal purposes. The present survey of the macrofungi revealed that there are plenty of edible species of mushroom. The most common among them, such as Scleroderma bovista, T. clypeatus, T. eurhizeus and Volvorella volvacea, are collected, packed in sacks and carried to market for sale.

Among the 27 species identified, 15 are edible, 4 inedible, 4 poisonous, and 4 species possess medicinal value. Some of the edible species, such as S. bovista, T. clypeatus and T. eurhizeus, are also used for medicinal purposes. The medicinally important tropical polypore Pycnoporus cinnabarinus is used as a remedy for infectious disease (mumps), ear pain, etc. Scleroderma citrinum, a medicinal species, is also used as food. Schizophyllum commune, a cosmopolitan inedible species, is sometimes used for culinary purposes in food-deficit conditions. This species has religious value too, and is used as 'Sagun', i.e. good luck, in marriage ceremonies in the Newar community.

During the surveys, it was found that the populations of Macrolepiota fuliginosa, M. rhacodes, R. nigricans, T. clypeatus, T. eurhizeus and V. volvacea have been declining over the last two decades due to the deterioration of forest lands. Nevertheless, a notable number of species were found in abundance during sample collection. Being saprophytes, obligate symbionts, or partners in mycorrhizal associations, these macrofungi play an important role in increasing soil fertility in the forest through biodegradation and decomposition of the lignocellulose compounds of leaf litter. The litter debris of the vascular flora favours the regulation and maintenance of soil temperature and moisture for these macrofungi. The toxic species listed are Amanita pantherina, Coprinus plicatilis, Russula emetica and R. foetens.
Conclusion

The reported mushrooms occur in tropical to temperate belts throughout the nation. Extensive investigation is needed to determine their species richness, distribution pattern, species diversity index and ethnomycological uses. Some of the important macrofungi, such as Macrolepiota, Scleroderma, Termitomyces and Volvorella spp., need special attention and conservation against threats, to avoid their unmanaged and unscientific exploitation. Besides, their harvesting should be done in a scientific manner rather than by traditional methods. The mycoelements prevailing in this area need sustainable conservation and utilization.

[Fig. 2: A graph showing the total number of species (Tns) and percentage frequency of groups (SF%) of Basidiomycotina.]
Hidden invasion and niche contraction revealed by herbaria specimens in the fungal complex causing oak powdery mildew in Europe

Deciphering the dynamics involved in past microbial invasions has proven difficult due to the inconspicuous nature of microbes and their still poorly known diversity and biogeography. Here we focus on powdery mildew, a common disease of oaks which emerged in Europe at the beginning of the twentieth century and in which three closely related Erysiphe species are mainly involved. The study of herbaria samples, combined with an experimental approach to interactions between Erysiphe species, led us to revisit the history of this multiple invasion. Contrary to what was previously thought, herbaria sample analyses very strongly suggested that the currently dominant species, E. alphitoides, was not the species which caused the first outbreaks and was described as a new species at that time. Instead, E. quercicola was shown to have been present since the early dates of disease reports and to have been widespread all over Europe at the beginning of the twentieth century. E. alphitoides spread and became progressively dominant during the second half of the twentieth century, while E. quercicola was constrained to the southern part of its initial range, corresponding to its current distribution. A competition experiment provided a potential explanation of this over-invasion by demonstrating that E. alphitoides had a slight advantage over E. quercicola through its ability to infect leaves during a longer period of shoot development. Our study is exemplary of invasions involving complexes of functionally similar species, emphasizing that subtle differences in the biology of the species, rather than strong competitive effects, may explain patterns of over-invasion and niche contraction.

Introduction

The microbial component of biological invasions has been recognized rather late compared to plant and animal invasions (Desprez-Loustau et al. 2007; Mallon et al. 2015; Dunn and Hatcher 2015; Blackburn and Ewen 2017). However, the dramatic impact of diseases caused by pathogens of exotic origin in natural communities is well documented (Hatcher et al. 2012; Fisher et al. 2012). For example, the spread of the fungus Batrachochytrium dendrobatidis, which causes chytrid disease, has been shown to be a key factor in the worldwide decline of amphibian populations (Fisher et al. 2009). The global emergence of invasive microbial pathogens is now considered a major challenge in invasion science (Ricciardi et al. 2017). Microbes are characterized by a tremendous diversity, harboring numerous still undescribed species. In particular, the fungal kingdom has been estimated to include several million species, of which only a few percent have been formally described (Blackwell 2011; Hawksworth and Lücking 2017; Wu et al. 2019). The lack of diagnostic features for species identification before the advent of molecular and phylogenetic methods explains why many species, defined only on a morphological basis, were thought to be cosmopolitan (Taylor et al. 2000). The recognition of cryptic species within morphological species has now become extremely common (Fitt et al. 2006; Crous et al. 2016). Cryptic species often show different geographic distributions, ecology and pathogenicity (Taylor et al. 2000).
For example, the recent ash dieback observed in Europe, which was initially thought to be caused by the native species Hymenoscyphus albidus, was shown to be associated with a closely related and morphologically almost indistinguishable species of Asian origin. H. fraxineus has probably co-evolved with Asian ash, on which it causes little damage (Gross et al. 2014; Enderle et al. 2019). The introduced species H. fraxineus has now spread across almost the whole range of ash in Europe and has progressively outcompeted the native species, H. albidus, which can hardly be found in areas where the new disease has been reported (McKinney et al. 2012; but see Dvorak et al. 2015; Koukol et al. 2015). With increasing rates of introductions, invasions may involve not only interactions between native and introduced species but also between introduced species in the same area. Multiple or successive invasions by functionally equivalent or closely related species have started to be documented and investigated, especially in plants and animals (Rauschert and Shea 2012; Russell et al. 2014; Linzmaier et al. 2018). Many examples of multiple invasions have been reported for insects (Reitz and Trumble 2002). It may be hypothesized that such multiple invasions could also be very frequent for microbial pathogens which, like insects, show a great diversity, including complexes of cryptic species, and a propensity to be disseminated by human activities. The two successive pandemics of Dutch Elm Disease, caused by the spread of two fungal species (Ophiostoma ulmi and Ophiostoma novo-ulmi) outside their native area, is one of very few documented examples for fungi (Brasier and Buck 2001), but the availability of molecular methods may increasingly reveal the high significance of multiple microbial invasions. Multiple invasion events are interesting for addressing important questions in ecology, e.g. species coexistence or displacement. Historically, species displacements, intimately linked to invasions, have been considered an illustration of competitive exclusion, where the most competitive species eliminates the other species sharing the same ecological niche (DeBach 1966; Gao and Reitz 2017). However, the outcome of competitive interactions during invasions may not be as extreme, depending on the amount of niche overlap, the relative competitive ability of species for resource use, and spatial and temporal variations in the interaction (MacDougall et al. 2009; Gao and Reitz 2017). In the case of Dutch Elm Disease, the fitness advantage of O. novo-ulmi, which eventually displaced O. ulmi, was shown to include several components, such as direct competitive antagonism, exploitative competition through resource use and a wider climatic niche (Brasier and Buck 2001). Considering cryptic species in invasions also has important practical implications. The fact that many microbial, including fungal, pathogens occur as species complexes may impede the detection of a new, potentially more damaging, introduced pathogen if a closely related species causing the same symptoms has already invaded. Ash dieback, but also grapevine powdery mildew in Europe, are examples of diseases caused by an introduced pathogen species having close relatives in its native area, which constitute further risks for the target host species (Schröder et al. 2011; Gross and Han 2015). Here, we focus on a complex of closely related fungi causing the same tree disease: Erysiphe spp., the agents of oak powdery mildew.
Among the seven species in this clade, probably native to Asia (Takamatsu et al. 2007), three have been introduced in Europe (see the history of introductions in Mougou et al. 2008). E. alphitoides was the first reported and described, after the emergence of severe epidemics at the beginning of the twentieth century. The pathway of invasion of E. alphitoides remains unknown. The disease was widespread all over Europe within a few years. E. hypophylla, with a slightly different morphology and disease symptoms, was first reported in the 1960s, after a putatively independent introduction, and showed a North-East to South-West spread (Viennot-Bourgin 1968). Finally, E. quercicola has only been detected recently in Europe, once molecular markers became available for species identification. Its date of introduction therefore remains undetermined. A fourth oak powdery mildew species, in another genus (Phyllactinia roboris), is very rarely found in contemporary samples and might represent a native species, which was replaced by the invasive species. The current distribution of the three Erysiphe species in Europe, although different, shows overlap at different spatial scales, from the leaf to the continent (Desprez-Loustau et al. 2018). This pattern of coexistence suggested that niche differences, both temporal (Feau et al. 2012; Hamelin et al. 2016) and spatial (Desprez-Loustau et al. 2018), rather than competitive interactions, shaped the distribution of these species.

The aim of the present study was to improve our understanding of the multiple invasion of oak powdery mildew fungi in Europe. By using more than 200 herbarium specimens of infected oak leaves, dating from 1875 to 2002 and distributed all over Europe, our first goal was to date the invasion history of the three species. Next, we tried to elucidate potential processes underlying the observed invasion dynamics by investigating two mechanisms possibly responsible for invasive success: (i) the mating type ratio, which conditions sexual reproduction, characterized in the herbaria specimens for E. alphitoides and E. quercicola, and (ii) competitive interactions between the two species, studied with an experimental approach using fresh isolates.

Herbaria specimens

Specimens were obtained from six mycological collections: University Paul Sabatier of Toulouse (France); Fungarium Z + ZT, ETH Zurich, Switzerland; the forest pathogen collection of the forest protection service of Switzerland, Swiss Federal Institute for Forest, Snow and Landscape Research WSL (Birmensdorf, Switzerland); the Royal Botanic Gardens Fungarium (Kew, Richmond, United Kingdom); the Botanische Staatssammlung Munich (Munich, Germany); and the Natural History Museum in Copenhagen (Denmark). We tried to obtain as many specimens as possible from all over Europe, collected between the late nineteenth century and 1980, consisting of leaves infected by oak powdery mildew and recorded under different names, most often Microsphaera alphitoides, but also Erysiphe quercina or others (Online Resource 2).

Molecular identification of species

Specimens were examined under a binocular microscope for the presence of conidia and chasmothecia (the sexual stage). The presence of the latter was specifically noted and is reported in Online Resource 2. For each specimen, one small piece of leaf (approx. 1 cm²) was cut from a zone with conidia using a sterilized scalpel. The leaf pieces were put in Eppendorf tubes (2 mL) with two steel beads (4 mm diameter) and approximately 75 mg of white quartz sand to improve grinding.
Grinding was done with a Grinder Shaker Pulverizer Tissuelyser (Retsch MM 300) for 30 s at 30 Hz. DNA extraction was carried out with the Invisorb® Spin Plant Mini Kit (STRATEC Molecular GmbH, Berlin, Germany), according to the manufacturer's instructions. Species identification was based on PCR amplification and Sanger sequencing of the ITS region, as previously described for contemporary samples (Mougou-Hamdane et al. 2010; Desprez-Loustau et al. 2018). The protocols were, however, slightly modified to obtain sufficient quantities of DNA from old samples, by using 4 µl instead of 2 µl of undiluted DNA for the PCR and extending the number of cycles to up to 45 (instead of 35). Positive (DNA from contemporary samples) and negative (water) samples were included at all steps. The homology of sequences with E. alphitoides, E. hypophylla or E. quercicola was verified by nBLAST analysis (https://blast.ncbi.nlm.nih.gov/Blast.cgi), using the reference GenBank sequences AB292708, AB292716 and AB193591, respectively (Takamatsu et al. 2007). Identification of species within this complex, including mixtures, was possible thanks to six diagnostic SNPs in the amplified region, with specific alleles fixed in each species (three SNPs for each species-pair comparison) (Mougou-Hamdane et al. 2010; Desprez-Loustau et al. 2018).

Determination of mating types

Most powdery mildews studied so far have been shown to be heterothallic. In this case, sexual reproduction (leading to the production of chasmothecia, the ascocarps) is only possible if the two mating individuals have different sequences at the MAT1 locus, called the MAT1-1 and MAT1-2 idiomorphs (Brewer et al. 2011). In order to characterize the frequency of the mating types in the herbaria samples, primers were developed for partial regions of the two genes MAT1-1-1 and MAT1-2-1, corresponding to the two mating types MAT1-1 and MAT1-2, by using the Primer3 software, Release 4.1.0 (Untergasser et al. 2012; http://primer3.ut.ee/). A sequence of the MAT1-1-1 gene was available for E. alphitoides and E. quercicola from whole genome sequencing of two strains of these species (Dutech et al. 2020). For MAT1-2-1, sequences of closely related species (Erysiphe necator, E. pisi, Blumeria graminis), retrieved from GenBank (https://www.ncbi.nlm.nih.gov/genbank/), were used. After import into the Geneious 11.0.4 software and alignment with ClustalW, several primer pairs for each idiomorph (targeting both E. alphitoides and E. quercicola) were designed to amplify fragments of 120-250 bp. These primer pairs were tested on contemporary mono-spore isolates of E. alphitoides and E. quercicola, using a standard PCR protocol. Amplicons were sequenced by Sanger sequencing (GENEWIZ, Leipzig, Germany). The primer pairs for the initial MAT1 amplification of E. quercicola, E. alphitoides and E. hypophylla were the following for the MAT1-1-1 gene at the MAT1-1 idiomorph: Ea-Mat111F = 5'-TTA-TTT-CTG-CGG-CGT-ACT-CA-3' (forward), Ea-Mat111R = 5'-ACC-CCT-TTC-AGC-ACA-AAA-CA-3' (reverse); and for the MAT1-2-1 gene at the MAT1-2 idiomorph: mat1-2-F = 5'-CGA-AAR-AGT-AAA-CAT-GCC-GAW-AC-3' (forward), mat1-2-R2 = 5'-CTT-CAG-AAG-ATT-TCC-GYG-GTC-3' (reverse). Representative sequences of both mating types of the three species were deposited in GenBank (accession numbers MT592825-MT592842). We checked by PCR amplification that a single idiomorph was detected in mono-spore isolates (results not shown), supporting heterothallism in E. alphitoides and E. quercicola, as generally reported for powdery mildew fungi.
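As an illustration of the SNP-based species calling described in the identification section above, the following sketch classifies a sample from base calls at diagnostic positions; the positions and alleles used here are invented placeholders, not the real diagnostic ITS sites (which are given in Mougou-Hamdane et al. 2010).

```python
# Hypothetical diagnostic positions and species-fixed alleles
# (placeholders, NOT the real ITS sites):
DIAGNOSTIC = {
    "E. alphitoides": {101: "A", 154: "T", 198: "G"},
    "E. quercicola":  {101: "C", 154: "C", 198: "G"},
    "E. hypophylla":  {101: "A", 154: "T", 198: "A"},
}

def call_species(base_calls):
    # base_calls: position -> set of bases observed in the Sanger trace
    # (an IUPAC ambiguity code expands to several bases, which is how
    # mixed infections by two species are detected).
    return [sp for sp, alleles in DIAGNOSTIC.items()
            if all(a in base_calls.get(p, set()) for p, a in alleles.items())]

print(call_species({101: {"A", "C"}, 154: {"T", "C"}, 198: {"G"}}))
# -> ['E. alphitoides', 'E. quercicola'], i.e. a mixed sample
```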
After preliminary tests on herbaria samples, we selected one pair of primers for each mating type, corresponding to amplicons of short size (< 150 bp), to optimize the probability of detecting these single-copy genes in ancient DNA. The primer pairs finally selected were the following for the MAT1-1 idiomorph: mat1- The amplification reactions were done on a Veriti Thermal Cycler (Applied Biosystems). The PCR included an initial denaturation at 94 °C for 3 min, followed by 40 cycles of denaturation at 94 °C for 30 s, hybridization at 59 °C for 40 s and extension at 72 °C for 1 min, and a final elongation at 72 °C for 5 min. The PCR products were separated by gel electrophoresis at 70 V for 25 min on a 3% (wt/vol) agarose gel in TAE buffer containing GelRed (1× final concentration, Biotium) and were visualized under UV light. Two replicate analyses were made for most herbaria samples. A few amplicons from herbaria were sequenced to check that mating-type genes had been amplified. Since natural infections can be caused by several genotypes of Erysiphe, each sample could theoretically be assigned to MAT1-1, MAT1-2 or MAT1-1 & MAT1-2. The mating type ratio of a group of samples was calculated as the number of samples with MAT1-1 (including MAT1-1 & MAT1-2) to the number of samples with MAT1-2 (including MAT1-1 & MAT1-2). This ratio is expected to be close to 1:1 in sexually reproducing populations.

Mixed inoculation experiment

In order to investigate interactions between E. alphitoides and E. quercicola, an experiment was set up using inoculations of oak leaves with spore suspensions containing different proportions of each species: A = 100% E. alphitoides, B = 75% E. alphitoides + 25% E. quercicola, C = 50% E. alphitoides + 50% E. quercicola, D = 25% E. alphitoides + 75% E. quercicola, E = 100% E. quercicola. The design was based on the principle of the replacement series, widely used for the study of interference in mixtures of species (Jolliffe 2000), in which the total density of individuals is kept constant. Thus, in our case, the same total spore concentration was used for each spore suspension (pure inoculum and mixtures). Since the same total density of spores was inoculated at each point, what is tested through the inoculum (pure vs. mixed) and treatment (A-B-C-D-E) effects is whether inter- and intra-specific interference are equal. Three experiments, using different pairs of E. alphitoides / E. quercicola isolates, were performed. The Erysiphe isolates had been obtained from naturally infected leaves collected in Cestas (near Bordeaux, France) and multiplied from a single lesion by successive inoculations on oak seedlings to obtain a sufficient number of spores for the experiment. The identity of the E. alphitoides and E. quercicola isolates was checked by ITS sequencing. The different mixtures were obtained as follows. First, concentrated spore suspensions (approx. 100,000 spores/mL) of each species were obtained by harvesting spores from mono-spore isolate "cultures" maintained on excised oak (Quercus robur) leaves kept in Petri dishes on moist filter paper (since powdery mildew fungi, as obligate biotrophs, cannot be cultured on axenic media) and adding them to an Eppendorf tube containing sterile water. The concentration of each spore suspension was assessed by counting spores with a hemacytometer. The concentrations of the pure suspensions of E. alphitoides and E. quercicola were then adjusted to 60,000 spores/mL by adding sterile water, to obtain the A and E spore suspensions, respectively. The B, C and D suspensions were then obtained by mixing adequate volumes of A and E.
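Because the two pure suspensions were adjusted to the same concentration, the volume fraction of each stock in a mixture equals its target spore fraction; a small sketch of the arithmetic (the total volume is an arbitrary example):

```python
def mixture_volumes(frac_a, total_ul):
    # Equal stock concentrations (60,000 spores/mL), so the volume
    # fraction of each stock equals its target spore fraction.
    return frac_a * total_ul, (1.0 - frac_a) * total_ul

for label, frac in [("B", 0.75), ("C", 0.50), ("D", 0.25)]:
    v_a, v_e = mixture_volumes(frac, total_ul=1000.0)
    print(f"{label}: {v_a:.0f} uL of A + {v_e:.0f} uL of E")
```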
In all these steps, spore suspensions were kept on ice to prevent spore germination or degradation. Inoculations were performed on detached oak leaves on moist filter paper in Petri dishes. The leaves were taken from oak (Quercus robur) seedlings which had been grown in a greenhouse and were selected at a susceptible stage, with at least five developing leaves on the first flush. A 2.5 µL droplet of spore suspension (corresponding to approx. 150 spores) was put on each leaf at the same position (in the middle of the leaf, between lateral veins). Each of the five treatments (A to E) was applied to five leaves on each of 25 seedlings (i.e. 125 leaves in total). The rank of the leaves (from oldest = 1 to youngest = 5) was recorded so that each treatment was applied to the same number of leaves of a given rank. Inoculated leaves were left in the microbiological hood with high ventilation for one hour with the cover lids of the Petri dishes open, so that the water droplets could dry out (since spores cannot germinate in free water). Leaves in closed Petri dishes were then incubated for 10 days in a growth chamber at 22 °C with a 12 h-12 h photoperiod. At the end of the incubation period, a disc (1 cm diameter) containing the inoculated zone was cut from each leaf. The quantification of E. alphitoides and E. quercicola spores was carried out by droplet digital PCR (ddPCR), a method providing absolute quantification of specified DNA targets, using a previously developed protocol (Online Resource 1). The ddPCR analyses provided a number of target DNA copies per sample, for E. alphitoides and E. quercicola separately, which was used as a proxy of the number of spores of each species produced at each inoculation point.

Data analyses were performed by using several mixed linear models with different dependent variables (Table 1). In all models, experiment was included as a fixed effect (at least initially, then removed if non-significant) and seedling-nested-in-experiment as a random factor. A leaf rank effect (and its interactions with the other effects) was included as a fixed effect, since leaf ontogeny is well known to have a strong effect on powdery mildew susceptibility (Edwards and Ayres 1982). In the first model, we analysed spore production in the different treatments by using the total number of target DNA copies (E. alphitoides + E. quercicola) per sample as the dependent variable, after log transformation (which normalized the residuals). In this first model, we tested both a species effect on spore production (when comparing A and E) and interactive effects between species (i.e. inter-specific interaction greater than intra-specific) by comparing pure vs. mixed inoculum and the treatment effect. In the second model, the dependent variable was a proxy of "infection efficiency" (or spore yield) per species, calculated for each inoculated leaf as the ratio of the number of species-specific target DNA copies detected 10 days after inoculation to the theoretical number of spores inoculated for that species (i.e. 150, 112.5, 75 and 37.5 in the treatments corresponding to frequencies of 100, 75, 50 or 25% in the inoculum mixture, respectively). The variable was not defined in the E treatment for E. alphitoides and in the A treatment for E. quercicola. It was log-transformed for analyses. The effects (explanatory variables) tested on spore yield were Species (E. alphitoides vs. E. quercicola), Inoculum (pure vs. mixed), Treatment (Inoculum), Leaf rank and Leaf rank * Species. In the third model, the aim was to compare the initial and final frequencies of E. alphitoides and E. quercicola in the mixed inoculations (B, C and D treatments). An index of competitiveness was calculated, for each inoculation point, as the difference between the final frequency, computed as the ratio of the target DNA copy number of one species to the total number of DNA copies, and the initial frequency. The analysis was performed on only one species (i.e. E. alphitoides), since the proportions of the two species are complementary. We tested whether this competitive index (which should be equal to 0 under the null hypothesis of equal inter- and intra-specific interference) varied according to Leaf rank.
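The two per-leaf quantities entering the second and third models can be computed directly from the ddPCR copy numbers; here is a minimal sketch, with hypothetical copy numbers for a leaf inoculated with treatment B (75% E. alphitoides):

```python
def spore_yield(copies, frac_in_inoculum, total_spores=150):
    # Infection efficiency proxy: detected ddPCR copies of one species
    # divided by the number of its spores inoculated.
    return copies / (total_spores * frac_in_inoculum)

def competitive_index(copies_ea, copies_eq, initial_frac_ea):
    # Final minus initial frequency of E. alphitoides at one point
    # (0 expected under equal inter- and intra-specific interference).
    return copies_ea / (copies_ea + copies_eq) - initial_frac_ea

print(spore_yield(3200.0, 0.75))                 # E. alphitoides yield
print(spore_yield(600.0, 0.25))                  # E. quercicola yield
print(competitive_index(3200.0, 600.0, 0.75))    # > 0: EA gained ground
```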
Data analysis was implemented with the Proc Mixed procedure in SAS software, Version 9.4 (SAS Institute Inc., Cary, NC, USA).

Identification of Erysiphe species in herbaria specimens

In more than 90% of the samples (191 out of 204), the amplified sequences showed strong homology with Erysiphe species. All Erysiphe sequences had ≥ 99% homology with the E. alphitoides, E. quercicola or E. hypophylla reference sequences, except for the two oldest specimens, which corresponded to another taxon (Figs. 1, 2). Strong changes in the relative frequencies of the three species occurred during the second half of the twentieth century (Fig. 1). E. alphitoides was detected for the first time in a sample collected in 1921 in Austria. From that date until 1960, this species was found occasionally, alone or in mixture with E. quercicola, in several countries (also in France, Denmark and Switzerland), at an overall frequency never exceeding 25%. A dramatic change occurred in the following decades, after 1960, with E. alphitoides becoming dominant, i.e. found in approx. 75% of all samples and in almost all countries sampled. E. hypophylla was first detected in a sample collected in 1947 in Switzerland. It was later also found in samples from Finland and Switzerland, but its overall frequency remained low, less than 20% of all samples.

Presence of chasmothecia and distribution of mating types

Except in the two oldest samples (1875) from Italy, no chasmothecia were observed until 1920 in any of the herbaria specimens, all identified as E. quercicola or undetermined. From that year on, chasmothecia were detected regularly in samples of E. quercicola (Fig. 3, Online Resource 2). Similarly, no chasmothecia were observed in the seven oldest samples of E. alphitoides, from 1923 to 1954, whereas they were detected in 81% of samples from 1956 onwards. Unlike E. alphitoides and E. quercicola, all ten samples of E. hypophylla had chasmothecia from the species' first detection in 1947 (Online Resource 2). The mating type was successfully determined by PCR amplification in 130 out of the 204 samples, but usually for only one replicate. From 1900 to 1920, 21 out of 22 samples of E. quercicola had a positive signal for the MAT1-2 idiomorph. Only one sample, from 1912, revealed a mixture of MAT1-2 and MAT1-1. After 1920, the frequency of MAT1-1 increased greatly, and the ratio between the two mating types remained close to 1:1 in the following decades (Fig. 3). A highly significant association was found between the presence of chasmothecia and the presence of the two mating types in a sample (Chi² = 20.08, df = 1, P < 0.0001). Moreover, the mating type ratio followed a similar temporal trend as the frequency of chasmothecia (Fig. 3).
Fig. 1 Temporal changes in the frequency of Erysiphe species (EA = E. alphitoides; EQ = E. quercicola; EH = E. hypophylla), alone or in mixture, detected in oak herbaria samples

The mating type of only one sample of E. alphitoides could be characterized before 1960, as MAT1-1 & MAT1-2 (specimen collected in 1940). From 1960 onwards, E. alphitoides was more frequent, and both the mating type ratio and the frequency of chasmothecia were close to 1, e.g. 0.90 (on 19 samples) and 0.96 (on 23 samples), respectively, between 1960 and 1970. The three E. hypophylla isolates (collected in 1948, 1957 and 1961) from which the mating type could be characterized were MAT1-1 or MAT1-1 & MAT1-2.

Co-inoculation experiment

All detailed statistical analyses can be found in Online Resource 1. The E. alphitoides and E. quercicola isolates used in our experiment exhibited similar levels of pathogenicity on average. The total number of DNA copies did not significantly differ between E. alphitoides and E. quercicola when inoculated as pure inoculum, i.e. between the A and E treatments in the first model (t = 0.36, P = 0.7193). Similarly, the infection efficiency, calculated as the ratio of DNA copies detected to spores inoculated, was comparable for the two species. Differences between pure (one species alone) and mixed inoculations were demonstrated. A tendency towards lower spore production (as estimated by target DNA copies detected by ddPCR) in the mixed inoculum treatments (i.e. in B, C and D) compared to pure inoculum (Fig. 4) was observed (first model: F 172,1 = 3.60; P = 0.0595). Furthermore, for both species, infection efficiency was greater in pure inoculations than in inoculations in mixture with the other species (second model, Inoculum: F 329,1 = 23.03; P < 0.0001) (Fig. 5).

Fig. 5 Mean infection efficiency (estimated as the ratio of targeted DNA copies detected 10 days after inoculation to the estimated number of spores used for inoculation) for E. alphitoides and E. quercicola when inoculated alone or in mixture with the other species

Importantly, leaf rank was shown to strongly affect infection by each species and their interaction. The infection efficiency was highest on the youngest leaf for both species, but a stronger decrease was observed on older leaves for E. quercicola than for E. alphitoides, resulting in only 24% of maximal infection efficiency on leaf 1 for E. quercicola (as compared to leaf 5), versus 53% for E. alphitoides (Fig. 6; significant Leaf rank effect F 329,4 = 2.59; P = 0.0369 and Leaf rank × Species interaction F 329,4 = 6.88; P < 0.0001 in the second model). This resulted in a greater infection efficiency for E. alphitoides than for E. quercicola on the oldest leaves (t = 3.21, P = 0.0015). In agreement, the difference between final and initial frequencies for each species (third model) varied according to leaf rank (F 86,4 = 4.54; P = 0.0022; Fig. 7). On the oldest leaves (F1 and F2), E. alphitoides was detected at higher frequencies 10 days after inoculation than in the initial inoculum, corresponding to a significantly positive competitive index (t 86 = 3.11, P = 0.0025 for F1; t 86 = 2.24, P = 0.0275 for F2; the 0 value of the competitive index was not included in the confidence interval; t values were non-significant for the other leaves).

Discussion

Our study, based on analyses of herbaria specimens, provided several unexpected findings that challenge the currently admitted scenario of oak powdery mildew invasion in Europe based on historical reports (Mougou et al. 2008).
The new scenario, strongly supported by evidence, resolves apparent contradictions (detailed below) in the history of this invasion by a complex of closely related species. Our results suggest that the first invasive species experienced niche contraction after over-invasion by the second species, and they point to several mechanisms potentially explaining this outcome. Our case study thus improves our understanding of multiple invasions by closely related, functionally equivalent species (Russell et al. 2014), which is of particular significance for fungi.

Evidence of a silent epidemic (lag phase) before records of disease emergence

The first striking finding of our study is that, with the help of herbaria specimens, it was possible to date the presence of oak powdery mildew to as early as the year 1904 (excluding the two oldest specimens corresponding to another taxon), i.e. three years earlier than the first report of symptoms in the literature (Hariot 1907). It should be pointed out here that all the time estimates based on herbaria samples have to be taken with caution since they were obtained from a limited sampling of herbaria specimens. An even earlier introduction is thus likely. Our finding helps to explain the fact that the species was detected in many European countries (England, Germany, Austria, Belgium, Portugal, Italy, Scandinavia, Switzerland) in 1908, only a single year after the first mention in the literature. Obviously, the epidemic had started before but was only noticed by a few mycologists and was not reported in the literature. Lags in detection (i.e. the time between entry and discovery of an invasive species) are very common in unintentional introductions since populations often grow exponentially in the early phases of invasion and become noticed only when they reach high density or cause significant damage (Crooks 2005). Such a lag was already reported for some fungal pathogens, e.g. H. fraxineus (Gross et al. 2014).

Fig. 6 Mean infection efficiency (estimated as the ratio of targeted DNA copies detected 10 days after inoculation to the estimated number of spores used for inoculation) for E. alphitoides and E. quercicola on developing leaves of different ages (from oldest = 1, to youngest = 5)

Fig. 7 Variation in E. alphitoides competitive index (estimated as the difference between its final and initial frequencies in inoculated leaves) according to leaf rank (from oldest = 1 to youngest = 5). The dotted line and confidence interval at 95% correspond to the linear regression of the index (final − initial frequency) on leaf rank used as a quantitative variable

The history of powdery mildew in Europe revisited

The second striking result of our study is the detection of E. quercicola, and not E. alphitoides, in herbaria samples from 1900 to 1920, and predominantly until the 1960's. This strongly suggests that E. quercicola arrived before E. alphitoides and caused the emergence of oak powdery mildew in Europe at the beginning of the twentieth century. One of our analysed samples was collected in 1908 by Hariot, who made the first disease report (Hariot 1907). However, E. alphitoides was described in 1912 as the new species associated with the disease (Griffon and Maublanc 1912; Mougou et al. 2008). This apparent contradiction can be explained by the fact that the current E.
alphitoides description is not based on the holotype (= the original sample on which the species description was initially based) but on a neotype, consisting of a Swiss sample of oak powdery mildew collected in 1999. At the time of this neotypification, the holotype of E. alphitoides had been lost and E. quercicola was only known from Japan (Takamatsu et al. 2007). When looking at the descriptions and drawings of chasmothecia included in the early reports of oak powdery mildew in Europe (e.g. Arnaud and Foex 1912; Griffon and Maublanc 1912), they are indeed more suggestive of E. quercicola than of E. alphitoides, judging from the relative length of appendages and the diameter of chasmothecia, which are considered differential features between the two species (Takamatsu et al. 2007; Online Resource 1). Although the morphology of the anamorphic (asexual) stage of the two species has generally been considered similar, a difference in conidia morphology was found in this study (Online Resource 1). Our findings supporting E. quercicola as the causal agent of the emergence of powdery mildew are also consistent with early observations on the pathogen cycle. From 1907 to 1911, the new oak powdery mildew was reported as overwintering in the anamorphic stage, as mycelium or conidia in buds, leading to a typical symptom at budburst called flagshoots, i.e. strongly infected shoots with young developing leaves covered with conidia. However, this symptom has always been associated with E. quercicola and never with E. alphitoides in recent samples (Feau et al. 2012).

Evidence of a (cryptic) over-invasion leading to spatial and temporal niche contraction of the first introduced species

The detection of E. quercicola in herbaria samples all over Europe, including Nordic countries, from 1900 to 1960 was another unexpected finding of our study, since it is now known almost exclusively from southern Europe (Desprez-Loustau et al. 2018). The strong increase in E. alphitoides frequency in herbaria samples from its first detection in 1920, leading to its predominance after the 1950's (and still nowadays, Desprez-Loustau et al. 2018), strongly suggests that E. alphitoides was the cause of the partial displacement of E. quercicola. E. alphitoides may have been introduced a few years after E. quercicola, before growing exponentially and spreading. Alternatively, the two species may have been introduced together, but in conditions giving E. quercicola a strong initial advantage. Some authors suggested that oak powdery mildew first appeared in Portugal at the end of the nineteenth century and that it was introduced (possibly much earlier) not on contaminated oaks but on other hosts, such as tropical plants (Torrend 1909, Online Resource 1). Interestingly, molecular phylogenetic analyses and cross inoculations suggest close relationships (even conspecificity) between powdery mildew fungi on tropical plants, especially mango, and on oaks (Boesewinckel 1980; Desprez-Loustau et al. 2017; Limkaisang et al. 2006; Ajitomi et al. 2020). Both E. alphitoides and E. quercicola have been found on mango trees and other tropical plants, but most frequently E. quercicola (Nasir et al. 2014). One can therefore hypothesize an introduction of both E. alphitoides and E. quercicola on common hosts, but with E. quercicola at higher densities than E. alphitoides, maybe also due to its ability to survive in buds. Moreover, if this introduction took place in southern Europe, E. quercicola may have found both climatic conditions, e.g.
warmer temperatures in winter, and hosts, e.g. Pyrenean oak (Q. pyrenaica = Q. toza), favouring its development (Takamatsu et al. 2007; Desprez-Loustau et al. 2018). Unfortunately, we could not obtain herbaria specimens from Portugal and Spain supporting this early history. With the E. alphitoides over-invasion, especially after the 1960's, E. quercicola would have been progressively restricted to southern Europe. As a general rule, niche contraction of a species as a result of exposure to a threat is expected to restrict the niche to areas where the threat impact is reduced, often in relation to environmental heterogeneity, and thus to areas where the competitive ability of the species is maintained (Scheele et al. 2017). Similarly, niche shift, or ecological character displacement, has been described as a process facilitating coexistence in recently encountered species with initially similar niches, through niche partitioning (Stroud et al. 2019). Herbaria analyses also suggest that the niche contraction of E. quercicola had a temporal dimension. Most herbaria specimens were collected at the end of summer or in autumn (August to November), probably because mycologists were looking for fruiting bodies (chasmothecia), which are the main structures used for diagnostic purposes (but see Online Resource 1 about differences in the morphology of conidia). This suggests that, in the beginning of the twentieth century, E. quercicola was able to develop and cause symptoms all along the season. Although herbaria specimens cannot be considered a systematic sampling, there seems to be a contrast with the recent period, since several studies showed that E. quercicola is mostly associated with spring symptoms (on the first flush) and can hardly be found at the end of the season (Feau et al. 2012; Hamelin et al. 2016). Temporal niche contraction has been far less documented than spatial niche contraction (Scheele et al. 2017) but some examples can be found (Ishii and Higashi 2001).

Leaf ontogeny and ontogenic resistance as a crucial factor in species interference

How can the spatial and temporal niche reduction of E. quercicola, leading to its partial displacement as a result of the over-invasion of E. alphitoides, be explained? Although several mechanisms may be involved in species displacement after the invasion of another species, competition between two newly interacting species is often considered a key component (Gurevitch and Padilla 2004; Gao and Reitz 2017). But assessing the fitness advantage of one species over another is challenging since the whole interacting space of two species is not easily tractable. Our experimental approach with E. alphitoides and E. quercicola is limited to interaction effects during leaf infection after inoculations with high spore loads to maximize potential interference effects. Our results suggest some level of negative interaction between the two species, as shown by the decreased infection efficiency of each species in mixed inoculum with the other species. Direct antagonism cannot be ruled out in the absence of more detailed investigations. However, a more likely explanation would be an effect of competition, including both so-called interference and exploitative competition, i.e. competition for space and resources (Boddy 2000). Most interestingly, this competition effect was strongly affected by leaf phenology.
The very young leaves appeared to be the most favourable for sporulation of the two fungal species, in agreement with previous experiments showing ontogenetic resistance of mature oak leaves to powdery mildew (Edwards and Ayres 1982). However, leaf aging had a strikingly different effect on E. alphitoides and E. quercicola. Whereas susceptibility to E. quercicola strongly decreased during leaf development, E. alphitoides appeared more plastic, i.e. its sporulation was much less reduced on older leaves. A significant competitive advantage of E. alphitoides could be detected on these older leaves. This small experimental difference may have a great significance in nature by providing E. alphitoides with a much wider susceptibility window during the whole growing season. The importance of temporal aspects or phenology in the interaction between closely related species and their coexistence or displacement has already been emphasized (Hamelin et al. 2016; Gao and Reitz 2017). The difference between E. alphitoides and E. quercicola in their ability to infect older leaves, and not only the very young leaves of currently growing shoots, may have had crucial implications in terms of reproductive strategy. Indeed, sexual reproduction in powdery mildew fungi, leading to the formation of chasmothecia, occurs at the end of the season, on senescing leaves. The observation of the sexual morph (chasmothecia) in herbaria specimens identified as E. quercicola between 1920 and 1980 strongly suggests that these chasmothecia belonged to E. quercicola, as also supported by a few sequences obtained from DNA tentatively extracted from fruiting bodies isolated from the surrounding mycelium (data not shown). The absence of chasmothecia in the early invasion phase could be simply explained by the absence or very low frequency of one mating type, due, for example, to a first clonal expansion in Europe of a few genotypes mostly bearing MAT1-2. This explanation was suggested by early observers (Behrens 1921), and the absence of sexual fruiting bodies in the first years following introduction has been documented in several other cases with heterothallic fungi (Paoletti et al. 2006; Boron et al. 2016), including grapevine powdery mildew (Yarwood 1957). Chasmothecia associated with E. quercicola infection were increasingly observed while the disequilibrium in the mating type ratio decreased in the subsequent decades. However, the presence of E. quercicola chasmothecia in herbaria specimens was mainly observed in a period when E. alphitoides was at low frequency, and they are almost never observed in contemporary samples (Mougou et al. 2008; Marçais et al. 2009; Feau et al. 2012). It may be hypothesized that when E. alphitoides spread in Europe, its better ability to infect older leaves compared to E. quercicola led to its much higher frequency on leaves in autumn. This, in turn, could have favoured a greater sexual reproductive success, which possibly contributed to its invasive success (Philibert et al. 2011; Bazin et al. 2014). Therefore, our findings could be interpreted as a form of reproductive (sexual) interference, where the competitive advantage of E. alphitoides on older leaves "constrained" E. quercicola to develop on the youngest leaves, only in the asexual stage, while E. alphitoides had free space for sexual reproduction in autumn. Reproductive interference has been implicated in animal or plant species displacements (Gao and Reitz 2017; Nishida et al. 2017), but never in fungi to our knowledge. Reproductive interference between E. alphitoides and E.
quercicola could explain the geographic niche reduction of E. quercicola. Indeed, sexual fruiting bodies in powdery mildew fungi are also resistance forms for overwintering. In their absence, overwintering in the asexual form in buds is limited in very cold winters (Marcais et al. 2017). The ability of E. quercicola to infect leaf primordia and survive winter in buds, giving it a priority effect in spring, is therefore limited to warmer areas, which can explain the contemporary distribution of E. quercicola restricted to southern Europe, in the optimal part of the historical distribution (Scheele et al. 2017).

Conclusions

Our results provide new insights into the history of the oak powdery mildew invasion in Europe. They strongly suggest that E. quercicola was the first invading powdery mildew species on oaks in Europe in the early twentieth century, although another Erysiphe species was detected on oaks in 1875 in Italy, and that E. alphitoides over-invaded in a second stage. Interestingly, another multiple invasion involving a complex of Erysiphe species associated with oak powdery mildew, but with E. alphitoides overtaken by an American species, was reported to have occurred in South Africa in the middle of the twentieth century (Gorter 1984). This other multiple invasion would deserve further investigation since molecular species identifications were not available at that time and have not been applied since. As concerns Europe, our results showed that E. quercicola was not totally displaced by E. alphitoides but experienced severe niche contraction, in several of its temporal (season) and spatial (age of leaves, geographic range) dimensions, as well as being affected in its reproduction. Our findings related to the outcome of the successive invasions of E. quercicola and E. alphitoides fit well with a theoretical model suggesting that the coexistence of the two species could be mediated by a trade-off between within-season transmission, in favour of E. alphitoides, and between-season transmission, in favour of E. quercicola only in non-limiting winter conditions (Hamelin et al. 2016; Desprez-Loustau et al. 2019). Beyond the reconstruction of the history of an important forest disease, our investigation illustrates the complexity of a multiple invasion in a microbial complex, emphasizing the potential role of subtle differences in the biology of closely related species, rather than strong direct competition, in the observed pattern of partial species displacement. Many other components not explored here may be involved, such as interactions with natural enemies (Xing et al. 2017; Kiss 2008) or human interference (Gao and Reitz 2017). For this latter point, it is interesting to note that silvicultural practices in oak forests greatly changed in the twentieth century, partly in response to the powdery mildew invasion (Viney 1970), favouring high forest at the expense of the coppices that were most frequent before. This could have also favoured the predominance of E. alphitoides, less dependent than E. quercicola on new succulent shoots (Desprez-Loustau et al. 2018). Improved understanding of multiple invasions, especially for microbes that often come unnoticed, is crucial owing to their increased frequency and negative impacts. Microbial pathogens often occur as species complexes in their natural area (Rouxel et al. 2013; Gross and Han 2015; Zheng and Zhuang 2013), constituting reservoirs of potential multiple species introductions.
Keeping in mind the possibility of an over-invasion by a related pathogenic species, for a given disease caused by an introduced pathogen, may have important implications for policy and management (Landolt et al. 2016). Even if over-invasion does not necessarily involve increased aggressiveness (Mariette et al. 2016), new risks may emerge, for example through hybridization between introduced species, as already demonstrated for Dutch Elm Disease and Alder Phytophthora decline (Brasier and Buck 2001; Ioos et al. 2006). Finally, we would like to emphasize the importance of biological collections to preserve current and past biological samples, which is the only way to keep track of changes and provide a full understanding of emerging epidemics. Interestingly, the history of the iconic potato late blight was also revisited by using herbaria specimens, which showed that the lineage previously thought to be responsible for the Great Irish Famine was actually a secondary invader which replaced the original lineage (Yoshida et al. 2014). It would be interesting to further investigate whether interactions between lineages of the same species or of closely related species involve some common mechanisms (Gladieux et al. 2015).
Decline in Carbon Monoxide Transfer Coefficient in Chronic Obstructive Pulmonary Disease Background: Although a reduced carbon monoxide transfer coefficient (Kco) is an important feature in chronic obstructive pulmonary disease (COPD), how it changes over time and its relationship with other clinical outcomes remain unclear. This study evaluated longitudinal changes in Kco and their relationship with other clinical outcomes. Methods: We evaluated patients with COPD from the Korean Obstructive Lung Disease cohort, followed up for up to ten years. Random coefficient models were used to assess the annual change in Kco over time. Participants were categorized into tertiles according to Kco decline rate. Baseline characteristics and outcomes, including changes in FEV1 and emphysema index, incidence of exacerbations, and mortality, were compared between categories. Results: A decline in Kco was observed in 92.9% of the 211 enrolled participants with COPD. Those with the most rapid decline (tertile 1) had a lower FEV1/FVC% (tertile 1: 43.8% ± 9.7%, tertile 2: 46.4% ± 10.5%, tertile 3: 49.2% ± 10.4%, p = 0.008) and a higher emphysema index at baseline (27.7 ± 14.8, 22.4 ± 16.1, 18.1 ± 14.5, respectively, p = 0.001). Tertile 3 showed a lower decline rate in FEV1 (16.3 vs. 27.1 mL/yr, p = 0.017) and a lower incidence of exacerbations (incidence rate ratio = 0.66, 95% CI = 0.44–0.99) than tertile 1. There were no differences in the change in emphysema index and mortality between categories. Conclusion: Most patients with COPD experienced Kco decline over time, which was greater in patients with more severe airflow limitation and emphysema. Decline in Kco was associated with an accelerated decline in FEV1 and more frequent exacerbations; hence, this should be considered as an important outcome measure in further studies. Introduction Chronic obstructive pulmonary disease (COPD) is characterized by persistent airflow limitation and related chronic respiratory symptoms due to airway and/or parenchymal abnormalities. COPD is primarily caused by air pollution, including smoking, and influenced by host factors [1,2]. Many patients with COPD experience acute exacerbations with disease progression, leading to mortality and morbidity, creating an economic and social burden worldwide [3,4]. A decline in forced expiratory volume in one second (FEV1) is a well-known indicator of disease progression in COPD and has been used as an important outcome parameter in many COPD studies [5][6][7]. However, it is known that FEV1 alone does not adequately reflect disease severity such as parenchymal destruction, decreased exercise performance, and clinical symptoms [8][9][10]. The carbon monoxide transfer coefficient (Kco) is regarded as an index to assess the efficiency of alveolar transfer of carbon monoxide by measuring the pulmonary gas exchange across the alveolar-capillary membrane [11]. Kco could decline not only with parenchymal destruction but also with small airway diseases and microvascular destruction [11], which are important pathological changes seen in COPD but not necessarily linked to the degree of airway obstruction [12,13]. A decreased Kco is also associated with increased pulmonary venous pressure and cardiac problems that affect the prognosis of COPD [11,14,15]. Thus, we could postulate that Kco is also related to COPD prognosis. However, to date, limited data are available on the association between Kco and COPD outcomes [7]. 
Moreover, changes in Kco over time and the relationship between these changes and COPD outcomes, such as a decline in FEV1, acute exacerbations, and mortality, have never been reported. Therefore, we conducted a multicentre prospective cohort study among patients with COPD to examine the variability of changes in Kco over time during follow-up and to investigate the relationship between these changes and COPD outcomes including the annual decline rate of FEV1, rate of exacerbations, and all-cause mortality [16]. Study Design and Participants Data collected on the Korean Obstructive Lung Disease (KOLD) cohort were used in this study. We prospectively recruited patients diagnosed with obstructive lung disease from pulmonary clinics of 14 referral hospitals in Korea between June 2005 and October 2012, and followed them up for up to ten years. Details of this cohort are reported previously [16]. The KOLD cohort initially excluded patients who had respiratory diseases other than obstructive lung disease, and patients with comorbidities that can interfere with the study results (e.g., malignancies, congestive heart failure, chronic renal failure, uncontrolled hypertension). To include only COPD patients and evaluate the change in Kco, patients who met the following inclusion criteria were enrolled in the present study: (1) were older than 40 years; (2) had post-bronchodilator FEV1/forced vital capacity (FVC) < 0.7; (3) were current or ex-smokers with a smoking history of over 10 pack-years; and (4) had more than two annual measures of Kco. Baseline information of participants included demographic characteristics and smoking status, symptom scores from the St. George's respiratory questionnaire (SGRQ) and the modified Medical Research Council (mMRC) dyspnea-scale, and history of acute exacerbations in the year preceding enrolment. In addition to regular follow-ups at 3-month intervals, reports were collected when patients experienced acute exacerbations or all-cause mortality throughout the follow-up period. Acute exacerbation was defined as any event that required an unplanned visit to an emergency room or clinic with or without admission due to the aggravation of respiratory symptoms. Written informed consent was provided by all included patients at baseline enrolment into the cohort. The study was conducted according to the principles of the Declaration of Helsinki. This study design was approved by the ethics committee of the Seoul National University Hospital Institutional Review Board (IRB no. 1611-013-804). As the KOLD study was initiated in 2005, the protocol was not registered in an international clinical trial registry. Lung Function Measurements Pulmonary function tests were performed according to the American Thoracic Society guidelines using Vmax 22 (Sensor Medics, Yorba Linda, CA, USA) and PFDX (Medgraphics, St. Paul, MN, USA) [17]. Post-bronchodilator FEV1 and FVC, total lung capacity (TLC), residual volume (RV), and Kco were measured at baseline and at each annual visit. Post-bronchodilator spirometry values were measured 15 min after administering 400 µg of salbutamol. Bronchodilator reversibility was defined as an increase in FEV1 that was 12% above the baseline value and at least 200 mL after administration [18]. Lung volumes, TLC and RV, were measured using body plethysmography with V6200 (CareFusion, San Diego, CA, USA), PFDX, or Vmax 22 [17]. 
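Since the reversibility criterion above is stated numerically, the following minimal sketch (a hypothetical helper, not taken from the study's analysis code) shows how it translates into a simple check.

```python
# Bronchodilator reversibility as defined above: an increase in FEV1 of
# at least 12% above the baseline value AND at least 200 mL after
# administration of 400 ug of salbutamol.

def is_reversible(fev1_pre_ml: float, fev1_post_ml: float) -> bool:
    """Return True if the post-bronchodilator FEV1 meets both thresholds."""
    delta_ml = fev1_post_ml - fev1_pre_ml
    return delta_ml >= 200.0 and delta_ml >= 0.12 * fev1_pre_ml

# Example: 1500 -> 1750 mL is a 250 mL (16.7%) increase, hence reversible.
assert is_reversible(1500.0, 1750.0)
```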
Values for diffusing capacity (DLco) and predicted alveolar volume (VA) were measured by assessing the single-breath carbon monoxide uptake (Vmax 22 or PFDX). Measures of DLco were adjusted for hemoglobin concentration using the equation provided by the American Thoracic Society guidelines [19]. Kco values were calculated by dividing measures of hemoglobin-adjusted DLco (mmol/min/mmHg) by VA (L) [7,20].

Chest CT Measures

Volumetric computed tomography (CT) scans were taken upon enrolment, after one year, and subsequently at intervals of three years. CT scans were taken at full inspiration and expiration using three 16-multidetector CT scanners produced by different manufacturers (Somatom Sensation 16, Siemens Medical Systems, Bonn, Germany; GE Lightspeed Ultra, General Electric Healthcare, Milwaukee, WI, USA; and Philips Brilliance 16, Philips Medical Systems, Best, The Netherlands). Images of the whole lung were extracted automatically, and the attenuation coefficient of each pixel was calculated. The emphysema index (volume fraction of the lung ≤ −950 Hounsfield Units (HU)), air trapping index (mean lung density at full expiration/mean lung density at full inspiration), and percentage wall area (wall area percentage of two segmental bronchi, RB1 and LB1 + 2) were measured for quantitative assessment.

Statistical Analysis

Random coefficient models with random slopes and intercepts were used to estimate the Best Linear Unbiased Prediction (BLUP) of annual changes in Kco (mmol/min/mmHg/L per year) for each patient and to establish the effect of patient characteristics on the annual change rate of Kco [7,21]. To investigate the potential relationship between patient characteristics and changes in Kco, and the relationship between these changes and other clinical outcomes, participants were categorized into tertiles based on the degree of annual change in Kco (tertile 1: those with the most rapid decline; tertile 3: those with the slowest decline). Annual changes in FEV1, emphysema index, and SGRQ score were calculated using random coefficient models with random slopes and intercepts for each patient who had two or more longitudinal measures of post-bronchodilator spirometry, CT exams, and SGRQ score, respectively. Characteristics between groups were compared using the t-test and one-way analysis of variance, as appropriate. A negative binomial regression analysis was performed to evaluate the relationship between each group and annual exacerbation rates in terms of incidence rate ratio (IRR), with adjustments for the following factors: age, sex, body mass index, smoking status, pack-years smoked, baseline FEV1, exacerbation history at baseline, and use of inhaled corticosteroid/long-acting β-agonists (ICS/LABA) or inhaled long-acting muscarinic antagonists (LAMA). Mortality between groups was analyzed using Cox proportional hazards modelling, with adjustments for the same covariates listed above. The 95% confidence intervals (CIs) were calculated, and p < 0.05 was considered to indicate statistical significance. All analyses were performed using IBM SPSS Statistics 25.0 (IBM Corp., Armonk, NY, USA) and Stata version 14.2 (StataCorp, College Station, TX, USA).
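To make the derived quantities above concrete, here is a minimal sketch of how Kco, the CT emphysema index, and the tertile assignment could be computed. It is illustrative only: the names are hypothetical, and the per-patient least-squares slope stands in for the model-based BLUP slopes actually used in the study.

```python
import numpy as np

def kco(dlco_hb_adjusted: float, va_litres: float) -> float:
    """Kco (mmol/min/mmHg/L): hemoglobin-adjusted DLco divided by VA."""
    return dlco_hb_adjusted / va_litres

def emphysema_index(hu_voxels: np.ndarray) -> float:
    """Volume fraction (%) of lung voxels at or below -950 HU."""
    return 100.0 * np.mean(hu_voxels <= -950)

def annual_kco_slope(years: np.ndarray, kco_values: np.ndarray) -> float:
    """Simple least-squares annual change in Kco for one patient
    (a stand-in for the random coefficient model's BLUP slope)."""
    return np.polyfit(years, kco_values, 1)[0]

def kco_tertiles(slopes: np.ndarray) -> np.ndarray:
    """Tertile 1 = most rapid decline (most negative slope),
    tertile 3 = slowest decline."""
    cuts = np.quantile(slopes, [1 / 3, 2 / 3])
    return 1 + np.searchsorted(cuts, slopes)
```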
Patient Characteristics and the Rate of Change in Kco

Of the 462 patients with COPD from the KOLD cohort, 211 participants were eligible for analyses (Figure 1). The mean follow-up period of the enrolled subjects was 6.1 ± 2.7 years. Demographic data and clinical characteristics of the participants across tertiles of annual change of Kco are listed in Table 1. The mean decline rate of Kco per year was 0.07 ± 0.02 for tertile 1 (most rapid decliners), 0.04 ± 0.00 for tertile 2, and 0.01 ± 0.02 for tertile 3 (slowest decliners), respectively (p < 0.001). There were no significant differences in BMI, smoking status and amount, or quality of life and symptom measurements among the three groups at baseline. Spirometry results revealed that baseline FEV1/FVC was positively related to the annual decline rate of Kco, showing lower baseline FEV1/FVC (%) for rapid decliners of Kco (43.8% ± 9.7% for tertile 1, 46.4% ± 10.5% for tertile 2, 49.2% ± 10.4% for tertile 3, p = 0.008). Measurements of CT indices showed that patients with a higher emphysema index at baseline showed a more rapid decline in Kco over time (27.7 ± 14.8 for tertile 1, 22.4 ± 16.1 for tertile 2, 18.1 ± 14.5 for tertile 3, p = 0.001).

Comparison of Changes in FEV1, Emphysema Index, and SGRQ Score According to Changes in Kco Over Time

All study participants had two or more annual measures of post-bronchodilator FEV1. Figure 3A shows the annual change in FEV1 over time among the three groups depending on the degree of annual decline in Kco.
Compared to the group with the most rapid decline in Kco (tertile 1), those with the slowest decline (tertile 3) showed a significantly lower decline rate in FEV1 over time (27.1 ± 30.2 mL/yr vs. 16.3 ± 21.9 mL/yr, p = 0.017). The changes in emphysema index were compared among the 198 participants who had two or more longitudinal CT exams. The annual changes in emphysema index over time in subjects, classified by the degree of annual decline in Kco, are shown in Figure 3B. The annual change in emphysema index did not significantly differ between groups. A total of 210 patients had two or more longitudinal results of SGRQ score. Figure 3C shows the annual change in SGRQ score over time among the three groups. Whereas the group with the most rapid decline in Kco (tertile 1) showed an increase of 0.32 ± 1.50 in SGRQ score, those with the slowest decline (tertile 3) showed a decrease of 0.21 ± 1.30 in SGRQ score (p = 0.026). Figure 4 shows the comparison of IRRs of acute exacerbation among tertiles classified by changes in Kco over time. Compared to patients who showed the most rapid decline in Kco over time (tertile 1), patients with the slowest decline rate in Kco (tertile 3) had a significantly lower incidence of acute exacerbation (IRR = 0.66, 95% CI = 0.44-0.99, p = 0.045). The trend of decreasing incidence rates from tertile 1 to tertile 3 was also significant (IRR = 0.81, 95% CI = 0.66-0.99, p = 0.042). Figure 5 shows the comparison of mortality risk according to groups classified by the degree of Kco decline over time. The risk of all-cause mortality did not significantly differ between groups.
Footnotes: All statistical analyses were adjusted for age, sex, body mass index, smoking status, pack-years smoked, baseline post-bronchodilator FEV1, exacerbation history at baseline, and use of inhaled corticosteroid/long-acting β-agonists (ICS/LABA) or inhaled long-acting muscarinic antagonists (LAMA). * p-value < 0.05.

Discussion

To the best of our knowledge, this is the first study investigating changes in Kco and their association with outcomes of COPD over a long period. This study demonstrated that most patients with COPD experienced an overall decline in Kco during a mean of 6.1 years. The rate of annual change in Kco varied substantially among the patients. Patients with the highest decline rate in Kco (tertile 1) showed the lowest FEV1/FVC and the highest emphysema index at baseline. A patient with COPD who had a greater decline in Kco also showed a greater decline rate in FEV1 and a higher rate of acute exacerbations. However, we did not find the decline rate in Kco to be significantly associated with the change in emphysema index and the risk of all-cause mortality. Traditionally, the decline in FEV1 has been widely accepted as one of the most important outcome measures reflecting disease progression in COPD [5,6]. However, there are concerns that FEV1 alone does not adequately reflect disease severity and various phenotypes [8,22,23]. Inconsistent results from previous studies evaluating the clinical significance of FEV1 in COPD support this view.
For example, evidence from many prior studies shows that a low initial FEV1 is a predictor of increased risk of exacerbation and mortality [24][25][26][27][28]. In contrast, some studies revealed that a high initial FEV1 is associated with a more rapid decline in FEV1 over time [23,29]. Although COPD is currently defined on the basis of the degree of airflow limitation, the decline rate in FEV1 does not correlate well with health status and important clinical outcomes such as exacerbations and mortality [9]. Moreover, it is also known that a substantial proportion of patients with COPD do not show a decline in FEV1. In the ECLIPSE study, approximately 15% of study participants showed a positive annual change in FEV1 [5]. A study from the Hokkaido COPD cohort revealed that approximately one fourth of the patients with COPD did not experience a significant decline in FEV1 [7]. In the BODE cohort, only 18% of those enrolled revealed a significant decline in FEV1 [23]. Accordingly, recent studies have attempted to investigate changes in other important features such as the progression of emphysema or hyperinflation and COPD-related outcomes [22,30,31]. However, limited data exist on the relationship between progression of such indices and other outcomes. Similarly, there are discrepancies around the factors that are shown to predict the various clinical course of COPD [30,32]. Intriguingly, the results from our study showed that most (92.9%) patients with COPD experienced a decline in Kco over time, which is different from the trajectories of FEV1. The decline in Kco was associated with baseline disease severity as well as subsequent outcomes. Interestingly, patients with a lower initial FEV1/FVC and a higher emphysema index experienced a more rapid decline in Kco over time. However, baseline FEV1 was not significantly associated with the decline rate of Kco. This could be explained by the better correlation between emphysema severity and FEV1/FVC than the correlation between emphysema severity and FEV1 reported from previous studies [33,34]. Our results indicate that the decline in Kco is mainly affected by the degree of baseline parenchymal destruction and emphysema rather than the degree of airway obstruction. On the other hand, a more rapid decline in Kco was also associated with a more rapid decline in FEV1, which is known to be an indicator of disease progression. A more rapid decline in Kco was also associated with an increase in SGRQ score, which reflects the grade of symptoms and quality of life in COPD patients, with higher scores indicating more limitations. In addition, a rapid decline in Kco was also related to a higher risk of exacerbations, which is an important outcome of COPD. These findings suggest that the decline in Kco can accurately reflect the features and prognosis of COPD. Measurement of Kco with the single-breath method is considered to reflect changes in functional lung volume and impairment in gas transport across the alveolar-capillary membrane. Thus, Kco indicates the degree of parenchymal destruction, reduced alveolar surface, and loss of pulmonary capillary density in patients with COPD [24]. Therefore, a reduction in Kco would reflect progression in alveolar destruction and emphysema, which are important phenotypes of COPD [11,35]. 
The good correlation observed between Kco and emphysema index, as quantified by CT at baseline in our study, supports the close pathophysiological relation between the two indices for reflecting disease status and is in accordance with the findings of previous studies [36,37]. However, it is known that neither measures from CT scans nor Kco are perfect predictors of emphysema severity on a pathologic basis, and thus they should be regarded as complementary measurements [38]. In our study, the relationship between the decline in Kco and changes in emphysema measured by CT was not observed. The discrepancy between the change in Kco and the change in emphysema index on CT could be explained by the fact that changes in Kco can be affected by various factors in addition to emphysema. First, a Kco decline is in part related to a gradual reduction in alveolar-capillary density along with decreased pulmonary capillary blood volume, which are the main determinants of Kco in patients with COPD [39][40][41]. Second, changes in Kco can reflect the changes that precede visible emphysema such as bronchiolitis and injury of the terminal airspace that result in dysfunction of the distal gas exchange units [42]. Third, an increase in pulmonary venous pressure such as in pulmonary edema or left heart failure, which are common comorbidities that appear in the clinical course of COPD, can also result in Kco reduction [11]. These pathophysiology-based explanations could also help readers to understand the significant association between the decline in Kco and worse prognosis in patients with COPD, as shown in our study. A decline in Kco might provide more information on the disease progression in COPD than the emphysema index. Thus, we carefully suggest that decline in Kco should be closely monitored in clinical practice and should also be considered as a useful intermediate or outcome measure in further studies on COPD. Our study has limitations. First, the majority of the included subjects were male, and the findings may not be generalized to female patients with COPD since the manifestations of the disease may differ by gender [43]. This biased gender distribution would be due to the marked difference in prevalence of smoking between men and women in South Korea [44]. Second, concerns regarding the possibility of the inclusion of patients with combined pulmonary fibrosis and emphysema (CPFE), which would accelerate the decline of Kco due to the interstitial lung disease portion, and asthma, in which Kco would probably not decline, can be raised about our study population. However, the inclusion of CPFE and pure asthma patients would have been minimized owing to our study design. The original KOLD cohort initially excluded patients that had respiratory diseases other than obstructive lung disease, including interstitial lung disease, upon recruitment [16]. Moreover, it is reported that the prevalence rate of CPFE among COPD patients is very low [45]. For asthma, although the original KOLD cohort did not use smoking history as an inclusion criterion to allow the inclusion of asthma patients in the whole cohort, we have set a separate inclusion criterion of positive smoking history over 10 pack-years to evaluate definite COPD patients from the original cohort with exclusion of pure asthma patients. In our study, 26 (12.3%) of the 211 participants showed a positive bronchodilator reversibility. However, it is known that not only asthma but also COPD patients can show positive bronchodilator reversibility [46]. 
Although, due to a lack of data, we could not precisely report how many participants in our study would be classified as asthma-COPD overlap, reports from previous studies suggest that only a small proportion of COPD patients with positive bronchodilator reversibility are expected to have asthma-COPD overlap [47,48]. Third, our study did not fully evaluate all biomarkers that could possibly be associated with the decline rate in Kco. Considering that genetic determinants and circulating biomarkers of progression in COPD are an important area of research, additional studies will be needed to search for potentially related biochemical predictors [49]. The main strength of our study is the well-designed prospective cohort with a stringent diagnosis of COPD, including patients at all stages of severity, as well as the long observation period. KOLD is purely an observational study, and the observed changes are likely to represent disease-related changes in patients who were properly managed. In addition, strict records of demographic data and the standardized methodology used for evaluating lung function and imaging variables support the validity of our results.

Conclusions

In conclusion, measures of Kco declined over time in most patients with COPD, and the decline was greater in patients with more severe airflow limitation and emphysema. A decline in Kco was also associated with an increased decline rate in FEV1 and more frequent exacerbations. Thus, Kco decline could be considered an important outcome measure in further clinical studies.
Effect of dietary tannin supplementation on cow milk quality in two different grazing seasons

Extensive farming systems are characterized by seasons with different diet quality throughout the year, as pasture availability is strictly dependent on climatic conditions. A number of problems for cattle may occur in each season. Tannins are natural polyphenolic compounds that can be integrated into cows' diet to overcome these seasonal problems, but little is known about their effect on milk quality according to the season. This study was designed to assess the effects of 150 g/head × day of tannin extract supplementation on proximate composition, urea, colour, cheesemaking aptitude, antioxidant capacity, and fatty acid (FA) profile of cow milk, measured during the wet season (WS) and the dry season (DS) of the Mediterranean climate. In WS, dietary tannins had a marginal effect on milk quality. Conversely, in DS, the milk from cows eating tannins showed 10% lower urea and a slight improvement in antioxidant capacity, measured with FRAP and TEAC assays. Also, tannin extract supplementation in DS reduced the branched-chain FA concentration, the C18:1 t10 to C18:1 t11 ratio, and the rumenic to linoleic acid ratio. The effect of tannins on rumen metabolism was enhanced in the season in which green herbage was not available, probably because of the low protein content and the high acid detergent fibre and lignin contents of the diet. Thus, the integration of tannins in the diet should be adapted to the season. This could have practical implications for a more conscious use of tannin-rich extracts and other tannin sources such as agro-industrial by-products and forages.

Extensive farming systems are characterized by a dietary imbalance throughout the year, as they are strictly dependent on climatic conditions 1 . In particular, the seasonal variations under the Mediterranean climate cause the alternation of periods with different pasture availability, with implications for animal performance and product quality. For instance, dairy cows reared under traditional husbandry systems have higher milk yield, and protein and fat contents, during the green season compared to the dry season 2 . In addition, grazing is reported to increase the contents of vitamins and aromatic compounds 3 , and the proportion of polyunsaturated fatty acids (PUFA) and conjugated linoleic acid 4 , in milk. On the other hand, young fresh herbage may cause an excess of degradable protein in the rumen, with implications for protein metabolism efficiency and nitrogen excretion 5 . To overcome these seasonal problems, farmers can adopt several strategies. For example, tannins are plant polyphenols used in ruminant farming as growth and health promoters. Many forages and agricultural by-products are naturally rich in tannins, especially plant species characterizing marginal areas or dry habitats 6 , but tannins can also be added as a dietary supplement for a better control of dose and quality. Thanks to their antimicrobial and protein-binding activities, tannins are known to affect ruminal biohydrogenation (BH) and N metabolism, with potential positive consequences on milk quality and N emissions 7 . However, the information available in the literature does not clarify if and how the effects of dietary tannins might vary according to the season in extensive farming systems. In a recent study, a different response in in vitro rumen BH and fermentation was observed when tannin extracts were incubated with a green forage or a hay substrate 8 .
Therefore, we hypothesized that a different effect of dietary tannins on cow milk quality could be observed when they are supplemented during the grazing season or during the season in which the diet is based on dry forages. Thus, the present study aimed to assess the effects on milk quality of supplementing tannin extract to dairy cows grazing in two different seasons (spring and summer) under the Mediterranean climate. We chose to work under on-farm conditions to directly test the practical effects of dietary tannin extract. The experimental design of studies focusing on dietary tannins generally provides for sampling at the end of the trial 9-12 . However, an earlier effect of dietary tannins cannot be ruled out. Therefore, the present study was also designed to evaluate the effect of dietary tannins on milk quality from the beginning of the supplementation, throughout the experimental period, to gain a deeper insight into the subject.

Methods

Experimental design, animals and diets. All the experimental procedures were performed in accordance with relevant guidelines and regulations (following the ARRIVE guidelines 13 ). All procedures were approved by the animal welfare committee (OPBA) of the University of Catania (UNCTCLE-0015295). The experimental design is detailed in a previous study 12 . Two experiments were performed in a commercial extensive farm located in an upland area of the Mediterranean island of Sicily, Italy (36° 57′ N, 14° 40′ E; altitude: 670 m). The first experiment was carried out during the wet season (WS), between March and April 2019, and the second one during the dry season (DS), in July 2019. In both experiments, 14 lactating dairy cows (Modicana breed) were divided into two groups (n = 7), namely control (CON) and tannin (TAN), balanced for average milk yield and protein and fat contents recorded in the two days before the beginning of each trial, together with DIM, parity, and BCS, as reported in Menci et al. 12 . The groups were composed of different animals in the WS and DS experiments. In WS, the cows were free to graze on 20 ha of spontaneous pasture. In DS, the cows were free to graze on 20 ha of dry stubble of an annual crop (vetch:oat:barley 40:40:20), and no fresh herbage was available. A detailed description of the site, weather conditions, and pasture botanical composition is reported in Menci et al. 12 . In both WS and DS, the feeding trial lasted 23 days and tannin extract supplementation started at the morning milking of day 0. To ensure correct feeding, the farmer was the only person aware of the treatment group allocation. Blinding was used in the next steps of the experimental process.

Feedstuff sampling and analyses. During both experiments, samples of concentrates, hay, and pasture or dry stubble were collected weekly, vacuum-packed and stored at − 20 °C. The weekly subsamples were then pooled in order to get a representative sample for each feed. Ether extract, CP, and ash were determined according to AOAC methods 16,17 . The analyses of NDF, ADF, and ADL were performed following the method of Van Soest et al. 18 . Total phenolic compounds and total tannins were analysed according to the procedure of Makkar et al. 14 , as modified by Luciano et al. 19 . The fatty acid profile of feeds was determined through a one-step extraction-transesterification with chloroform and sulfuric acid (2% in methanol, vol/vol) as the methylation reagent 20 . Gas-chromatograph (ThermoQuest) equipment and settings were the same as described by Natalello et al. 21 .

Milk sampling and analyses.
Milk sampling and analyses. Milk sampling was performed on the following days of the trial: −2, −1, 1, 2, 3, 4, 5, 8, 11, 15, 18 and 23. Cows were individually milked twice a day (0700 h and 1700 h) with a milking machine (43 kPa vacuum, 60 pulsations/min). The milk of each cow was sampled individually. For each sampling day, two subsamples (250 mL each) were taken from two consecutive milkings: the evening milking and the following morning milking. The evening subsample was stored refrigerated until the next morning. To obtain a representative daily sample, the two subsamples were pooled according to the proportion between the milk amounts recorded at the respective evening and morning milkings. Analyses of proximate composition, somatic cell count (SCC), colour parameters, laboratory cheese yield (LCY), and milk coagulation properties (MCP) were immediately performed on fresh milk samples. The aliquots for the antioxidant capacity assays and FA profile determination were stored at −80 °C. Before freezing, sodium azide was added to the aliquots for FA profile determination, to a final concentration of 0.3 g/L. The LCY and laboratory dry matter cheese yield (LDMCY) were determined according to the method of Hurtaud et al.24, using a commercial liquid calf rennet (105 IMCU/mL, 80% chymosin and 20% pepsin; Biotec Fermenti S.r.l.). The MCP of milk were analysed using a formagraph (Maspres and Foss Italia), following the method of Zannoni and Annibaldi25. The determined parameters were clotting time (the time needed for the beginning of coagulation), firming time (the time needed to reach 20 mm of amplitude on the chart), and curd firmness (i.e., the amplitude of the chart in mm) after 30 min and after two times the clotting time. A detailed procedure for LCY, LDMCY, and MCP is reported in Menci et al.12. The antioxidant capacity of the hydrophilic fraction of milk was assessed by the ferric reducing antioxidant power (FRAP) and Trolox-equivalent antioxidant capacity (TEAC) assays. Milk was pre-treated before the analyses, as follows. Defrosted samples were vortexed thoroughly and 100 µL of milk was transferred into a 1.5-mL tube with 900 µL of water and 200 µL of hexane. After centrifugation at 1500×g for 10 min at 4 °C, two aliquots of 50 µL and 20 µL of the lower phase were transferred into plastic tubes and analysed in duplicate for the FRAP assay and the TEAC assay, respectively. The FRAP assay was performed following the method of Benzie and Strain26, with modifications. A 50:5:5:6 solution of pH 3.6 acetate buffer (300 mM sodium acetate trihydrate in 1.6% acetic acid), 0.01 M TPTZ [2,4,6-tris(2-pyridyl)-s-triazine] in 0.04 M hydrochloric acid, 0.02 M ferric chloride hexahydrate, and distilled water was prepared, and 1650 µL of this solution was added to the samples. After incubation in a water bath at 37 °C for 60 min, the absorbance at 593 nm was read using a double-beam UV/Vis spectrophotometer (UV-1601, Shimadzu Corporation). An external calibration curve was prepared using 1 mM ferrous sulphate heptahydrate, and FRAP values were expressed as mmol of Fe2+ equivalent per L of milk. The TEAC assay was performed according to Re et al.27, with some modifications. A stable radical solution, 1:1 of 14 mM ABTS (2,2′-azinobis-3-ethylbenzothiazoline-6-sulfonic acid) and 4.9 mM potassium persulfate, was incubated in the dark at room temperature for 12–14 h and then diluted to an absorbance of 0.75 at 734 nm. After adding 2 mL of the diluted radical solution, the samples were incubated at 30 °C for 60 min and the absorbance at 734 nm was read using the UV-1601 spectrophotometer. The reduction in absorbance was compared with a blank, and an external seven-point calibration curve was prepared using a 2.5 mM Trolox solution. Results are expressed as mmol of Trolox equivalent per L of milk.
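As an illustration of the calibration step common to both assays, the sketch below converts absorbance readings into FRAP values through an external linear calibration curve. It is only a sketch: the standard concentrations, readings and dilution factor are hypothetical placeholders, not values from this study, and the true dilution factor must be derived from the actual assay volumes.

```python
import numpy as np

# Hypothetical external calibration: absorbance at 593 nm of FeSO4 standards.
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # mmol/L Fe2+
std_abs = np.array([0.02, 0.21, 0.40, 0.61, 0.79, 1.00])

# Linear fit: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def frap_value(sample_abs, dilution_factor):
    """Convert a sample absorbance into mmol Fe2+ equivalents per L of milk.
    dilution_factor collects the milk pre-treatment and reagent volumes
    (hypothetical here; replace with the value implied by the real protocol)."""
    return (sample_abs - intercept) / slope * dilution_factor

# Duplicate readings for one milk sample, averaged before conversion.
print(frap_value(np.mean([0.35, 0.37]), dilution_factor=34.0))
```

The same pattern (calibration fit, inversion, dilution correction) applies to the TEAC assay with Trolox standards in place of ferrous sulphate.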
The FA profile of the experimental milk was determined by gas chromatographic analysis of fatty acid methyl esters, after fat separation according to method B described by Feng et al.28, with some modification. Briefly, the top fat-cake layer of 50-mL milk samples was removed after centrifugation at 13,000×g for 30 min at 4 °C. The fat was then transferred into a 2-mL tube, left to melt at room temperature for 30 min and centrifuged at 19,300×g for 20 min. About 50 mg of the top lipid layer was then transferred into a glass tube for transesterification, following the method described by Christie29, with modifications. Briefly, 1 mL of 0.5 N methanolic sodium methoxide was added, and the samples were vortexed for 3 min. After a 5-min pause, 2 mL of hexane was added, and the samples were vortexed for 30 s. The upper phase was then transferred into a 2-mL vial, a small spoonful of sodium sulphate was added, and the vials were then stored at −20 °C. The gas chromatograph (ThermoQuest) equipment and settings were the same as described by Natalello et al.21. Moreover, the separation of C18:1 t10 and C18:1 t11 was achieved by isothermal analysis at 165 °C.

Calculations and statistics. The reflectance spectrum at wavelengths between 450 and 530 nm was elaborated as done by Priolo et al.30 to calculate the integral value (I450–530). Before statistical analysis, SCC data were log10-transformed (log10 SCC/mL) to normalize their distribution. All data from the WS and DS trials were statistically elaborated separately using the repeated-measures mixed-model ANOVA of IBM SPSS Statistics 21, with the individual milk sample as the statistical unit, using formula (1):

y_ijkl = µ + T_i + D_j + (D × T)_ij + C_k + βX_ik + e_ijkl    (1)

where y_ijkl is the observation, µ is the overall mean, T_i is the fixed effect of treatment (i = 1–2), D_j is the fixed effect of sampling day (j = 1–10), (D × T)_ij is the interaction between diet and sampling day, C_k is the random effect of the cow nested within the treatment (k = 1–7), βX_ik is the covariate adjustment for each cow, and e_ijkl is the residual error. The milk sampled in the two days before the beginning of the trial (i.e., sampling days −2 and −1) was analysed and averaged to constitute the covariate for the statistical elaboration. In addition, the statistical elaboration was adjusted for a covariate composed of DIM. For individual FA, fat content was included as a covariate in the statistical model. When the effect of the covariate had P > 0.100, it was removed from the statistical model. Multiple comparisons among means were performed using Tukey's test, and differences between treatment means were considered significant at P ≤ 0.050 and a trend towards significance at P ≤ 0.100. All the results shown in the tables refer to estimated marginal means.
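For readers working outside SPSS, a model of the form of Eq. (1) can be approximated as sketched below with the Python package statsmodels. This is an approximation, not the authors' code: a random intercept per cow stands in for SPSS's repeated-measures covariance options, and the column and file names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per cow per sampling day, with
# columns cow, treatment (CON/TAN), day (post-treatment sampling day),
# y (the trait analysed, e.g. milk urea) and baseline (mean of days -2/-1).
df = pd.read_csv("milk_long_format.csv")  # placeholder file name

# Fixed effects: treatment, sampling day, their interaction and the baseline
# covariate; a random intercept per cow (groups="cow") captures the repeated
# measures on the same animal.
model = smf.mixedlm("y ~ treatment * C(day) + baseline", data=df, groups="cow")
fit = model.fit(reml=True)
print(fit.summary())
```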
Results

WS experiment. Table 2 shows the results on the proximate composition, physical parameters, and antioxidant capacity of WS milk. Dietary tannins did not affect (P > 0.100) milk yield, milk composition, or colour parameters. Likewise, milk cheesemaking parameters and antioxidant capacity did not differ (P > 0.100) between the two dietary groups. The sampling day effect was significant (P < 0.050) for almost all the parameters analysed. A significant interaction of dietary treatment with sampling day was found for a few FA (P < 0.050), reflecting a different evolution of their concentrations throughout the trial in the two groups. The concentration of C14:1 c9 in CON milk was higher on the 23rd day of the trial than in the first four days, whereas in TAN milk there were no changes over time (P = 0.004). The concentration of C18:1 c11 in TAN milk after two weeks of treatment (i.e., at sampling days 15, 18 and 23) was higher than at the first sampling day, whereas no significant variation was observed in CON milk (P = 0.021). Finally, the n-6 PUFA to n-3 PUFA ratio (n-6/n-3) increased, peaked on the 11th day and then decreased in both experimental groups, but in CON milk the highest value differed only from the first five sampling days, whereas in TAN milk the highest value differed from all observations except that of the fifth day (P = 0.033). In any case, neither the concentrations of C14:1 c9 and C18:1 c11 nor n-6/n-3 differed statistically between CON and TAN milk on any of the sampling days.

Discussion

The significance of the sampling day effect found in this study was due to environmental factors falling outside our experimental design, so it will not be discussed further. Probably, the periodic monitoring of milk throughout the 23 days of the trial, combined with the continuous free ranging of the cows, made our experimental design sensitive to weather variations. For example, in July, during the DS experiment, a temperature leap produced average temperatures ranging between 19.5 and 32 °C.

Milk proximate composition. In the present study, the effect of dietary tannin supplementation on milk yield and its main constituents was not significant, regardless of the season. In the last decade, most scientific articles have reported no improvement in milk yield, or in fat and protein contents, in cows eating different sources of tannin31-33. Henke et al.34 reported an increase in milk fat content in cows supplemented for 21 days with 3% DMI (dry matter intake) of quebracho tannin extract, but the effect was accompanied by a lower milk yield. Indeed, increasing doses of tannin have been reported to have a negative linear effect on the milk protein content of cows35,36. In addition, our results suggest that the lack of effectiveness of dietary tannins is constant from the 1st to the 23rd day of administration in dairy cows. Consistently, Denninger et al.37 did not observe any variation in cow milk yield, or in fat and protein contents, throughout 3 days of dietary Acacia mearnsii (De Wild.) tannin extract supplementation (14.7 g of total tannins per kg of DM). On the other hand, a reduction in MUN (milk urea nitrogen) in the TAN group was expected, as it has been reported in several studies on dairy cows involving different tannin sources, such as quebracho34; bayberry, Acacia mangium Willd. and valonia32; or sainfoin (Onobrychis viciifolia Scop.)38. The reduction in MUN is potentially positive for the environment, as MUN is an indicator of urinary N excretion39. In extensive farming systems, this effect is desirable precisely in the green season, as the high degradable-protein content of herbage maximizes the N emission5.

Table 4. Effect of dietary tannin extract on the physicochemical properties of milk in the dry season experiment. a L* lightness, a* redness, b* yellowness, C* chroma, H* hue angle, I450–530 integral value of the absorbance spectrum from 450 to 530 nm, LCY laboratory cheese yield, LDMCY laboratory dry matter cheese yield, R clotting time, K20 firming time, A30 curd firmness after 30 min, A2R curd firmness after two times R, FRAP ferric reducing antioxidant power, TEAC Trolox-equivalent antioxidant capacity, ECM energy corrected milk, SCC somatic cell count. b CON control group, TAN group receiving 150 g/head × day of tannin extract. c ns P ≥ 0.100; † P < 0.100; * P < 0.050; ** P < 0.010; *** P < 0.001.
The lack of an effect on MUN in the WS experiment may be due to the relatively low dose of tannin supplementation, as in the studies where a significant effect was reported the cows ingested about 3% DMI of tannins32,34,38. Also, the plant species from which the tannins are extracted is a well-known limit to the comparison of studies40. Aguerre et al.41 fed cows a tannin extract very similar to ours at increasing doses, and no evident MUN reduction occurred at supplementations lower than 1.8% DMI. However, a high intake of tannins could have detrimental consequences for animal performance35 and could be economically impractical on a commercial farm. On the contrary, in the DS experiment we observed a consistent reduction in MUN. Probably, the dose of tannin extract we supplemented in the cows' diet was not enough to modulate ruminal N metabolism when the CP intake was as high as it was in the WS experiment. Aguerre et al.35 did not find any interaction between dietary treatment and CP intake on the cows' N partitioning when administering from 0.45% up to 1.8% DMI of a quebracho–chestnut tannin mixture. However, the two dietary CP levels they compared were 15.3% and 16.6% DM, whereas in our study the estimated CP levels were 13.9% and 19.9% DM in the DS and WS experiments, respectively.

Milk cheesemaking aptitude. In the WS experiment, the lack of an effect on milk cheesemaking aptitude was not surprising, considering that the fat, protein and casein contents, and even MUN, were not affected by the dietary tannin extract supplementation. This was also the case in the DS experiment, where the reduction in MUN could have been linked with other parameters related to cheesemaking properties42. In accordance with our findings, Herremans et al.43 concluded in a meta-analysis that dietary tannins do not have any effect on N use efficiency in dairy cattle, except for the reduction of urea emissions. In a previous study from the same experiment, investigating the effect of dietary tannin extract on cheese quality12, we found no effects on cheesemaking aptitude after 23 days of the dietary trial. With the findings of the present study, we can add that this lack of effect is consistent from the 1st to the 23rd day of administration. Kalber et al.44 found that milk from cows eating buckwheat (Fagopyrum esculentum Moench) silage had a shorter clotting time than milk from cows eating chicory (Cichorium intybus L.) or ryegrass (Lolium multiflorum Lam.) silages. The three treatments differed significantly in condensed tannin intake, with 6.1 g/day for the cows eating buckwheat and about 2.2 g/day for the cows eating chicory or ryegrass, but these intake levels seem too low to confidently attribute the observed effect to them. To the best of our knowledge, no other study is available for comparison, and we cannot speculate on whether dietary tannins at doses higher than 1% DMI could exert an effect on the cheesemaking properties of cow milk.
Also, the few scientific articles investigating the effect of dietary tannins on the clotting time of ewe milk are discordant, reporting no effect45 or even longer clotting and firming times46 in ewes eating 1.6% DMI of tannin extracts. In any case, the literature suggests that a plant-specific effect may occur, and results may vary when different tannin sources are administered.

Antioxidant capacity of the milk hydrophilic fraction. In our study, we investigated both the reducing power and the radical scavenging capacity of skimmed milk. Avila et al.36 did not observe an improvement in milk reducing power when the cows' diet was supplemented with 5 up to 20 g/kg DM of A. mearnsii tannin extract. Unlike them, we analysed non-deproteinized milk, so as to also include the antioxidant activity of caseins and whey proteins47. Interestingly, although we observed no variation in the protein and casein contents of milk, the dietary tannin extract supplementation tended to increase both the reducing power and the radical scavenging capacity of defatted milk in the DS experiment. Although the antioxidant activity of polyphenolic compounds is well known48, it is not yet clear how they could improve the antioxidant status of animal products. Probably, tannins had an indirect effect, preserving other antioxidants (e.g., vitamin E, vitamin C) during digestion, or low molecular-weight molecules derived from tannin digestion could have been absorbed in the intestine and therefore exerted their antioxidant activity in milk49. The lack of an effect in the WS experiment could be explained by the already relatively high TEAC and FRAP values. Probably, the diet of the cows in the WS experiment had an antioxidant content high enough to ensure the oxidative stability of milk without the contribution of supplementary tannin bioactivity. Indeed, grazing pasture is commonly reported to increase the content of antioxidants in milk3, and the b* and I450–530 values we found in WS milk indicated a higher content of carotenoids50 compared with DS milk.

Milk fatty acid profile. An effect on the concentrations of microbial and rumen preformed FA in milk is generally expected with dietary tannins, because of their well-known activity against rumen BH51. This contrasts with the results of the WS experiment, where neither the odd- and branched-chain FA nor the C18:1 and C18:2 isomers differed between CON and TAN milk. Probably, as suggested above, the tannin extract supplementation dose used in the present study was not enough to exert an effect on milk quality during the WS experiment. Interestingly, an effect of dietary tannins on some MUFA and de novo FA concentrations right after the beginning of administration had never been observed before. Recently, Denninger et al.37 investigated the effect of A. mearnsii tannin extract supplementation in the first 3 days of administration. They observed a decrease in microbial-derived FA in milk starting from the second day of the trial, indicating a rather rapid effectiveness of dietary tannins against microbial rumen activity. Likewise, in the WS experiment we observed an immediate response of the milk FA profile to the administration of dietary tannins, even if it concerned different FA compared with Denninger et al.37.
As C18:1 c9 can undergo BH in the rumen52, and dietary supplementation of C18:1 c9 is reported to reduce mammary FA synthesis53, we hypothesized that, in the WS experiment, dietary tannins impaired the ruminal metabolism of C18:1 c9, with a consequent increase in C18:1 c9 intestinal flow and a reduction in de novo FA synthesis. Milk C18:1 c9 also results from the activity of mammary Δ9-desaturase, but we found no variation of the desaturation index throughout the WS experiment. However, this effect against rumen BH vanished soon, probably indicating a rapid adaptation of the ruminal microbiota to dietary tannins, as already suggested by Toral et al.54 observing ewe milk. The different conditions of the DS experiment modified the effect of the dietary tannin extract on the milk FA profile, compared with the WS experiment. As recently reviewed by Frutos et al.51, a decrease in bacterial FA concentrations, such as odd-chain fatty acids (OCFA) and BCFA, is often reported in studies investigating the effect of dietary tannins on rumen digesta. Interestingly, in the DS experiment we observed a decrease in both the main iso- and anteiso-FA in TAN milk, whereas the milk OCFA did not vary between dietary treatments. Likewise, Denninger et al.37 reported a decrease in some BCFA in cow milk after A. mearnsii tannin extract feeding and no effect on milk OCFA. As changes in the proportions of bacterial FA in milk likely reflect shifts in the rumen microbial community, and bacterial FA synthesis is species-specific55, the observed effect could be explained by the different sensitivity of rumen microorganisms to tannins. Probably, in the DS experiment, the ruminal microorganisms enriched in BCFA were more sensitive to tannin bioactivity than those enriched in OCFA. Indeed, cellulolytic and amylolytic bacteria are reported to be enriched in BCFA and OCFA, respectively55, and Díaz Carrasco et al.56 documented the ability of tannins to modify the cellulolytic:amylolytic bacteria balance in the rumen. In the DS experiment, the effect of dietary tannin extract on t10/t11 (the C18:1 t10 to C18:1 t11 ratio) and RA/LA (the rumenic to linoleic acid ratio) seems to indicate an impaired rumen BH. Indeed, C18:1 t10, C18:1 t11 and C18:2 c9t11 are not dietary FA; they are formed in the rumen by microbial activity57. The reduced RA/LA may indicate a slowdown in the first steps of BH, where C18:2 c9c12 is isomerized to C18:2 c9t1158. A second source of milk C18:2 c9t11 is the mammary Δ9-desaturation of C18:1 t1159. However, an effect of dietary tannin extract on mammary Δ9-desaturase may be ruled out, as the desaturation index did not differ between CON and TAN milk in the DS experiment. Unfortunately, the extent of the modifications induced by dietary tannins on the FA profile in our experiments is hardly relevant in terms of product healthiness. Our study suggests a different effect of dietary tannins on the FA profile of cow milk depending on the grazing season in the Mediterranean. This phenomenon is likely related to the different diets in the WS and DS experiments, concerning the availability of green herbage, the CP level, the forage-to-concentrate ratio, the different amount and composition of biohydrogenation precursors, or a combination of all these aspects. Similarly, Menci et al.8 found two different tannin extracts (quebracho vs chestnut + quebracho) to reduce the iso-FA concentration and RA/LA in rumen digesta fermenting a hay substrate, whereas none of these effects was observed when fermenting the corresponding green herbage.
Different diets are reported to select different microbiota populations in the rumen60, and specific microorganisms can show a different sensitivity to tannin bioactivity56. Therefore, the microbiota selected by the diet of the DS experiment could have been more sensitive to dietary tannin supplementation. A second hypothesis is that the highly nutritious diet of the WS experiment made the rumen microbiota resilient, whereas in the DS experiment the microbiota could not adapt to the tannin extract supplementation. Indeed, the variations observed in the FA profile of DS milk were consistent throughout the whole trial. In any case, as different rumen microorganisms show a different sensitivity to different kinds of tannin61, results may change with the source of tannin.

Conclusions

The dietary supplementation of tannin extract at the dose of 150 g/day in dairy cows showed different effects on milk quality according to the season under the Mediterranean climate. No effect on milk quality was observed in WS, whereas in DS the milk from cows eating tannins showed a lower BCFA concentration, a lower C18:1 t10 to C18:1 t11 ratio, and a lower rumenic to linoleic acid ratio. Also, tannin extract supplementation in DS reduced MUN and slightly improved the antioxidant capacity of milk. Thus, tannin supplementation in grazing dairy cows was more effective during the dry season, when the diet is low in CP and rich in ADF and ADL. The integration of tannins into the diet should probably be adapted to the CP or fibre intakes, or both, if the purpose is to modify milk quality. This could have practical implications for a more conscious use of tannin-rich extracts and also of other tannin sources such as agro-industrial by-products and forages (especially those from dry areas). Further studies are needed to investigate the effects of longer supplementations or of different doses and tannin sources. Finally, a deeper knowledge of the sensitivity of the rumen microbiota to tannins could help to explain the variability in the effectiveness of dietary tannins.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Performance of a clinical risk prediction model for inhibitor formation in severe haemophilia A

Abstract

Background. There is a need to identify patients with haemophilia who have a very low or high risk of developing inhibitors. These patients could be candidates for personalized treatment strategies.

Aims. The aim of this study was to externally validate a previously published prediction model for inhibitor development and to develop a new prediction model that incorporates novel predictors.

Methods. The population consisted of 251 previously untreated or minimally treated patients with severe haemophilia A enrolled in the SIPPET study. The outcome was inhibitor formation. Model discrimination was measured using the C-statistic, and model calibration was assessed with a calibration plot. The new model was internally validated using bootstrap resampling.

Results. Firstly, the previously published prediction model was validated. It consisted of three variables: family history of inhibitor development, F8 gene mutation and intensity of first treatment with factor VIII (FVIII). The C-statistic was 0.53 (95% CI: 0.46–0.60), and calibration was limited. Furthermore, a new prediction model was developed that consisted of four predictors: F8 gene mutation, intensity of first treatment with FVIII, the presence of factor VIII non-neutralizing antibodies before treatment initiation and, lastly, FVIII product type (recombinant vs. plasma-derived). The C-statistic was 0.66 (95% CI: 0.57–0.75), and calibration was moderate. Using a model cut-off point of 10%, the positive and negative predictive values were 0.22 and 0.95, respectively.

Conclusion. The performance of all prediction models was limited. However, the new model with all predictors may be useful for identifying a small number of patients with a low risk of inhibitor formation.

INTRODUCTION

A major treatment complication in haemophilia A is the formation of neutralizing antibodies against factor VIII (also called inhibitors), which render subsequent treatment with factor VIII (FVIII) ineffective and are associated with increased morbidity and mortality.1 There is a need to identify patients with a very low or high risk of developing inhibitors, as these patients could be candidates for personalized treatment strategies.2 Two published prediction models for inhibitor formation have been suggested for clinical use.3,4 The second, more recent model4 is the one evaluated in this study. New risk factors for inhibitor formation have been identified using the SIPPET study cohort.5-7 Firstly, the use of recombinant FVIII (rFVIII) was associated with a higher inhibitor risk than plasma-derived FVIII (pdFVIII) (hazard ratio: 1.87, 95% CI: 1.17–2.96).5 Furthermore, the presence of non-neutralizing anti-FVIII antibodies (NNAs) before FVIII exposure was associated with an increased risk of inhibitor formation in previously untreated and minimally treated patients with severe haemophilia A (HR: 1.83, 95% CI: 0.84–3.99).7 Studies have also shown that NNAs are detectable in non-haemophilic subjects (most of whom were never exposed to blood components such as fresh-frozen plasma).8 This suggests that some autoreactivity against endogenous FVIII is relatively common.9 Lastly, a genetic analysis showed that inhibitor prediction based on the F8 mutation could be improved by also accounting for FVIII antigen production.6 A new model incorporating these new data could be useful for clinical practice. The first aim of this study was to externally validate the latest published prediction model for inhibitor development.4
The second aim was to develop a new clinical prediction model that incorporates novel predictors.

METHODS

Study design and population. Data from the SIPPET study were used.5 The SIPPET study enrolled 251 patients with severe haemophilia A (FVIII:C < 1%) without previous treatment with FVIII or with only minimal treatment with blood components. Patients were followed up for 50 exposure days (EDs) or 3 years of observation, whichever came first. The cumulative number of EDs to FVIII was used as the timescale.

Validation of the 2015 model. The outcome, inhibitor formation, was defined as any inhibitor titre higher than 0.4 Bethesda Units (BU), measured using the Bethesda assay with the Nijmegen modification. The 2015 prediction model consisted of three predictors: family history of inhibitors, F8 gene mutation and intensity of the first treatment with FVIII.4 Family history of inhibitors was analysed as a categorical variable (not applicable/negative, positive, unknown); it was classified as 'not applicable' when the patient had a negative family history of haemophilia. Intensity of first treatment was a continuous variable defined as the product of the number of consecutive EDs at first treatment (ranging from the first ED up to the 10th consecutive ED) and the mean daily dose of FVIII in IU/kg used during this period, the latter expressed as a fraction of 50 IU/kg. (As an example, an individual treated for 5 consecutive EDs with a mean daily dose of 75 IU/kg would have a value of 5 EDs × (75 IU/kg / 50 IU/kg) = 5 × 1.5 = 7.5.)

Development of the new model. To improve clinical applicability, high-titre inhibitor formation, defined as a peak inhibitor titre of at least 5 Bethesda Units, was used as the outcome. On the basis of the literature and subject-matter knowledge, four predictors were considered: intensity of the first treatment with FVIII, F8 gene mutation, NNA status before treatment initiation and treatment with pdFVIII or rFVIII. Treatment intensity was defined as being treated for at least 2 consecutive EDs at first treatment. For the F8 gene mutation, we used the classification by Spena et al.6 In this classification, in silico predicted null mutations were reclassified as non-null if there were detectable FVIII antigen levels. Missing values were encoded as a separate category labelled 'unknown'. NNA status before treatment initiation was analysed as a dichotomous variable (negative or positive), according to the cut-off values of the NNA assay (≥ 1.64 mg/ml of specific anti-FVIII IgG).7 Treatment type was defined as treatment with either plasma-derived FVIII (pdFVIII) or recombinant FVIII (rFVIII).5

Statistical analysis: validation of the 2015 model. The predicted risk of inhibitor formation was calculated for each individual in the SIPPET study, using the formula described in the original paper.4

Statistical analysis: development of the new model. Three different models were fitted using logistic regression. The first two models were developed to be used before any FVIII exposure; the first model contained only F8 gene mutation as a predictor, and the second model also included NNA status. The third model was developed to predict inhibitor risk just after the first treatment episode and consisted of F8 gene mutation, NNA status, treatment intensity and treatment type. Variable selection was based on the strength of the predictors and subject-matter knowledge. Family history was difficult to ascertain correctly and was therefore not included as a predictor. For treatment intensity, we chose 2 EDs instead of 5 EDs because the aim was to develop a model that could be applied almost immediately after the start of treatment. Consequently, patients with an inhibitor event in the first 2 EDs were excluded from the analysis of the full model.

Internal validation of the new model using a bootstrapping procedure. To correct for overfitting, a uniform shrinkage factor was estimated using the bootstrap resampling method.10 Next, the model coefficients were multiplied by the shrinkage factor and the model constant was re-estimated with the shrunken coefficients.
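One common implementation of this kind of bootstrap shrinkage estimate is sketched below (fit the model on a bootstrap sample, evaluate its linear predictor on the original data, and average the resulting calibration slopes). The study cites its own reference 10 for the exact procedure, so this is an illustration under our assumptions, not the study's code; X and y are assumed to be NumPy arrays of dummy-coded predictors and 0/1 outcomes.

```python
import numpy as np
import statsmodels.api as sm

def bootstrap_shrinkage(X, y, n_boot=500, seed=1):
    """Uniform shrinkage factor: fit the logistic model on each bootstrap
    sample, evaluate its linear predictor on the original data, and take
    the average calibration slope (outcome regressed on linear predictor)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xc = sm.add_constant(X)          # X: 2-D array of predictors, y: 0/1 array
    slopes = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)           # bootstrap resample
        fit_b = sm.Logit(y[idx], Xc[idx]).fit(disp=0)
        lp = Xc @ fit_b.params                     # linear predictor, original data
        cal = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
        slopes.append(cal.params[1])               # calibration slope
    return float(np.mean(slopes))

# The shrunken coefficients are then shrinkage * beta, with the intercept
# re-estimated so that the average predicted risk matches the observed rate.
```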
Handling missing values. Missing values for any of the predictors or the outcome variable in the SIPPET data set were imputed using multivariate imputation by chained equations. The model coefficients of each imputed data set, their C-statistics and the corresponding standard errors were pooled using Rubin's rules to obtain the final estimates.11 Internal validation using bootstrap resampling was performed within each imputed data set. The results (i.e., the calibration intercept, slope, shrinkage factor and optimism-corrected C-statistic) were also pooled using Rubin's rules. The calibration plot was constructed by combining the imputed data sets and fitting the shrunken model to this pooled data set.

Statistical packages. The data were prepared for analysis using IBM SPSS Statistics version 25. The analysis was performed using R version 3.1.0.

RESULTS

General information. Characteristics of the validation cohort are shown in Table 1. Overall, 76/251 patients developed an inhibitor, and 50 of these 76 had a high-titre inhibitor. Furthermore, 75% of patients had an F8 null mutation, 9.6% had a positive family history, 7.6% were NNA-positive, and 16.3% were treated for at least 2 consecutive days at first treatment. NNA status was unknown in 14 patients, and the F8 gene mutation was unknown in 20 patients.

Validation of the 2015 model. Figure 1A shows the calibration plot of the risk score, as applied to the validation cohort. Overall calibration was limited, as the model highly overpredicts in the higher risk ranges.

Development of the new prediction models: associations between predictors and inhibitor formation. Table 3 shows the unadjusted and adjusted associations (for the full model) between each predictor and high-titre inhibitor formation. In the multivariable model, the strongest predictors were F8 gene mutation type (odds ratio: 3.94) and NNA status (odds ratio: 3.38).

Development of the prediction models before exposure to FVIII products. The C-statistic of the model with only F8 gene mutation was 0.59 (95% CI: 0.54–0.64). The C-statistic of the model with F8 gene mutation and NNA status at treatment initiation was 0.61 (95% CI: 0.52–0.71).

Development of the full prediction model. The C-statistic of the full model was 0.66 (95% CI: 0.57–0.75). The shrunken regression coefficients of the final logistic model are shown in Table 4. Figure 1B shows the optimism-corrected calibration plot of the new model. Overall calibration was low to moderate, as the model underpredicted in the higher risk ranges. The predicted inhibitor risk for an individual in the SIPPET cohort ranged from 6% to 62%. Table 5 shows the incidence of inhibitor development across different categories of predicted risk. Table 6 shows the sensitivity, specificity, and positive and negative predictive values of the model for different model cut-off points. The positive predictive value was very low when using the low and medium cut-off values, and slightly higher but still low for the high cut-off value. Conversely, the negative predictive value was high for all three model cut-off points.
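For reference, the measures in Table 6 follow mechanically from a 2 × 2 cross-tabulation of predicted risk (above vs. below a cut-off) against observed high-titre inhibitor status. A minimal sketch follows; the counts are hypothetical, chosen only to be consistent with the reported NPV of 0.95 and PPV of 0.22 at the 10% cut-off, and do not reproduce the study's actual tables.

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    predicted positive/negative vs. observed inhibitor yes/no."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts at the 10% cut-off (n = 250; the paper reports 41
# patients below this cut-off, of whom two developed an inhibitor).
print(classification_metrics(tp=46, fp=163, fn=2, tn=39))
# -> PPV ~= 0.22 and NPV ~= 0.95, matching the reported values.
```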
DISCUSSION

Main findings. A published inhibitor prediction model showed limited performance in our cohort. Furthermore, the performance of a new model that included novel predictors was also limited.

External validation of the 2015 model. The limited performance of the old model may partly be explained by differences in patient characteristics between the development and validation cohorts. Curiously, a positive family history of inhibitors was more common among non-inhibitor patients in our cohort (which reduced model performance). Family history was often difficult to ascertain, which could explain the aforementioned results. However, we were able to include the F8 gene mutation in our model (which explains a large part of familial inhibitor risk). Similarly, mean treatment intensity (which is consistently reported to be associated with inhibitor development) was also higher in non-inhibitor patients.

[Notes to Table 4: b For the multivariable model, missing values were imputed using multiple imputation; one patient with an inhibitor event in the first 2 EDs was excluded from the analysis, so the total sample size for this analysis was 250. c One missing value, due to one patient being excluded from the analysis because of an inhibitor event in the first 2 EDs of treatment. The formula is then as follows: risk = 1 / (1 + exp(−(linear predictor))). As an example, the risk of inhibitor formation within 50 EDs for a patient treated with plasma-derived FVIII, who was positive at baseline for NNAs, who was treated for at least 2 consecutive EDs at treatment initiation, and whose F8 mutation is unknown, is 1 / (1 + exp(−(−2.71 + 0 × 0.29 + 1 × 0.95 + 1 × 0.90 + 0 × 1.07 + 1 × 0.25))) = 35%.]

Furthermore, the 2015 model used a stepwise predictor selection procedure, which is known to produce overfitted models.12 However, that study partially corrected for this by shrinking the final model coefficients through bootstrapping. Lastly, the poor calibration in the higher risk range (over 50%) was mostly due to the very low number of patients in this range. Overall, whether the 2015 model underperforms in general, or is merely poorly generalizable to the type of patients enrolled in the SIPPET cohort, remains an open question.

Table 5. Incidence of inhibitor development across different risk categories.

Development of the pre-FVIII-exposure prediction models. The two simple prediction models were chosen to contain only predictors measurable before FVIII exposure. Both models performed poorly. To construct an accurate pre-FVIII-exposure prediction model, additional predictors that can be measured before treatment are necessary (e.g., certain gene variants).

Development of the full prediction model. The full model performed similarly to the 2015 model. The model included treatment intensity, which is consistently associated with inhibitor development.13 However, our definition of treatment intensity (two consecutive EDs) has some limitations, as the second dose might have been a prophylactic dose. Also, instead of receiving one dose, some patients may have received two half-doses over 2 days. The association between FVIII product type and inhibitor development was not statistically significant, owing to a lack of power caused by not having enough high-titre inhibitor events. This predictor was still included on the basis of previous literature and subject-matter knowledge, as models with predictors selected solely on significance levels perform poorly when externally validated.14
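Taking the coefficients quoted in the Table 4 note above at face value, the published full model can be written as a small risk calculator. This is our illustrative transcription (the variable names are ours), not code from the study; the worked 35% example from the note is reproduced at the end.

```python
import math

# Shrunken coefficients as quoted in the note to Table 4 (intercept -2.71).
COEF = {
    "rFVIII": 0.29,              # 1 = recombinant, 0 = plasma-derived FVIII
    "NNA_positive": 0.95,        # non-neutralizing anti-FVIII antibodies at baseline
    "intensive_first_tx": 0.90,  # >= 2 consecutive EDs at first treatment
    "F8_null": 1.07,             # F8 null mutation (reference: non-null)
    "F8_unknown": 0.25,          # F8 mutation unknown
}
INTERCEPT = -2.71

def inhibitor_risk(**predictors):
    """Predicted risk of high-titre inhibitor formation within 50 EDs."""
    lp = INTERCEPT + sum(COEF[k] * v for k, v in predictors.items())
    return 1.0 / (1.0 + math.exp(-lp))

# Worked example from the note: pdFVIII, NNA-positive, intensive first
# treatment, F8 mutation unknown.
risk = inhibitor_risk(rFVIII=0, NNA_positive=1, intensive_first_tx=1,
                      F8_null=0, F8_unknown=1)
print(f"{risk:.0%}")  # 35%
```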
However, model performance was still very limited. The maximum predicted inhibitor risk was 62% and, except for one outlier, no patient had a predicted inhibitor risk over 40%. Therefore, prediction in the higher risk ranges was not possible. However, calibration in the lower risk ranges was acceptable, and the negative predictive value of the model using the lowest model cut-off of 10% was 95% (i.e., of the 41 patients with a predicted risk below 10%, only two developed an inhibitor). Therefore, we can conclude that the model is useful for identifying low-risk patients. However, only 16% of patients fell into this low-risk category. These were all patients with an F8 non-null mutation or an unknown F8 mutation, no detectable NNAs before treatment initiation, and no intensive treatment at first treatment. The model did not include genetic risk factors other than the F8 gene mutation, and this could have affected performance. Furthermore, we found no association between family history and inhibitor development in the SIPPET cohort. This result was probably biased, as family history was difficult to ascertain correctly in our cohort (which mostly consisted of patients from the developing world). Therefore, we decided to exclude this predictor from the model.

Implications for clinical practice. These results could be the first step in developing a model for this aim. However, these tools should not be used in clinical practice to select high-risk patients, as all the models perform very poorly in this regard. For this reason, the new prediction model was not converted into a tool that could be used by clinicians (e.g., a nomogram or a score chart).

Implications for future research. All the prediction models incorporated the most important pre-treatment risk factors. Even so, the performance of these models was still unsatisfactory. However, these models did not incorporate time-varying predictors (e.g., the cumulative number of EDs, FVIII exposure frequency, on-demand vs. prophylactic treatment, or exposure to FVIII during trauma or surgery). For example, much information could be gained by measuring the antibody response over time,15 as was done in a recent study by Reipert et al.16 Interestingly, this study found that, during treatment, the appearance of IgG1 antibodies, followed by IgG3 antibodies, was a strong biomarker of future inhibitor development. A different approach would be to incorporate genomic information at baseline, such as HLA class II haplotypes17,18 and/or variants of other genes previously associated with inhibitor formation (e.g., IL-10 and CTLA-4).19

CONCLUSION

The performance of the old and new prediction models for inhibitor formation after external validation is limited. However, the new model with all predictors may be useful for identifying patients with a low risk of inhibitor formation. Further research is needed to obtain more precise prediction models for clinical use.

ACKNOWLEDGEMENTS

SH, RB, SG, PMM, CV, IG, AE, ME, VR, PE, MK, FRR and FP jointly designed the study, performed the research and contributed to writing the paper. SH analysed the data. We thank the SIPPET investigators for their help with patient recruitment and data collection (the full list of investigators is provided in the Appendix S1).
This work was partly supported by a research grant from the Leiden University Fund / Van Trigt Fund and by Ricerca Corrente of the Italian Ministry of Health.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.
Premenopausal Women With a Diagnosis of Endometriosis Have a Significantly Higher Prevalence of a Diagnosis or Symptoms Suggestive of Restless Leg Syndrome: A Prospective Cross-Sectional Questionnaire Study

Background. Endometriosis and restless leg syndrome (RLS) are both chronic conditions that can negatively affect a woman's quality of life. A higher prevalence of RLS is seen in women, and particularly in those who are pregnant, suggesting a possible ovarian hormonal influence. Endometriosis is a common (affecting 1 in 10 women) estrogen-driven gynecological condition, and the prevalence of RLS in women with symptoms or a diagnosis of endometriosis is unknown.

Methods. A prospective, cross-sectional, observational, self-completed questionnaire study was distributed to 650 pre-menopausal women attending the gynecological department at Liverpool Women's Hospital over a period of 4 months. 584 questionnaires were returned, and 465 completed questionnaires were included in the final dataset. Data on RLS-associated (International Restless Leg Syndrome Study Group rating scale) and endometriosis-associated (modified British Society for Gynaecological Endoscopy pelvic pain questionnaire) symptoms were collected.

Results. Women who reported a prior surgical diagnosis of endometriosis had a greater risk of having a prior formal diagnosis of RLS (OR 4.82, 95% CI 1.66–14.02) and of suffering RLS symptoms (OR 2.13, 95% CI 1.34–3.39) compared with those without such a diagnosis. When women with either a formal surgical diagnosis or symptoms associated with endometriosis were grouped together, they also had a significantly increased risk of having either a formal diagnosis or symptoms suggestive of RLS (OR 2.49, 95% CI 1.30–3.64). In women suffering from endometriosis-associated symptoms, the cumulative endometriosis-associated symptom scores showed a modest positive correlation with RLS severity scores (r = 0.42, 95% CI 0.25–0.57).

Conclusions. This is the first study to highlight an association between the symptoms of the two chronic conditions RLS and endometriosis, showing that women with a reported prior surgical diagnosis of, or symptoms suggestive of, endometriosis have a significantly higher prevalence of a prior formal diagnosis or of symptoms suggestive of RLS. These data will help in facilitating the discovery of novel therapeutic targets relevant to both conditions. The simultaneous treatment of these conditions could potentially lead to an improvement in the overall quality of life of these women.

INTRODUCTION

Endometriosis is a common, chronic, estrogen-driven condition, occurring almost exclusively in women of reproductive age. It is defined as the growth of endometrium-like tissue beyond its usual place, the uterine cavity (1). The prevalence among women of reproductive age is reported to be 10%, while 25–50% of infertile women are reported to have a surgical diagnosis of endometriosis at laparoscopy (2, 3). The complex pathophysiology of the ectopic growth of endometrium is not fully understood (1). Laparoscopy is the gold-standard diagnostic tool, with analgesics and hormonal contraceptive medications recommended as first-line treatment (4, 5). At present, no curative treatments are available (5), and the existing evidence for disease progression is conflicting. Endometriosis is therefore a challenging condition to manage, with a significant cost to the women who suffer from it, their families, the health service and society in general.
More than half of the women with a diagnosis of endometriosis reported that their symptoms negatively affected their quality of life (QoL), affecting their relationships and jobs, general physical and psychological health, and social functioning (6). The current literature further suggests that women with endometriosis have an increased prevalence of chronic pain syndromes, including irritable bowel syndrome (IBS), fibromyalgia, painful bladder syndrome/interstitial cystitis and vulvodynia, which adds to their symptom burden (7-13). Therefore, improving the management of endometriosis-associated chronic symptoms is a major unmet need in women's health (6). Restless leg syndrome (RLS), also known as Willis-Ekbom disease, is a sensory-motor disorder affecting around 5-10% of the general population (14). The pathognomonic feature is the irresistible urge to move the legs due to an unpleasant, non-painful sensory disturbance, described in a variety of ways, for example as crawling, creeping and pulling (15, 16). The etiology of RLS, like that of endometriosis, is poorly understood. The National Institute for Health and Care Excellence (NICE) in the UK recommends conservative therapy for mild-to-moderate symptoms of RLS and pharmacotherapy for moderate-to-severe RLS with more frequent symptoms. However, effective symptom control remains difficult, with persistent symptoms often negatively affecting a patient's quality of life (17). The relationship between endometriosis and female sex steroid hormones is well recognized (1). RLS is twice as common in women as in men, and more common in pregnancy (18), indicating a possible hormonal basis (14). The relation of both conditions to other chronic pain syndromes, and their possible hormonal influence, led us to hypothesize that endometriosis and RLS may be linked. This study, therefore, was conducted to assess the previously unknown prevalence of RLS-associated symptoms among non-pregnant, premenopausal women and to ascertain whether a prior surgical diagnosis of, or suffering from symptoms associated with, endometriosis would alter the prevalence of RLS symptoms.

MATERIALS AND METHODS

This questionnaire study was approved by the North of Scotland Research Ethics Committee 2 (LREC: 17/NS/0070).

Questionnaire Development. The International Restless Leg Syndrome Study Group (IRLSSG) Severity Rating Scale (19) was identified as the most suitable validated clinical tool (a self-completed questionnaire) for assessing RLS-associated symptoms, indicating a possible diagnosis of RLS and the severity of symptoms. Since there are currently no reliable self-administered diagnostic questionnaires for endometriosis, two validated and widely used tools were considered for identifying the presence of endometriosis-associated symptoms: the Endometriosis Phenome (and Biobanking) Harmonisation Project (EPHect) Endometriosis Patient Questionnaire (EPQ), a 25-page document (20), and the shorter British Society for Gynaecological Endoscopy (BSGE) pelvic pain questionnaire (21). The questionnaire was modified according to feedback from a group of gynecological patients who volunteered to attend a focus group at the Liverpool Women's Hospital (LWH). The attendees (n = 10) found the EPHect EPQ too lengthy and reported that it would be unacceptable to them, as completion before leaving the clinic would be tedious.
Subsequently, to obtain a higher return of fully completed questionnaires, their preferred option, the modified BSGE pelvic pain questionnaire, was selected to capture endometriosis-associated symptoms in our study. The IRLSSG and BSGE questionnaires were then merged to devise the final study questionnaire (Supplementary Figure 1), and its suitability and acceptability were confirmed in a preliminary pilot study including 15 gynecological patients of reproductive age.

Data Collection and Analysis. The questionnaire was distributed to 650 gynecological patients under the age of 50 attending the gynecology outpatient department (which included the general gynecology clinics and specialist clinics such as endometriosis, colposcopy, urogynecology and fertility) at LWH from the 5th of October 2017 to the 11th of January 2018. Verbal and written (participant information sheet) information was provided by the researchers when the questionnaire was distributed. This detailed that consent was assumed on the voluntary return of the completed questionnaire, which did not contain any personally identifiable data. Questionnaires were considered eligible only when all questions were answered and the data provided by the responders fulfilled the inclusion criteria of being non-pregnant females of 18-50 years of age. Previous studies have indicated an increased prevalence of RLS in pregnancy, so pregnant women were excluded; and since endometriosis almost exclusively affects women of reproductive age, the study included only women between 18 and 50 years of age. The demographic data collected in the questionnaire included age, confirmation of non-pregnant status, use of hormonal and other medications, menstrual history, parity, smoking, alcohol intake and whether the woman had a prior surgical diagnosis of endometriosis or a prior formal diagnosis of RLS. In the UK, a formal diagnosis of RLS is made by an appropriate specialist physician (for example, a sleep specialist or a neurologist) according to the standard guidance, fulfilling the criteria defined by the IRLSSG as recommended by NICE (17). The endometriosis-associated symptoms questionnaire collected data on premenstrual, menstrual and non-cyclical pelvic pain, dyspareunia, bowel/bladder-associated pain, and back pain, with a visual analogue score (VAS) indicating the severity of each pain symptom reported. We defined a VAS score of ≥ 7 for any of these pain symptoms as indicative of the responder suffering from possible endometriosis-associated symptoms. Women were categorized into the group suffering from symptoms suggestive of RLS when they responded "yes" to the question "Do you ever have the irresistible urge to move your legs due to an unpleasant sensation?". The severity of RLS was calculated as the cumulative sum of the 10 RLS VAS responses, as previously described (22). Although endometriosis cannot be diagnosed reliably from an analysis of symptoms alone, the symptoms of pelvic pain associated with endometriosis remain a major therapeutic challenge. Therefore, we attempted to assess the severity of these symptoms as perceived by the respondents of our study by calculating the cumulative sum of the 8 endometriosis-associated pain symptom VAS responses.

Sample Size Calculation. Our initial calculations suggested that a minimum sample size of 200 would be sufficient to draw conclusions; yet, due to the heterogeneous nature of the symptoms associated with endometriosis in particular, we aimed to collect approximately 400 responses. We distributed 650 questionnaires, assuming an over 60% return of fully completed responses from eligible patients within our study period.
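The paper does not spell out how the minimum of 200 was derived. One conventional way to arrive at a figure of this order is a precision (confidence-interval) calculation for a prevalence estimate, sketched below; the expected prevalence and margin of error are our assumptions for illustration, not values taken from the study.

```python
import math

def n_for_prevalence(p, margin, z=1.96):
    """Sample size needed to estimate a prevalence p with a given absolute
    margin of error at ~95% confidence (normal approximation)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Assumed inputs: an expected RLS-symptom prevalence of ~15%, estimated
# to within +/- 5 percentage points.
print(n_for_prevalence(0.15, 0.05))  # 196, i.e. roughly the stated minimum of 200
```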
Statistical Analysis. The study population was divided into groups, considering those with a surgical diagnosis of, or symptoms associated with, endometriosis and those without; the two main outcome measures were a prior diagnosis of RLS and the presence of RLS-associated symptoms (based upon the responses to the question 'Do you ever have the irresistible urge to move your legs due to an unpleasant sensation?'). Those who responded "yes" were classified as a case and those who responded "no" as a non-case. Initially, summary data for the different groups were extracted and reported using standard summary statistics: mean and standard deviation for continuous data, and counts and percentages for categorical data. Hypothesis tests using independent ANOVA and Chi-squared tests were undertaken, depending on the distribution of the data. The odds ratio was used to quantify the risk of RLS when comparing those with a confirmed diagnosis of, or reported symptoms suggesting, endometriosis and those without. We investigated the association between the continuous endometriosis scores and the continuous RLS severity scores using Spearman correlation. Statistical analysis was performed using SPSS 21.0 for Windows (SPSS Inc., Chicago, IL, USA).

RESULTS

The overall response rate for this study was high (90%, 584/650); 119 questionnaires were excluded because of incomplete responses or because the responders did not meet all the pre-determined eligibility criteria. Therefore, the number of eligible questionnaires included in the final analysis was 465 (71.5%) (Figure 1). Features of the patient population are detailed in Table 1 (segregated by endometriosis status) and Table 2 (segregated by RLS status). Of the 465 questionnaires analysed, 71.8% (334) of the participants reported symptoms associated with endometriosis, which we pre-defined as a response of ≥ 7 on the VAS for any of the pain questions. However, only 99 (21.3%) of these self-reported having received a formal surgical diagnosis of endometriosis. This gives a rate of formal surgical diagnosis of endometriosis of 29.6% (99/334) among the women suffering from endometriosis-associated symptoms. Women with a self-reported surgical diagnosis of endometriosis were older (P < 0.01), more likely to use hormonal medications (P < 0.01) and less likely to consume alcohol (P = 0.05) than those without a prior diagnosis of endometriosis. Although there was an apparent trend for the women with a diagnosis of endometriosis to use more antidepressants, this difference did not reach statistical significance (P = 0.06). Among our study population, 42.6% (198) of patients had symptoms of RLS, but only 15 (3.2%) reported having received a prior formal diagnosis of RLS. This gives a rate of formal RLS diagnosis of 7.6% (15/198) among the women suffering from RLS-related symptoms. Women with a self-reported formal diagnosis of RLS were older (P < 0.01), less likely to smoke (P < 0.01), more likely to be parous (P = 0.02) and more likely to use antidepressants (P < 0.01) than those without a prior diagnosis of RLS. In post-hoc analysis, the women with RLS symptoms but no formal RLS diagnosis were also significantly more likely to use antidepressants than those with no symptoms or diagnosis (Table 2). Women with a self-reported surgical diagnosis of endometriosis were also more likely to have a self-reported formal diagnosis of RLS.
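For reference, the odds ratios quoted above follow from 2 × 2 tables of endometriosis status against RLS status; the sketch below computes an OR with a Woolf (log-method) confidence interval. The counts are hypothetical, chosen only to be roughly consistent with the proportions reported in this section, and do not reproduce the study's actual tables.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and ~95% CI (Woolf/log method) from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: RLS symptoms in 59/99 women with a surgical diagnosis
# of endometriosis vs. 139/366 women without one.
print(odds_ratio_ci(a=59, b=40, c=139, d=227))
```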
DISCUSSION

This cross-sectional study suggests that there is a link between endometriosis and RLS. The data confirm that RLS symptoms are highly prevalent among non-pregnant women during the reproductive years (a gynecological population), at a rate greater than that reported for the general population. Our results further suggest that a surgical diagnosis of endometriosis increased the likelihood of a formal diagnosis of RLS or of suffering from symptoms of RLS. Having either a formal diagnosis of, or symptoms associated with, endometriosis in turn increased the likelihood of having a formal diagnosis or symptoms suggestive of RLS, and the severity of symptoms associated with endometriosis showed a modest positive correlation with the severity of RLS symptoms. This signals the need to consider and treat both conditions simultaneously in these patients to improve their general wellbeing.

The prevalence of idiopathic RLS is reported to be between 1.9% and 4.6% in European adults (16, 23, 24), with a negative impact on their wellbeing (19). Those with early onset of symptoms (< 45 years) may experience worsening of symptoms over time (19); therefore, the observed high prevalence of symptoms suggestive of RLS among non-pregnant women of reproductive age (nearly a 10-fold increase over the general population) is of concern. Almost 60% of those who recounted a surgical diagnosis of endometriosis also reported experiencing an irresistible urge to move their legs due to an unpleasant sensation, a cardinal feature of RLS. However, a previous formal diagnosis of RLS was reported by only 7.1% of the women who had a diagnosis of endometriosis, suggesting a lack of awareness of this condition; thus, the majority of symptomatic women are not formally diagnosed. The high prevalence of RLS symptoms in gynecological patients, and particularly in those suffering from symptoms associated with endometriosis or with a self-reported surgical diagnosis of endometriosis, is therefore important for many reasons. Firstly, there may be common etiological factors between the two conditions that could be identified for novel therapeutic avenues. For example, iron deficiency is an identified causative factor for RLS (19), and RLS symptoms are exacerbated by iron deficiency (25). Albeit limited, there is evidence suggesting that women with endometriosis have significantly decreased values of hematocrit, hemoglobin concentration and mean cell volume compared with age-matched controls (26). In a non-human primate (macaque) model of spontaneously occurring endometriosis, a surgical diagnosis of endometriosis was associated with significantly increased rates of iron deficiency anemia, lower systemic iron stores, and decreased serum iron levels (26). Furthermore, women with endometriosis commonly complain of symptoms such as fatigue and malaise (27), which may be linked to, or exacerbated by, iron deficiency. Therefore, evaluating common causal factors for RLS in women with endometriosis (for example, a conclusive assessment of the iron stores of women suffering from endometriosis-associated symptoms) may unveil new therapeutic options for rectifying those abnormalities, in this context iron deficiency. Dopaminergic dysfunction is also postulated to be involved in the pathogenesis of RLS (28).
RLS symptoms are typically worse in the evening and at night, resulting in insomnia, anxiety, depression, loss of energy, and disturbances in behavior, cognition and mood (14,16,29,30). The observed association between RLS and endometriosis should prompt clinicians to consider the co-existence of symptoms when managing women with either of these conditions. Both of these chronic conditions require long-term symptom management. Since the severity of the symptoms also appears to correlate positively between the two conditions, clinicians seeing patients with one condition may need to seek a diagnosis of the other; if the conditions are found to co-exist, simultaneous treatment of both endometriosis- and RLS-associated symptoms is likely to improve the overall wellbeing of these patients. The management pathway for RLS recommended by a task force of the European Restless Legs Syndrome Study Group (22) and the NICE clinical knowledge summaries (17) suggests that lifestyle-change advice and self-help measures may be sufficient in patients with mild RLS symptoms. These include good sleep hygiene, reducing caffeine and alcohol intake, smoking cessation and moderate regular exercise. Patients with severe and disabling symptoms may benefit significantly from early detection and treatment (31). Non-pharmacological treatments (e.g., lifestyle-change advice) could still be worthwhile in this group (32,33) and, although not yet specifically tested in clinical trials, alpha-2-delta ligands such as gabapentin and pregabalin are commonly used in clinical practice to treat endometriosis-associated pelvic pain (34,35).

A major limitation of this study is that the information from the self-completed questionnaires was not directly verified against hospital records, and this needs to be considered when interpreting our results. For example, our study cohort included women with a self-reported surgical diagnosis of endometriosis and women with symptoms of endometriosis but no prior surgical diagnosis. The latter group would naturally include women who have either not had a diagnostic laparoscopy or who have had endometriosis excluded by a diagnostic laparoscopy. Appropriately designed future studies are warranted to explore differences in RLS symptoms across such potential subgroups of women. The calculation of a correlation coefficient between the sums of the RLS VAS and endometriosis VAS responses should be treated with caution because, although the IRLSSG Severity Rating Scale is validated for use on a continuous scale, the BSGE Pelvic Pain Questionnaire is not. The use of a single question, "Do you ever have the irresistible urge to move your legs due to an unpleasant sensation?", will over-estimate the true prevalence of symptoms suggestive of RLS, since we did not include all 5 essential diagnostic criteria of the IRLSSG (15,16). Additionally, the VAS questions may over-estimate the prevalence of symptoms associated with endometriosis owing to subjective symptom exaggeration. Although our cohort included 465 fully completed questionnaires, our findings will need to be confirmed in a larger future cohort, one that also allows investigation of the molecular mechanisms linking the two conditions, before definite conclusions can be drawn. An increased awareness of RLS in women with endometriosis and/or those with associated symptoms, and screening of these women for RLS by their healthcare professionals, may offer an opportunity to alleviate some common and synergistic symptoms (e.g.
lack of sleep, low mood) and would thus be beneficial (36).

Conclusion

This cross-sectional study builds on the current understanding of endometriosis and suggests that there is a link between symptoms of endometriosis and RLS. Although the results cannot be immediately generalized to the wider population, they are significant because of the substantial impact these two chronic conditions have on quality of life, and improvements in knowledge can support a holistic approach to patient care. The results highlight a promising area of medicine that could have a significant impact on the care of these complex patients. Further research is needed to confirm or refute these findings, to encourage clinicians to actively seek a diagnosis of RLS in the endometriosis population and, where present, to offer treatments and lifestyle advice that relieve associated symptoms and improve the overall quality of life of thousands of women.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the North of Scotland Research Ethics Committee 2 (LREC: 17/NS/0070). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Explicit parametric solutions of lattice structures with proper generalized decomposition (PGD)

Architectured materials (or metamaterials) are constituted by a unit-cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit-cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit-cell are selected in order to produce the desired bulk characteristics. This is especially pertinent due to the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem is a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit-cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.

Introduction

Architectured materials (or metamaterials) are built as tessellations of small-scale structures (also named cells). The cells are designed to obtain specific physical properties of the bulk. This often concerns the mechanical response, which is the scope of this work, but not only: the concept is also used to obtain astonishing thermal, acoustic, optic or electromagnetic properties, see for instance [5]. Thus, as stated in [26], the scientific and technical communities tend toward obtaining novel macro-structural material responses via micro-structural design. Achieving these new material properties is expected to highly impact the design of new devices in many fields of science and technology, see [8,10]. To mention just a few applications, the use of architectured materials results in negative or null thermal expansion coefficients [15,30], pentamodes or simile-fluid solids [2,17,21], elastic buckling or snapping based metamaterials [23,25], negative index of optical refraction [13,16] and materials with negative Poisson's ratio [7,14,24], also known as auxetic materials.

A natural problem arising in this context is the design of an architectured material with tailored properties. This results in an inverse problem consisting in identifying the design parameters of the cell producing the desired properties in the bulk material. This is already addressed by the early works of Sigmund [28,29] in the context of computational mechanics with homogenization. The main difficulty of handling a parametric description of the micro-structure and of determining its influence on the emerging properties of the bulk is the multidimensional character of the problem. The computational complexity increases exponentially with the number of parameters, and hence so does the burden of the inverse problem. Quoting [26], this problem "[…] is generally ill-posed, and making progresses requires a combination of computational strategies to strengthen our physical intuition". A particular and interesting application of these ideas is producing auxetic metamaterials with additive manufacturing (or 3D printing). The seed cell is in this case a 3D printed lattice material with parametrically described shape and size.
In this context, different numerical models testing diverse scenarios for the micro-structure have successfully predicted the material response of the bulk; for instance, the inverse honeycomb configuration, first proposed by [1] and extensively analyzed later in [12,18,27]. However, as pointed out above, the computational burden is still a bottleneck when the number of parameters is moderately large. One possible way to affordably deal with the so-called curse of dimensionality is using reduced order models (ROM). Here, we focus on a particular ROM: the proper generalized decomposition (PGD), presenting a separable approximation computed using a combination of a greedy algorithm (to compute successively the terms) and an alternated directions scheme (to compute iteratively the modes in each term), see [9,32]. This technique is especially well suited to simulate the behavior of the parametric structure (with parameters describing both geometrical and material properties of the cell) because it provides an explicit parametric solution, also denoted as a computational vademecum. In the PGD philosophy, the computational vademecum is obtained offline, often using High Performance Computing resources. Then, to solve the inverse problem, the parametric design space is browsed as a post-process, in an online phase providing real-time responses.

This paper aims at presenting a new PGD formulation to efficiently deal with the parametric lattice material problem. The explicit parametric solution (computational vademecum) is to be exploited to derive the macro-scale properties of the resulting architectured material and hence to efficiently solve the inverse problem under consideration.

The remainder of the paper is structured as follows. Section 2 describes the parametric discrete problem and the functional setup. Section 3 introduces in detail the generalization of the proper generalized decomposition to the case of intrinsically discrete lattice structures. Then, in Sect. 4 the proposed strategy is applied to solve the parametric model in different structural configurations. Some concluding remarks are drawn in Sect. 5.

Problem statement: parametric lattice structure

The shape and material properties of a lattice structure are characterized by n_p parameters μ_i, for i = 1, 2, …, n_p, each ranging in a real interval I_i ⊂ R. For the sake of shortening the notation, the n_p parameters are collected in a vector μ = [μ_1 μ_2 … μ_{n_p}]^T. Accordingly, μ ranges in the multidimensional parametric space D = I_1 × I_2 × ⋯ × I_{n_p} ⊂ R^{n_p}. The deformation of the structure is described by a vector of generalized displacements (nodal displacements and rotations), U(μ), which is the unknown of the parametric problem. The number of degrees of freedom of the problem is denoted by n_dof and therefore, for every value of the parameters, U(μ) ∈ R^{n_dof}. The parametric input data boil down to the stiffness matrix K(μ) ∈ R^{n_dof×n_dof} and the vector of generalized nodal forces (including forces and moments) F(μ) ∈ R^{n_dof}. Thus, the equilibrium equation that has to be solved to obtain U(μ) reads

K(μ) U(μ) = F(μ).    (1)

Equation (1) is stated in a parametric fashion, in the sense that the input data K and F depend on μ and therefore the solution U also depends on μ. Thus, all these mathematical objects are fields taking values in the multidimensional parametric space D.
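To fix ideas before introducing the PGD machinery, the sketch below solves a toy instance of (1) by brute force, sampling the parametric space on a Cartesian grid. The two-spring "lattice" and the intervals are illustrative placeholders; the point is that the number of linear solves grows as the product of the sectional grid sizes, which is exactly the curse of dimensionality the PGD avoids.

```python
import itertools
import numpy as np

# Toy parametric "lattice": two springs in series with stiffnesses mu1, mu2;
# one end fixed, unit load at the free end (n_dof = 2).
def K(mu1, mu2):
    return np.array([[mu1 + mu2, -mu2],
                     [-mu2,       mu2]])

F = np.array([0.0, 1.0])

# Brute-force sampling of D = I1 x I2: n_d^n_p linear solves in total.
I1 = np.linspace(1.0, 2.0, 100)
I2 = np.linspace(0.5, 1.5, 100)
U = np.empty((I1.size, I2.size, 2))
for (i, m1), (j, m2) in itertools.product(enumerate(I1), enumerate(I2)):
    U[i, j] = np.linalg.solve(K(m1, m2), F)

print(U[0, 0], U[-1, -1])   # U(mu) at two corners of the parametric domain
```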
In the following it is assumed that the parametric dependence of the input data is regular enough to be square integrable, that is, F(μ) ∈ [L^2(D)]^{n_dof} and K(μ) ∈ [L^2(D)]^{n_dof×n_dof}. Note that the parametric functional space L^2(D) is also expressed in terms of the sectional spaces L^2(I_i), i = 1, 2, …, n_p. Namely, L^2(D) = L^2(I_1) ⊗ L^2(I_2) ⊗ ⋯ ⊗ L^2(I_{n_p}), which means that, freezing all the parameters but one, the remaining functional dependence is square integrable.

Matrix K is built up by assembling the contribution of all the beam elements of the lattice structure. The formulation used in the numerical examples corresponds to Bernoulli beam elements; however, all the developments are valid for any beam or structural element formulation. The parameters included in μ characterize the geometry and material properties of the individual elements. The same concept is readily generalized to the parameterization of substructures that are also assembled into the global structure. This strategy is particularly useful in the context of architectured materials, in which a unit-cell is periodically replicated in the bulk structure.

Equation (1) is also expressed in integral form. First, the residual of (1) is introduced as

R(U; μ) = F(μ) − K(μ) U(μ).    (2)

Thus, using the weighted residuals idea, one can state that U(μ) is the solution of (1) if and only if

∫_D δU(μ)^T R(U; μ) dμ = 0  for all δU ∈ [L^2(D)]^{n_dof}.    (3)

Note that no integration is performed in the physical space. The integrals involve only the parametric space D. This is due to the algebraic nature of (1), which can be seen as already discretized in space. The role of the space integration (energy product) is played here by the scalar product of the residual (forces) and the test function (virtual displacements).

The problem represented by the equivalent Eqs. (1) and (3) is discretized in order to devise a numerical solver. Finite-dimensional spaces V_i ⊂ L^2(I_i), i = 1, 2, …, n_p, of dimension n_{d,i}, are introduced to approximate each sectional parametric space. Accordingly, the space where the discrete approximation to U(μ) lies is [V_1 ⊗ V_2 ⊗ ⋯ ⊗ V_{n_p}]^{n_dof}, with overall dimension equal to

n_Full = n_dof ∏_{i=1}^{n_p} n_{d,i}.    (4)

The total number of unknowns n_Full grows fast with the number of parameters n_p, producing the so-called curse of dimensionality. In other words, the number of degrees of freedom in the full multidimensional problem, n_Full, grows exponentially with n_p. Reduced order models (ROM) are possible alternatives to overcome this difficulty. In particular, the proper generalized decomposition (PGD) is a ROM based on the idea of providing a separable decomposition of the multidimensional function, thereby reducing the exponential computational complexity to a linear one. The next section discusses the application of PGD to the structural problem under consideration.

Separable approximations

The unknown U(μ) is to be approximated by a separable approximation U^n_PGD(μ) with n terms, namely

U(μ) ≈ U^n_PGD(μ) = Σ_{m=1}^{n} u^m ∏_{i=1}^{n_p} G^m_i(μ_i),    (5)

where each term, for m = 1, 2, …, n, is determined by a displacement vector u^m describing the spatial mode and a set of parametric functions G^m_i(μ_i), for i = 1, 2, …, n_p. In the following, in order to alleviate the notation and where there is no ambiguity, the explicit dependence on μ_i is omitted and G^m_i(μ_i) is written as G^m_i.
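The payoff of the format (5) is that, once the modes are available, evaluating U(μ) at any parameter point is a cheap online operation. A minimal sketch follows, with random placeholder modes standing in for computed PGD modes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_p, n_terms, n_d = 12, 3, 5, 100

u = rng.normal(size=(n_terms, n_dof))        # spatial modes u^m
G = rng.normal(size=(n_terms, n_p, n_d))     # sectional modes G^m_i on grids
grids = [np.linspace(0.0, 1.0, n_d) for _ in range(n_p)]

def evaluate(mu):
    """U_PGD(mu) = sum_m u^m * prod_i G^m_i(mu_i), interpolating each
    sectional function linearly on its grid."""
    U = np.zeros(n_dof)
    for m in range(n_terms):
        w = 1.0
        for i in range(n_p):
            w *= np.interp(mu[i], grids[i], G[m, i])
        U += w * u[m]
    return U

print(evaluate([0.3, 0.7, 0.5]))   # one cheap online evaluation
```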
In order to use PGD, the input data K(μ) and F(μ) have to be expressed in separated form, that is,

K(μ) = Σ_{k=1}^{n_k} [∏_{i=1}^{n_p} B^k_i(μ_i)] K_k    (6)

and

F(μ) = Σ_{l=1}^{n_f} [∏_{i=1}^{n_p} S^l_i(μ_i)] f^l,    (7)

n_k and n_f being the number of terms needed to express K and F in a separable manner, K_k and f^l the corresponding spatial modes, and B^k_i(μ_i) and S^l_i(μ_i) the parametric functions. For the sake of simplifying the notation, the parameter dependence of B^k_i and S^l_i is also omitted in the following. In case the input data are not originally separated, a separable approximation has to be obtained as a pre-process; see [31] for a discussion on the possible errors introduced in this phase.

Sectional norms

The PGD solver is based on the discretization and solution of the integral form of the problem presented in (3) with separable approximations. Formulating the PGD requires introducing sectional norms, that is, norms along each of the independent parametric dimensions that allow measuring the modes. For instance, the modes G^m_i ∈ L^2(I_i) describing U_PGD in (5) are naturally measured with the standard norm in L^2(I_i), namely

‖G^m_i‖² = ∫_{I_i} [G^m_i(μ_i)]² dμ_i.    (8)

The choice of the sectional norm affecting the space (or physical) dimension, that is, the norm to measure the modal vectors u^m ∈ R^{n_dof}, is not as trivial. The obvious choice of selecting the Euclidean scalar product in R^{n_dof} would yield ‖u^m‖² = (u^m)^T u^m. The Euclidean norm, however, does not take into account the nature of the generalized nodal displacements (mixing displacements and rotations), and the resulting measure lacks physical meaning. A more suitable choice is using a structural mass matrix M_u with a sound physical rationale, which results in a norm such that

‖u^m‖² = (u^m)^T M_u u^m.    (9)

Once these norms are available, it is convenient to normalize the modes, that is, taking

û^m = u^m / ‖u^m‖  and  Ĝ^m_i = G^m_i / ‖G^m_i‖.    (10)

Thus, the separated representation of (5) is rewritten as

U^n_PGD(μ) = Σ_{m=1}^{n} β^m û^m ∏_{i=1}^{n_p} Ĝ^m_i,    (11)

β^m = ‖u^m‖ ∏_{i=1}^{n_p} ‖G^m_i‖ being the amplitude of term m of the separable sum. Note that β^m provides key information on the relative importance of the different modes in the separable approximation, and it is therefore used as one of the criteria to decide whether the number of terms, n, is sufficient to obtain the desired accuracy. A global norm in [L^2(D)]^{n_dof} is also introduced such that, for any U(μ) ∈ [L^2(D)]^{n_dof},

‖U‖²_Glob = ∫_D U(μ)^T M_u U(μ) dμ.    (12)

Greedy strategy and rank-one approximation

The PGD methodology aims at numerically solving (3) using an approximation of the form shown in (5). This is performed using a greedy strategy, that is, computing sequentially the terms of the sum in (5). Thus, we start computing U^1_PGD; then, when U^1_PGD is available, we compute U^2_PGD, then U^3_PGD, and so on. Each step of the greedy algorithm (that is, updating some U^{n−1}_PGD into U^n_PGD) consists in solving a rank-one approximation problem. Accordingly, it is assumed that U^{n−1}_PGD is known and that

U^n_PGD(μ) = U^{n−1}_PGD(μ) + u ∏_{i=1}^{n_p} G_i(μ_i),    (13)

where the superscript n in u^n and G^n_i is omitted here to simplify the notation. The problem now consists in finding u, G_1, G_2, …, G_{n_p} such that U^n_PGD fulfills (3). The complete unknown function u ∏_{i=1}^{n_p} G_i is said to be of rank one because it is built as the product of sectional functions. The problem at hand is nonlinear (because the unknown functions multiply each other) but has a number of degrees of freedom much lower than the original one. Recall that the discrete version of the full linear problem has a number of degrees of freedom n_Full given in (4).
Instead, with the PGD strategy, each rank-one approximation problem is nonlinear but has a number of degrees of freedom n_RankOne given by

n_RankOne = n_dof + Σ_{i=1}^{n_p} n_{d,i}.    (14)

Typically, due to the additive nature of n_RankOne in terms of the sectional dimensions, n_RankOne ≪ n_Full holds, and therefore the reduction of the dimension of the problem is well worth the difficulties associated with the newly acquired nonlinear character.

Due to the incremental character of (13), the expression of the residual (2) is rewritten as

R(U^n_PGD) = F(μ) − K(μ) U^{n−1}_PGD(μ) − K(μ) u ∏_{i=1}^{n_p} G_i,    (15)

which, using (6), becomes

R(U^n_PGD) = F(μ) − K(μ) U^{n−1}_PGD(μ) − Σ_{k=1}^{n_k} [∏_{i=1}^{n_p} B^k_i G_i] K_k u.    (16)

The test function δU in (3) (a parametric virtual displacement; the explicit dependence on μ is now omitted in the notation) has to be selected in accordance with the unknown of the rank-one problem, consisting in introducing in (3) the residual as defined in (15) or (16). This test function is taken as a variation of the actual unknown,

δU = δu ∏_{i=1}^{n_p} G_i + u Σ_{i=1}^{n_p} δG_i ∏_{j≠i} G_j.    (17)

That is, instead of taking an arbitrary δU ∈ [L^2(D)]^{n_dof} as in (3), one must take arbitrary δu ∈ R^{n_dof} and δG_i ∈ L^2(I_i), for i = 1, 2, …, n_p. Then, in the numerical version, the finite-dimensional sectional spaces V_i replace L^2(I_i). The numerical strategy to deal with the nonlinearity of this problem is based on the alternated directions idea [3].

Alternated directions solver

The alternated directions strategy to solve the rank-one nonlinear problem consists in successively solving for each one of the unknowns, assuming that the rest of them are known.

Computing u. The first step consists in, assuming G_i known for i = 1, 2, …, n_p, computing u such that R(U^n_PGD) as defined in (16) satisfies (3). Since G_i is known, δG_i is taken to be zero for i = 1, 2, …, n_p, and δU in (17) reduces to δu ∏_{i=1}^{n_p} G_i, yielding

∫_{×_{j=1}^{n_p} I_j} δu^T [∏_{i=1}^{n_p} G_i] R(U^n_PGD) dμ = 0    (18)

for all δu ∈ R^{n_dof}. Note that the notation has been shortened to a single multidimensional integral, but using ×_{j=1}^{n_p} I_j instead of D, because this notation is going to be useful in the following. The previous equation, fulfilled for all δu ∈ R^{n_dof}, is equivalent to the following algebraic equation in R^{n_dof}:

∫_{×_{j=1}^{n_p} I_j} [∏_{i=1}^{n_p} G_i] R(U^n_PGD) dμ = 0.    (19)

Substituting R(U^n_PGD) by the expression (16), one gets

∫ [∏_i G_i] F dμ − ∫ [∏_i G_i] K U^{n−1}_PGD dμ − Σ_{k=1}^{n_k} [∏_{i=1}^{n_p} ∫_{I_i} B^k_i G_i² dμ_i] K_k u = 0.    (20)

Using the expressions for U(μ), K(μ) and F(μ) in (5), (6) and (7), respectively, and the definition of the residual in (2), the resulting equation reads as a linear system of equations for u:

[Σ_{k=1}^{n_k} c_k K_k] u = Σ_{l=1}^{n_f} ĉ_l f^l − Σ_{k=1}^{n_k} Σ_{m=1}^{n−1} c_{k,m} K_k u^m,    (21)

where the scalars c_k, ĉ_l and c_{k,m} are computable terms defined by

c_k = ∏_{i=1}^{n_p} ∫_{I_i} B^k_i G_i² dμ_i,  ĉ_l = ∏_{i=1}^{n_p} ∫_{I_i} S^l_i G_i dμ_i,  c_{k,m} = ∏_{i=1}^{n_p} ∫_{I_i} B^k_i G^m_i G_i dμ_i.    (22)

Computing G_i. The subsequent steps consist in computing one parametric mode, say G_i, assuming that u and the rest of the modes G_j, for j = 1, 2, …, n_p and j ≠ i, are known; the corresponding variations δG_j and δu are therefore taken to be zero. Thus, the variation δU in (17) becomes δU = u δG_i ∏_{j≠i} G_j. Using this variation in (3), one gets

∫_{I_i} δG_i u^T [∫_{×_{j≠i} I_j} (∏_{j≠i} G_j) R(U^n_PGD) dμ_{j≠i}] dμ_i = 0,    (23)

which must hold for all δG_i ∈ L^2(I_i). In order to ease the reading, the range of j is omitted, emphasizing only the fact that j ≠ i. Note that (23) is an equation for the unknown function G_i(·), which appears explicitly in R(U^n_PGD), see (15). Indeed, the integral equation (23) results in the following pointwise equation in I_i:

u^T ∫_{×_{j≠i} I_j} (∏_{j≠i} G_j) R(U^n_PGD) dμ_{j≠i} = 0.    (24)

That is,

[Σ_{k=1}^{n_k} d_k B^k_i(μ_i)] G_i(μ_i) = h_i(μ_i),    (25)

where both the term in square brackets on the left-hand side and the right-hand side of (25) are computable functions in I_i. The expressions for these functions are readily determined using the forms of the data given in (5), (6) and (7).
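Before giving the explicit expressions for these functions, it may help to see the companion u-update (21) in code. The sketch below assembles the coefficients c_k and ĉ_l as products of one-dimensional quadratures and solves the resulting small linear system; the separable data are random placeholders, the previous PGD terms are omitted (n − 1 = 0), and the trapezoidal quadrature is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_p, n_k, n_d = 8, 2, 3, 50
grid = np.linspace(0.0, 1.0, n_d)
h = grid[1] - grid[0]

def integ(vals):
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule

# Placeholder separable data: K(mu) = sum_k [prod_i B^k_i] K_k and a
# single-term force F(mu) = [prod_i S_i] f (n_f = 1).
Kk = [np.eye(n_dof) * (k + 1) for k in range(n_k)]
B = rng.uniform(0.5, 1.5, size=(n_k, n_p, n_d))
f = rng.normal(size=n_dof)
S = rng.uniform(0.5, 1.5, size=(n_p, n_d))

G = [rng.normal(size=n_d) for _ in range(n_p)]   # current sectional modes G_i

# c_k = prod_i int B^k_i G_i^2 ; c_hat = prod_i int S_i G_i   (cf. (22))
A = sum(np.prod([integ(B[k, i] * G[i] ** 2) for i in range(n_p)]) * Kk[k]
        for k in range(n_k))
b = np.prod([integ(S[i] * G[i]) for i in range(n_p)]) * f
u = np.linalg.solve(A, b)
print(u)
```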
The resulting expressions read

d_k = (u^T K_k u) ∏_{j≠i} ∫_{I_j} B^k_j G_j² dμ_j

and

h_i(μ_i) = Σ_{l=1}^{n_f} (u^T f^l) [∏_{j≠i} ∫_{I_j} S^l_j G_j dμ_j] S^l_i(μ_i) − Σ_{k=1}^{n_k} Σ_{m=1}^{n−1} (u^T K_k u^m) [∏_{j≠i} ∫_{I_j} B^k_j G^m_j G_j dμ_j] B^k_i(μ_i) G^m_i(μ_i),

where the computable functions d_k and h_i(·) involve only spatial products and one-dimensional sectional integrals.

Convergence control and stopping criteria

For each term of the PGD solution, that is, for each value of n, the alternated directions iterations are expected to converge to the best rank-one approximation of U^n_PGD − U^{n−1}_PGD, U^{n−1}_PGD being known and U^n_PGD unknown. This iterative algorithm (see "Appendix A") requires a stopping criterion to decide whether the current iteration is acceptable or not. The stopping criterion is based on the stationarity of the solution: the iteration is validated if the modification with respect to the previous iteration is small enough. Thus, assume that u and G_i, for i = 1, 2, …, n_p, characterize the previous iteration and that, after the alternated directions loop, the new approximation is given by u_new and G^new_i, for i = 1, 2, …, n_p. The stopping criterion is expressed in terms of the difference between the two successive iterations, measured with the norms introduced in Sect. 3.2. In particular, a typical convergence criterion is to continue iterating while

‖u_new ∏_i G^new_i − u ∏_i G_i‖_Glob > η_tol ‖u_new ∏_i G^new_i‖_Glob,    (30)

where ‖·‖_Glob is introduced in (12) and η_tol is a user-prescribed tolerance. The computation of these norms is simplified by using the normalized modes û and Ĝ_i introduced in (10) and the amplitude β. Namely,

‖u_new ∏_i G^new_i − u ∏_i G_i‖²_Glob = (β_new)² + β² − 2 β_new β (û_new, û) ∏_{i=1}^{n_p} (Ĝ^new_i, Ĝ_i),    (31)

where all the sectional scalar products are denoted in a unified fashion by the bilinear operator (·,·). Note that if the new iteration coincides with the previous one, this expression vanishes, because the amplitudes are equal and the scalar product of the identical, normalized modes is equal to one. In "Appendix A" the algorithm of the alternated directions solver is detailed, see Algorithm 1.

PGD compression

The n terms of the PGD solution U^n_PGD may contain redundant information. This is associated with the greedy strategy employed to compute the successive terms, with no enforcement of any orthogonality between the successive modes. Thus, the number of terms in an optimal separable approximation required to achieve the same level of accuracy as U^n_PGD is often much lower than n. This can be checked a posteriori in the 2D case, where the Singular Value Decomposition (SVD) provides an optimal separation with the least number of terms. For higher dimensions, the problem still holds but there is no optimal solution to compare with (the High Order SVD, or HO-SVD, is no longer optimal). In order to mitigate the effect of this phenomenon, a common practice in PGD codes is to implement the so-called PGD compression, see the appendix in [22], where it is also referred to as HO-PGD. This compression consists in a least-squares projection of U^n_PGD onto the same approximation space, computed with the very same PGD strategy, that is, combining a greedy algorithm for the terms and an alternated directions scheme for the modes.

In the context of the parametric structural problem (1), the PGD compression is formulated as follows. Let U^n_PGD be the raw PGD solution with n terms. Instead of solving the original equation (1) or its integral counterpart (3), the objective is now to obtain U_com minimizing the least-squares functional J defined as

J(U) = ½ ‖U − U^n_PGD‖²_Glob.    (32)

More precisely, the aim is to compute a separated approximation U^ñ_com with ñ terms, expressed as

U^ñ_com(μ) = Σ_{m=1}^{ñ} ũ^m ∏_{i=1}^{n_p} G̃^m_i(μ_i),    (33)

using a PGD approach to minimize (32), with the expectation of obtaining a sufficiently accurate approximation to U^n_PGD with ñ ≪ n. Again, the main idea of the PGD strategy is formulating the problem of finding a rank-one approximation.
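The stationarity check (31) only needs amplitudes and scalar products of normalized modes, so it is cheap. A minimal sketch follows, with placeholder modes and the Euclidean scalar product standing in for (·,·), i.e. taking M_u = I for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_p, n_d = 8, 2, 50

def normalize(u, Gs):
    nu = np.linalg.norm(u)                      # M_u = I here, for simplicity
    nGs = [np.linalg.norm(g) for g in Gs]
    beta = nu * np.prod(nGs)
    return beta, u / nu, [g / n for g, n in zip(Gs, nGs)]

u_old = rng.normal(size=n_dof)
G_old = [rng.normal(size=n_d) for _ in range(n_p)]
u_new = u_old + 1e-4 * rng.normal(size=n_dof)   # a nearly stationary iterate
G_new = [g + 1e-4 * rng.normal(size=n_d) for g in G_old]

b0, uh0, Gh0 = normalize(u_old, G_old)
b1, uh1, Gh1 = normalize(u_new, G_new)
cross = (uh1 @ uh0) * np.prod([g1 @ g0 for g1, g0 in zip(Gh1, Gh0)])
eps = np.sqrt(max(b1**2 + b0**2 - 2.0 * b1 * b0 * cross, 0.0))
print(f"eps = {eps:.2e}  (continue while eps > eta_tol * beta_new)")
```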
Thus, let us briefly describe how to compute the first term of U^ñ_com, taking U_com = u ∏_{i=1}^{n_p} G_i. Following the expansion in (31), the expression for J(U_com) is reduced to its dependence on the unknowns u, G_1, …, G_{n_p}. Again, the alternated directions scheme is used here. The first step assumes that G_1, …, G_{n_p} are known, in order to compute u. Note that the functional is quadratic in u, and therefore the sectional minimization problem results in solving a linear system of equations for u. Indeed, for given values of G_1, …, G_{n_p}, the functional dependence on the remaining unknown reads

J(u) = ½ [∏_i (G_i, G_i)] u^T M_u u − Σ_{m=1}^{n} [∏_i (G_i, G^m_i)] u^T M_u u^m + constant.

Note that matrix M_u cancels on both sides of the resulting linear system, whose solution is provided by the explicit expression

u = Σ_{m=1}^{n} [∏_{i=1}^{n_p} (G_i, G^m_i)/(G_i, G_i)] u^m.    (34)

The iteration to compute the modal function G_i, provided that u and G_j are available for j ≠ i, also leads to a simple equation. The sectional functional to be minimized is analogous, which results in

G_i(μ_i) = Σ_{m=1}^{n} (u^T M_u u^m) [∏_{j≠i} (G_j, G^m_j)] G^m_i(μ_i) / [(u^T M_u u) ∏_{j≠i} (G_j, G_j)].    (35)

Note that expressions (34) and (35) provide a very simple solver for each of the alternated directions iterations. In the case of the subsequent terms of U^ñ_com, that is, for ñ > 1, the expressions are very similar: one only has to replace U^n_PGD by U^n_PGD − U^{ñ−1}_com. Therefore, the expressions (34) and (35) are not significantly different, except that the sums take n + ñ − 1 terms instead of n. The stopping criteria in this case are identical to those presented in Sect. 3.4.3. Again, in the rank-one approximation, the only check that has to be performed is that the iterations have reached a stationary configuration. In fact, the same expression as in Eq. (31) can be used straightforwardly here, taking ũ instead of u and G̃_i instead of G_i.

Numerical examples: application to structured auxetic patterns

The generalization of PGD to structural problems is used to model a parametric architectured material which is expected to exhibit auxetic properties (that is, negative Poisson's ratios). The inverted honeycomb shaped cell is adopted here as a mechanism to generate auxetic behavior, as introduced in [1]. The geometry of the structure to be analyzed is shown in Fig. 1, and the corresponding unit-cell and its parameterization are illustrated in Fig. 2. The unit-cell is formed by 8 segments (numbered with lowercase roman numerals, i, ii, …, vii, viii) modelled by beam elements, and the actual shape of the cell is characterized by the following parameters: t, the thickness of the beams' rectangular cross-section; a, the length of the obliquely oriented beam elements ii, iv, vi and viii; and α, the inclination angle of these oblique elements. Thus, the n_p = 3 free parameters are t, a and α, that is, μ = [t a α]^T. The width of the cell, w in Fig. 2, has to be larger than b, and therefore the restriction cos α < b/2a must hold. For our parametric analyses, the parameters were chosen to range in the intervals I_1, I_2 and I_3.
The non-homogeneous Dirichlet boundary conditions (nonzero prescribed displacements) are enforced in the first PGD mode; thus, homogeneous essential conditions are enforced in the subsequent modes. The tolerance η_tol in (30), controlling the convergence of the alternated directions iteration, is set to 10^-6, see the schematic implementation in Algorithms 1 and 2 of "Appendix A". To control the number of terms in the PGD expansion, the selected criterion is based on the reduction of the amplitudes β^m, m = 1, 2, …, n. Namely, the process is stopped at term number n if

β^n / max_{m=1,…,n}(β^m) < 10^-3.

Unit-cell problem using homogenization

The goal here is to exploit the generalized solution of the parametric unit-cell to extract the effective mechanical properties at the macro-scale. As is standard in homogenization practice, see [4,20] and references therein, the unit-cell has to be subjected to different periodic loading conditions. In this case, three loading cases are considered, as described in Fig. 3; the three solutions are denoted U_XX(μ), U_YY(μ) and U_XY(μ). Analyzing the unit-cell model (Fig. 2) allows computing the material's effective mechanical properties through homogenization theory, which describes how these loading conditions, applied to a material structure at the macro-scale, should be applied equivalently to the unit-cell by using periodicity boundary conditions at the micro-scale. For further details on homogenization, we refer the reader to the works of [4,20] and references therein. Once the three solutions are available, the effective constitutive Hooke matrix for the homogenized material reads

C^eff = [C^eff_IJ],    (36)

where

C^eff_IJ = (1/(wh)) U_I^T K U_J    (37)

for I, J = 1, 2, 3, the indices ranging over the load cases XX, YY and XY. It is worth noting that wh stands for the area occupied by the cell. Note that both the left- and right-hand sides of Eq. (37) are symmetric with respect to the indices I and J, since K is symmetric.

In this case, it is clearly seen that the second modal function in Fig. 9c (blue color) governs the auxetic behaviour of the structured material for α < 90°. Figures 10, 11 and 12 illustrate the load case XY, where the required number of modes is 6 and 4 for the raw PGD and the compressed one, respectively.

As shown in "Appendix B", the analytical solution U(μ) of the discrete system (1) in terms of the parameters μ_i is available. This is used in the following to assess the convergence of U^n_PGD(μ), the proposed numerical solution using PGD with compression. In order to measure the error, the parametric solution is evaluated in the Cartesian domain I_1 × I_2 × I_3 using uniformly distributed points. For each load case, the relative error reads

ε_r = [Σ_{i,j,k} ‖U^n_PGD(μ^{ijk}) − U(μ^{ijk})‖² / Σ_{i,j,k} ‖U(μ^{ijk})‖²]^{1/2},    (38)

where the superscripts i, j and k range over the points in the domain where the multidimensional representation is sampled. The convergence results using Eq. (38) for the load cases XX, YY and XY are shown in Fig. 13a-c, respectively.

The components of the effective constitutive tensor can be constructed using the generalized displacement solutions of the unit-cell problem. For the present example, it can be shown that the effective mechanical properties are orthotropic, see "Appendix B". One of the most relevant mechanical properties in auxetic materials is the effective Poisson's ratio. The material sheet being orthotropic, two different Poisson's ratios, ν^eff_12 and ν^eff_21, are obtained using (36) and (37), as mentioned in [6], which assumes a plane stress state and reads

ν^eff_12 = C^eff_12 / C^eff_22,  ν^eff_21 = C^eff_12 / C^eff_11.    (39)

Therefore, in order to compute (39), the displacement solutions of the unit-cell problem for load cases XX and YY are needed. In Fig. 14a, b we show the orthotropic Poisson's ratios over the parametric Cartesian domain given by I_2 × I_3, taking t = 1/40.
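Once C^eff is available, evaluating (39) and checking the 2D-orthotropic consistency conditions discussed next is a one-line post-process. The matrix entries below are illustrative numbers chosen to mimic an auxetic response, not results from the paper.

```python
import numpy as np

# Illustrative effective plane-stress Hooke matrix (Voigt order: xx, yy, xy)
C_eff = np.array([[ 2.1, -0.9, 0.0],
                  [-0.9,  1.4, 0.0],
                  [ 0.0,  0.0, 0.5]])

nu12 = C_eff[0, 1] / C_eff[1, 1]   # nu_eff_12 = C12 / C22
nu21 = C_eff[0, 1] / C_eff[0, 0]   # nu_eff_21 = C12 / C11
print(f"nu12 = {nu12:.3f}, nu21 = {nu21:.3f}")   # both negative: auxetic

# 2D-orthotropic thermodynamic consistency conditions
assert nu12 * nu21 > 0 and 1.0 - nu12 * nu21 > 0
```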
We fixed t as a constant since the parameters a and α are far more relevant for Poisson's ratio variations. The represented response surfaces are computed using the PGD solutions for the unit-cell subjected to load cases XX and YY; the approximations have 7 and 3 modes, respectively, when using PGD compression. Figure 14 shows very realistic and detailed response surfaces of the Poisson's ratio obtained by modifying parameters a and α, the latter being the most relevant parameter involved in the auxetic behaviour. Furthermore, the normalized modal functions confirm this statement, including also the almost irrelevant dependence on the thickness t. It is well known that the Poisson's ratio limits for isotropic materials are −1 < ν < 0.5. The values obtained for ν^eff_12 and ν^eff_21, as shown in Fig. 14, do not completely verify this restriction. This is because, in the 2D-orthotropic setup, the thermodynamical consistency, see reference [19], is guaranteed by the conditions ν^eff_12 ν^eff_21 > 0 and 1 − ν^eff_12 ν^eff_21 > 0, which are indeed respected by the results displayed in Fig. 14. In addition, in order to assess the error in the Poisson's ratios shown in Fig. 14, an error ς_r is computed in Eq. (40), in a similar fashion to (38) but considering the Cartesian domain I_2 × I_3, with μ_1 = 1/40. Thereby, the accuracy of the PGD orthotropic Poisson's ratio responses is compared against the analytical expressions obtained in "Appendix B". Figure 15 shows very good agreement in the orthotropic Poisson's ratios attained by PGD.

Periodic pattern

Here, the PGD approximation of the solution is carried out for a periodic lattice material structure, taking into account the total number of degrees of freedom (not homogenized) instead of just a single cell. To do so, we use a periodic pattern made up of a total of 5 × 5 unit-cells. Two load cases are considered: (i) a uniaxial tensile test in the X direction (Fig. 16) and (ii) a uniaxial tensile test in the Y direction (Fig. 17). The evolution of the modal amplitudes β^m is plotted in Figs. 18 and 19 for the load cases X and Y, respectively. In addition, these figures compare how the modal amplitudes evolve between the standard PGD greedy approach and the compressed PGD one. The latter scheme shows not only a significant reduction in the number of modes but also an improved decreasing tendency of the modal amplitudes. Compared with the unit-cell model, the full pattern analysis offers the possibility of a better exploration of the parameters' influence in a generic structured material. This is because the boundary effects are visible in this model, whereas in the homogenized one they are cancelled by the imposed periodic constraints. In the following, a measurement of the orthotropic Poisson's ratios in the full lattice material arrangement is proposed. Due to boundary effects, the Poisson's ratio is not uniform along the sections of the structure; therefore, it is obtained as an average of the transversal deformations over all the corresponding boundary nodes. The calculation of ν^pat_21 by PGD is done analogously, but using load case Y instead. Fig. 21 shows the relative error ς_r of the Poisson's ratios, computed as reported in Eq. (40), but comparing here the full structure model against the homogenized one. Two different sizes have been considered for the full lattice material: one contains a total of 5 × 5 unit-cells, whereas the other holds 10 × 10 unit-cells. As expected,
Fig. 21 suggests that the Poisson's ratios computed in the lattice material structure compare better with the homogenized ones as the number of unit-cells in the full structure is increased.

Irregular pattern

In this section, the same load cases as in Sect. 4.2 are considered, but using the irregular pattern of Fig. 22, for which an analytical solution is not available due to the high complexity of the parametric result. The modal amplitudes using PGD compression can be seen in Fig. 23. Results show that, as expected, the modal amplitudes decrease as the number of PGD modes increases. The present results show a similar global behaviour when compared to the periodic pattern, despite noticeable differences in the parametric modal functions. An example of this is depicted in Fig. 24 for the normalized modal functions Ĝ^m_3(α).

Concluding remarks

This paper presents in detail the generalization of PGD to parameterized structural problems. This tool is applied to model the macroscopic behavior of architectured materials, where the parameters describe the shape and structural properties of the microstructure. This allows explicitly reproducing the response surfaces for the quantities of interest (e.g., the different orthotropic Poisson's ratios if the goal is obtaining specific auxetic properties) in terms of the design parameters. In other words, the quantities of interest are explicitly represented by computational vademecums. Consequently, the inverse problem corresponding to optimal material design is solved as a trivial post-process. 3D printing opens the possibility of manufacturing any of these materials for arbitrary values of the parameters, and therefore this technique is currently extremely pertinent: any optimal configuration proposed by the methodology can easily be brought into reality. The possibility of explicitly representing the dependence of the macroscopic response as a function of the parameters describing the metamaterial cells is a promising tool. So far, the capabilities of this technology are demonstrated for 2D tessellations of seed cells with linear behavior. The extension to 3D complex structures is conceptually straightforward and opens the door to very interesting applications. Modeling the nonlinear regime is also important to properly describe the full range of applications; the nonlinear generalization is less obvious but deserves devoted research efforts.

The alternated directions nonlinear solver (Algorithm 1) and the PGD compression (Algorithm 2) are detailed in "Appendix A".

In "Appendix B", the constraints contain equilibrium and periodicity conditions, given a specific load case in the unit-cell. If the constraints are applied using a direct method [11], then the system unknowns can be split into released and constrained parts,

U = [U_R^T  U_C^T]^T,    (41)

and the system can be solved for the released unknowns only:

K_R(μ) U_R(μ) = F_R(μ) − K_RC(μ) U_C(μ),    (42)

where, while K(μ) ∈ R^{n_dof×n_dof} for all μ ∈ D = I_1 × I_2 × ⋯ × I_{n_p}, the reduced matrix satisfies K_R(μ) ∈ R^{n_R×n_R}. In the unit-cell problem, n_dof = 24 is reduced to n_R = 13. The reduced system can now be solved symbolically with the parameters μ = [t a b α]^T. With this, the parametric solution for case XX is obtained in closed form; Figure 25 shows a scaled analytical solution for case XX, evaluated at the parameters b = 1, t = 1/40, a = 0.5, α = 60° as an example.
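A minimal numpy rendering of the direct elimination (41)-(42): the prescribed (constrained) displacements are moved to the right-hand side and only the released block is solved. Sizes and values are placeholders (the actual unit-cell reduces n_dof = 24 to n_R = 13).

```python
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_R = 6, 4                      # toy sizes
A = rng.normal(size=(n_dof, n_dof))
K = A + A.T + n_dof * np.eye(n_dof)    # symmetric, diagonally boosted stiffness
F = rng.normal(size=n_dof)

R = np.arange(n_R)                     # released dofs
C = np.arange(n_R, n_dof)              # constrained dofs with prescribed values
U_C = np.array([0.1, 0.0])             # e.g. periodicity / load-case values

# (42): K_R U_R = F_R - K_RC U_C
K_R = K[np.ix_(R, R)]
U_R = np.linalg.solve(K_R, F[R] - K[np.ix_(R, C)] @ U_C)

U = np.empty(n_dof)
U[R], U[C] = U_R, U_C                  # reassemble the full vector as in (41)
print(U)
```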
Finally, the components of the effective elasticity matrix are calculated from (37) and are shown in Eqs. (43)-(48). It is worth mentioning that these results are affected by a factor of E, the Young's modulus of the constituent material, which is omitted here for the sake of clarity.
SecureLoop: Design Space Exploration of Secure DNN Accelerators

Deep neural networks (DNNs) are gaining popularity in a wide range of domains, ranging from speech and video recognition to healthcare. With this increased adoption comes the pressing need for securing DNN execution environments on CPUs, GPUs, and ASICs. While there are active research efforts in supporting a trusted execution environment (TEE) on CPUs, the exploration in supporting TEEs on accelerators is limited, with only a few solutions available [18], [19], [27]. A key limitation along this line of work is that these secure DNN accelerators narrowly consider a few specific architectures. The design choices and the associated cost for securing these architectures do not transfer to other diverse architectures. This paper strives to address this limitation by developing a design space exploration tool for supporting TEEs on diverse DNN accelerators. We target secure DNN accelerators equipped with cryptographic engines where the cryptographic operations are closely coupled with the data movement in the accelerators. These operations significantly complicate the scheduling for DNN accelerators, as the scheduling needs to account for the extra on-chip computation and off-chip memory accesses introduced by these cryptographic operations, and even needs to account for potential interactions across DNN layers. We tackle these challenges in our tool, called SecureLoop, by introducing a scheduling search engine with the following attributes: 1) it considers the cryptographic overhead associated with every off-chip data access, 2) it uses an efficient modular arithmetic technique to compute the optimal authentication block assignment for each individual layer, and 3) it uses a simulated annealing algorithm to perform cross-layer optimizations. Compared to conventional schedulers, our tool finds schedules for secure DNN designs with up to 33.2% speedup and 50.2% improvement of energy-delay-product.

CCS CONCEPTS
• Computer systems organization → Neural networks; Data flow architectures; • Security and privacy → Security in hardware.

INTRODUCTION

Deep neural networks (DNNs) are increasingly deployed in security-critical applications that process sensitive user information or make high-stakes decisions. However, security threats exploiting hardware-level vulnerabilities can undermine the privacy and integrity necessary for such applications. For example, prior work has shown that the confidentiality of DNN models and training data can be leaked via bus snooping attacks and cold boot attacks [11]. Moreover, the integrity of DNN models can be tampered with via data corruption attacks and RowHammer attacks [16,17,31,44,52], leading to malfunctioning DNNs that produce either extremely low accuracy or biased outputs on the assigned tasks.
An appealing solution to these security threats is to provide a trusted execution environment (TEE) for DNN computation. There have been extensive research efforts in supporting TEEs on CPUs, including commercialized solutions by major chip vendors and various open-source solutions from academia, such as Intel SGX [7] and Keystone [26]. These solutions often rely on cryptographic operations (in software or hardware) to perform encryption and authentication for off-chip data accesses [9,10,37,39,45,51]. However, as those solutions target general-purpose applications, they cannot match the high data-intensity of DNN applications, let alone be used in DNN accelerators. As such, researchers have been working on customizing TEE solutions for DNN accelerators [18,19,27]. These schemes leverage the structured and predetermined data access patterns of DNN accelerators and derive a coordination plan between data movement and cryptographic operations. As a result, the cryptographic operations and the data movement in DNN accelerators become closely coupled.

However, there exists one key limitation of all existing works. Recent years have witnessed innovations in DNN accelerators with various designs and deployment setups, ranging from high-performance data centers to low-power edge devices [5,6,13,22,30]. DNN accelerator architectures can vary significantly in terms of dataflow, PE, and on-chip buffer organizations. Unfortunately, the existing works on secure DNN accelerator designs only considered a few specific architectures [22] as their baseline designs, and it is difficult to generalize the cost of securing those designs to other diverse DNN accelerators with distinct performance goals and area/energy budgets. This paper aims to address this limitation and develop a design space exploration tool for secure DNN accelerators. We identify several challenges in developing such a tool, especially related to identifying the optimal scheduling of the workload.

Challenges

Secure DNN accelerators need to include on-chip cryptographic engines that perform encryption and authentication operations. To ensure data integrity, a cryptographic hash is introduced that is associated with each block of data (called an authentication block) and is used to verify the integrity of the data before performing any computation on it. For data confidentiality, this process requires the decryption of data flowing from DRAM to on-chip buffers and the encryption of data flowing in the opposite direction. When fetching a unit of data from DRAM to on-chip buffers, we need to fetch the whole authentication block containing this unit of data along with its corresponding hash. Upon writing data back to DRAM, a new hash needs to be computed based on the whole block of data and written back together with the data. The cryptographic operations described above introduce extra on-chip computation and additional off-chip memory accesses.

When designing a design space exploration tool for secure DNN accelerators, in addition to counting the performance overhead of cryptographic operations, we need to tackle an important yet unexplored research challenge. The challenge arises when the authentication block is not fully aligned with the tiling of data (tiles are the unit of data movement between memory levels in DNN accelerators). Such misalignment between the authentication blocks and the tiles leads to fetching redundant data for the purpose of performing cryptographic authentication rather than DNN computation.
This challenge is further exacerbated by cross-layer dependency among the layers in a DNN. Specifically, the output feature map of one layer is used as the input feature map of the next layer, and hashes are computed and associated with fixed authentication blocks when the output feature map is generated. Since tile assignment is done independently for consecutive layers in traditional DNN scheduling, misalignment in tiling between one layer's output feature map and the next layer's input feature map introduces additional challenges for assigning authentication blocks. Furthermore, cross-layer dependency due to authentication blocks also implies that the schedules of consecutive layers are intertwined, exponentially increasing the search space for scheduling.

This Paper

In this work, we present a framework for design space exploration of secure DNN accelerators, enabling systematic investigation of the performance, area, and energy trade-offs of supporting a TEE in different DNN accelerator designs. A fair comparison among different designs requires a scheduling algorithm that can elicit the best possible performance of an accelerator design for a given DNN workload [4,8,15,20,32].

At the core of our framework is a scheduling search engine with three steps. First, we start by augmenting a baseline DNN scheduler with the capability to take the performance and energy overhead of the cryptographic engines into account. Next, we formulate the authentication block assignment problem as a mathematical problem that can be solved analytically with a computationally efficient algorithm, and we figure out the optimal authentication block size for each datatype and layer. Finally, we solve the cross-layer optimization of the overall scheduling using simulated annealing, a heuristic-based search algorithm, to trade off between search time and the quality of results.

We implement the SecureLoop framework on top of an existing scheduling tool, Timeloop [32]. We show that, compared to a baseline scheduler that targets traditional DNN accelerators without cryptographic support, our cryptographic-engine-aware scheduler can find better schedules for secure DNN designs, with up to 33.2% speedup and 50.2% improvement of energy-delay-product. We use our tool to perform a thorough design space exploration across multiple DNN workloads, and we derive the area versus performance trade-offs for different secure accelerator designs, providing insights on which designs lie on the Pareto front of this trade-off curve.
BACKGROUND

2.1 DNN Accelerator Design Space Exploration

DNNs [14,25,28,36,38,41] are comprised of multiple layers. A multi-dimensional convolutional (CONV) layer is widely used for image and video processing applications. The computation of a 2-dimensional CONV layer (shown in Fig. 1a) involves taking a 3-dimensional H′ × W′ × C tensor called the input feature map (ifmap) and M 3-dimensional tensors called weights, each of size R × S × C, and performing convolution operations to produce a tensor called the output feature map (ofmap) of size H × W × M. Fully-connected layers that compute matrix multiplication can also be described in this form by setting H, W, R, and S to 1, and M and C to be the sizes of the ofmap and ifmap vectors.

DNN accelerators are designed to exploit the substantial data reuse within this multi-dimensional convolution and matrix multiplication computation. Given an architecture specification, such as the number of processing elements (PEs) and on-chip buffer sizes, a designer aims to optimize the performance and energy efficiency of an accelerator by figuring out the optimal schedule for the given DNN workload. A schedule describes how the computations and data movement are temporally and spatially mapped to hardware resources, and it can be succinctly formulated using a nested for-loop called a "loopnest" [32]. For example, in Fig. 1, we show an example architecture specification (Fig. 1b) and a sample loopnest schedule (Fig. 1c) corresponding to this architecture's memory hierarchy. The loopnest schedule in Fig. 1c describes the tiling strategy between memory levels and the multiply-and-accumulate compute order. Note that a schedule is also referred to as a mapping in the literature [20,32].

Prior work acknowledges that the schedule search space for DNN accelerators is large, and efficiently searching for the optimal schedule presents a research challenge [4,8,15,20,32]. Several methodologies have been proposed. For example, Timeloop [32] used brute-force search over all possible loopnests and supported approximate methods like random pruning to reduce the search time. CoSA [20], on the other hand, formulated the search problem as a constrained-optimization problem that can be solved using integer programming techniques. Furthermore, other classes of schedulers proposed to use machine-learning-driven approaches, such as [15]. The goal of this paper is to augment design space exploration tools with the capability to take data encryption and authentication into account and find the optimal schedules for secure DNN accelerators.

Memory Encryption and Authentication

We consider both the confidentiality and integrity of data stored in the off-chip DRAM. A trusted execution environment (TEE) assumes that the on-chip structures are trusted and the off-chip memory is insecure. To ensure the confidentiality and integrity of data stored in the off-chip DRAM, cryptographic primitives, such as authenticated encryption, are often used.

An authenticated encryption scheme takes a plaintext, a secret key, and an encryption seed as inputs, and computes a ciphertext and a hash. Fig. 2 depicts the interface of a cryptographic engine that implements such a scheme, with explicit annotations on where each type of data is located. The hash is stored off-chip and is used to verify the integrity of the ciphertext. The encryption seed is composed of a counter, the address of the data, and a randomly generated initialization vector. The counter serves as a version number for the data and is incremented every time the accelerator generates a new version of the data. Since DNN accelerators use explicit data orchestration [34] and the accelerators have full knowledge of the version number, recent works [18,19,27] propose to track the counter using on-chip structures or the host CPU. Therefore, we assume the counters can be computed and accessing them does not incur complicated off-chip accesses.
All datatypes in a DNN (i.e., weights, ifmaps, and ofmaps) are in plaintext when they are stored and processed on-chip. When ofmaps or intermediate partial sums are generated and need to be written back to DRAM, the cryptographic engine computes the ciphertext and hash corresponding to the data. When data is fetched in the opposite direction, the accelerator retrieves the ciphertext data along with its associated hash from DRAM and feeds both into the cryptographic engine. The cryptographic engine validates the integrity of the ciphertext data against its hash and decrypts the data before supplying it to the on-chip components.

AES-GCM. There are several standardized protocols for authenticated encryption; among them, AES-GCM (Galois Counter Mode) has been widely used for its appealing performance characteristics [37,51]. As shown in Fig. 2, an AES-GCM block is primarily composed of an AES engine and a Galois-field multiplier. The encryption seed is fed into the AES engine to generate a one-time pad. Then, the one-time pad is XOR-ed with the plaintext to obtain the ciphertext, and vice versa. A hash is computed from the ciphertext using Galois-field multiplication.

When performing design space explorations for secure DNN accelerators, we need to account for the overhead introduced by the cryptographic engine. There exists a variety of AES implementations with diverse performance and area characteristics. For example, Fig. 3 compares AES hardware accelerator implementations published between 2001 and 2018 in the circuits literature [2,3,12,29,42,53]. It shows a clear trade-off between performance and area, where performance is measured by the average latency of encrypting/decrypting a 128-bit block (y-axis) and area is measured by the number of equivalent gates, to allow a fair comparison among different technologies (x-axis). Our design space exploration tool, SecureLoop, can help select the appropriate cryptographic engine architecture to achieve an optimal performance/area trade-off.
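To make the authenticated-encryption interface concrete, the sketch below exercises the same AES-GCM scheme in software with the widely used Python `cryptography` package. Packing the counter and data address into the 12-byte GCM nonce is an illustrative stand-in for the encryption seed described above, not the accelerator's exact format.

```python
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)     # secret key held on-chip
aesgcm = AESGCM(key)

# Illustrative encryption seed: counter (version) + data address,
# packed into the 12-byte GCM nonce. This packing is an assumption.
counter, address = 7, 0x1000
nonce = struct.pack(">IQ", counter, address)  # 4 + 8 = 12 bytes

tile = bytes(range(64))                       # a 64-byte "tile" of plaintext
ct_and_tag = aesgcm.encrypt(nonce, tile, None)  # ciphertext || 16-byte tag

# On fetch, verification and decryption happen together; a corrupted
# ciphertext or tag raises cryptography.exceptions.InvalidTag.
plaintext = aesgcm.decrypt(nonce, ct_and_tag, None)
assert plaintext == tile
```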
MOTIVATION AND GOALS

We aim to develop a design space exploration tool for secure DNN accelerators, equipped with a search algorithm that identifies the optimal scheduling considering the unique properties of secure DNN accelerators. In this section, we point out that a cryptographic engine, often considered a low-cost add-on to a predefined DNN accelerator in prior work [18,19,27], can pose significant overhead to different designs. Besides, we point out that authentication block assignments introduce a significant amount of complexity to the scheduling search space that our tool needs to navigate.

Overhead Due to Cryptographic Engines

Existing work on designing secure DNN accelerators [18,19,27] overlooks the fact that cryptographic engines can pose non-trivial overhead to the performance, energy, and area of the accelerator design and can significantly shift the optimal design choices. As shown in Fig. 3, existing cryptographic engines do not achieve area-efficiency while attaining high performance at the same time. To make the point clearer, consider the DNN accelerators that target low-power and resource-constrained embedded platforms and IoT devices, such as Eyeriss [6] and other designs [13,30]. To augment these accelerators with cryptographic engines to support a TEE, we can, for example, use one AES-GCM engine that handles encryption and authentication for each datatype (i.e., ifmap, ofmap, and weight), as in [27]. When each AES-GCM engine is composed of a fully-pipelined AES engine and a single-cycle Galois-field multiplier [2], this configuration requires 416.7 kGates in area, approximately 35% of the logic gates in Eyeriss [6], incurring extensive area overhead.

We note that prior work [18,19,27] only considered solutions for power-hungry accelerators with large silicon area (e.g., > 100 mm²), such as the TPU [22], and those design choices are not transferable to low-power and energy-efficient accelerators. Furthermore, the throughput of cryptographic engines has a non-trivial impact on loopnest scheduling. As cryptographic operations such as encryption/decryption and authentication accompany off-chip accesses, the supply of off-chip data to a DNN accelerator can be throttled by cryptographic engines with insufficient throughput. So far, we have shown that cryptographic engines complicate the design space of secure DNN accelerators. Our tool, SecureLoop, strives to perform a holistic assessment of the overhead due to cryptographic engines.

Authentication Block Assignment

Authentication block assignment is a critical challenge for our design space exploration tool, as it extensively complicates the scheduling for secure DNN accelerators. Recall from Section 2.2 that, to perform memory authentication, a hash is computed for each block of off-chip data to verify its integrity. We call the unit of data that a hash is associated with an authentication block, or AuthBlock for short.

In prior work [18,19], authentication blocks are assigned using a strategy we refer to as "tile-as-an-AuthBlock". Specifically, in DNN accelerators, data is grouped into tiles and off-chip access is performed at the granularity of a tile; the size of the tile can be chosen to optimize for data reuse. The "tile-as-an-AuthBlock" strategy assigns authentication blocks to exactly match each datatype's tile organization.

3.2.1 Cross-layer dependency. "Tile-as-an-AuthBlock", as a simple strategy, minimizes the amount of hash reads for an individual DNN layer. However, it can incur unforeseen overhead due to cross-layer dependency. Cross-layer dependency arises from the characteristic of a DNN that the output feature map (denoted as ofmap) of one layer serves as the input feature map (denoted as ifmap) of the next layer. Fig. 4 provides an example to illustrate how such a dependency complicates the data traffic due to the AuthBlocks. Consider a piece of data: when served as ofmap, the tiling strategy divides the data into 1×3 tiles; when served as ifmap, the tiling strategy divides the data into 2×2 tiles. We run into a situation where we need to find an AuthBlock assignment strategy for the same piece of data that will be accessed by the accelerator with distinct patterns. If we follow the "tile-as-an-AuthBlock" strategy as in prior work and assign AuthBlocks according to the ofmap tiles, we end up with a significant amount of redundant accesses when the data is served as ifmap.
As shown in Fig. 4(c), when the accelerator fetches the first ifmap tile for DNN computation, it is forced to fetch the whole of AuthBlocks 1 and 2, doubling the off-chip traffic.

One workaround to reduce the redundant data accesses is to allow two different AuthBlock assignments for the same piece of data, which requires a potentially high-cost "rehash" operation. Specifically, the AuthBlock assignment of the data is first optimized for ofmap access patterns (e.g., using "tile-as-an-AuthBlock"). Before the data is used as ifmap, the accelerator reads the data in, fully decrypts it, and re-assigns hashes based on a different AuthBlock organization that is optimized for ifmap access patterns. Rehashing introduces extra delays and off-chip traffic, degrading the overall performance. Thus, to avoid rehashing, we aim to find a unified AuthBlock assignment that considers the different tiling strategies of one layer's ofmap and the next layer's ifmap.

Halos. Another problem that the "tile-as-an-AuthBlock" strategy faces concerns convolution accelerators that directly perform CONV layers, instead of converting them to matrix multiplications using im2col. Fig. 5 compares how tiles are organized for the two different types of accelerators. Fig. 5(a) shows that, due to coarse-grained tiling, accelerators dedicated to convolutions can have overlaps between tiles, especially in the ifmap datatype. We refer to the overlapping region as a "halo" throughout this paper. In contrast, in the matrix multiplication case shown in Fig. 5(b), each element exclusively belongs to one tile and there does not exist any overlap between tiles. The existence of halos makes "tile-as-an-AuthBlock" an unappealing strategy. If we allow two AuthBlocks to share the overlapping data, we are forced to duplicate the halo data by encrypting and authenticating it at least twice using different encryption seeds, which are composed of different counters, addresses, and initialization vectors. As a result, both the off-chip traffic and the memory footprint overhead increase. Alternatively, not duplicating the halo data can result in large redundant reads if some AuthBlocks span both the non-overlapping data and the halo data in one tile. In SecureLoop, we aim to search for the AuthBlock assignment that minimizes the additional off-chip traffic caused by halos.

Goal of AuthBlock Assignment. To summarize, AuthBlock assignment poses a critical challenge in identifying the optimal scheduling for secure DNN accelerators, primarily for two reasons. First, the misalignment between AuthBlocks and data tiles, caused by cross-layer dependency or halos, leads to redundant data fetches for cryptographic authentication rather than DNN computation. Second, cross-layer dependency due to the AuthBlock assignment implies that the loopnest scheduling of two layers becomes fundamentally intertwined. There might be a loopnest schedule for one layer that is not optimal on its own, but results in better overall performance when it is considered together with its next layer.
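The first effect can be made concrete with a toy calculation in the spirit of Fig. 4 (the 6x6 map and tile shapes below are illustrative, not the paper's measured workloads): AuthBlocks follow the 1x3 ofmap tiling, and every ifmap tile that touches an AuthBlock forces the whole block to be fetched.

```python
# Toy model of redundant reads under "tile-as-an-AuthBlock" with a
# cross-layer tiling mismatch (a sketch, not the paper's artifact code).

def tiles(h, w, th, tw):
    """Enumerate non-overlapping th x tw tiles as sets of (row, col) elements."""
    return [{(r, c) for r in range(r0, r0 + th) for c in range(c0, c0 + tw)}
            for r0 in range(0, h, th) for c0 in range(0, w, tw)]

H = W = 6
auth_blocks = tiles(H, W, 6, 2)   # AuthBlocks = ofmap tiles (a 1 x 3 grid)
ifmap_tiles = tiles(H, W, 3, 3)   # ifmap tiles of the next layer (2 x 2 grid)

total_fetched = 0
for tile in ifmap_tiles:
    # Any AuthBlock intersecting the tile must be fetched in full.
    fetched = set().union(*(b for b in auth_blocks if b & tile))
    total_fetched += len(fetched)

useful = sum(len(t) for t in ifmap_tiles)
print(f"useful reads: {useful}, fetched: {total_fetched}, "
      f"redundant: {total_fetched - useful}")   # 36 useful vs. 60 redundant
```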
Our design space exploration tool, SecureLoop, aims to search for the optimal AuthBlock assignment strategy to reduce off-chip traffic and maintain high performance. We consider the impact of both the size and the orientation of the AuthBlocks and examine how they affect the additional off-chip traffic. In addition, we consider cross-layer dependency directly at the loopnest scheduling level, and search for schedules that optimize global performance rather than single-layer performance. In Section 5.1, we demonstrate that using an optimal AuthBlock assignment and performing the cross-layer optimization can provide 3-33% faster schedules and reduce the additional off-chip traffic from cryptographic operations by 37-94% compared to the "tile-as-an-AuthBlock" strategy.

SECURE ACCELERATOR SCHEDULING
We present SecureLoop, a design space exploration tool that is equipped with a scheduling search engine (Fig. 6) for secure DNN accelerators. First, we introduce a simple model to estimate the performance and energy overhead of various cryptographic engines. The estimated cost is used to properly configure the architecture parameters, such as the off-chip bandwidth, of existing loopnest scheduling algorithms. This approach is general enough to be compatible with a broad range of existing loopnest scheduling algorithms, such as Timeloop [32] and CoSA [20].

Second, we design a methodology to search for the optimal authentication block assignment that takes both the size and the orientation of AuthBlocks into consideration. The key research challenge is that counting the amount of extra off-chip traffic caused by integrity verification via detailed simulation has scalability issues and cannot cope with a large search space. The approach that we take to address this scalability issue is to formulate the counting problem as a mathematical linear congruence problem and solve it efficiently.

Finally, we design a cross-layer fine-tuning stage to optimize both the scheduling and the authentication block assignment strategy for cross-layer dependencies. The research challenge here is that the search space is amplified exponentially when we consider multiple layers together, especially for DNN workloads with a large number of layers, such as MobilenetV2 [41]. We use simulated annealing, heuristically defining neighboring loopnest configurations, to search for the final schedule.
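As a preview of the first step (detailed in the next subsection), the cryptographic engine is folded into scheduling as a single effective-bandwidth constraint. A minimal sketch, with all numeric values hypothetical:

```python
# Effective off-chip bandwidth of a secure accelerator: the slower of the
# DRAM interface and the cryptographic engine bounds how fast off-chip data
# can actually be consumed (values below are illustrative assumptions).

def effective_offchip_bandwidth(dram_bw_bytes_per_cycle: float,
                                crypto_bytes_per_cycle_per_engine: float,
                                num_engines: int) -> float:
    crypto_bw = crypto_bytes_per_cycle_per_engine * num_engines
    return min(dram_bw_bytes_per_cycle, crypto_bw)

# Example: LPDDR4 at 64 B/cycle with a single engine that en/decrypts one
# 16-byte AES block every 2 cycles -> the scheduler must assume 8 B/cycle.
print(effective_offchip_bandwidth(64, 16 / 2, 1))  # -> 8.0
```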
A Model for Cryptographic Operations
We aim to identify the loopnest schedules for secure accelerators by leveraging existing DNN loopnest schedulers. Recall that the difference between a secure DNN accelerator and a traditional accelerator is the extra cryptographic operations performed by the augmented cryptographic engine. The DNN loopnest schedulers have to be modified to account for those cryptographic operations. We adopt a simple and integrable solution that models the cryptographic operations as an additional constraint upon the off-chip DRAM bandwidth. Given that each off-chip data access needs to go through both the DRAM interface and the cryptographic engine, the slower of the two components limits the off-chip bandwidth. Thus, we derive the effective off-chip bandwidth of a secure accelerator by taking the minimum of the memory bandwidth and the cryptographic engine bandwidth. This effective bandwidth replaces the original memory bandwidth for loopnest scheduling purposes. Such an approach is highly compatible with loopnest search tools whose internals may vary significantly. Besides, this approach is in line with an assumption common among existing search tools, namely that different hardware components on the DNN accelerator are appropriately pipelined with negligible pipelining overhead (e.g., using techniques such as double-buffering or buffets [34]).

Mathematical Formulation for Authentication Block Assignment
In the second step of our scheduler, we aim to determine an optimal authentication block assignment strategy that minimizes the additional off-chip traffic caused by data authentication, and thus minimizes the extra overall performance overhead. This step requires performing an exhaustive search over all feasible AuthBlock sizes and orientations for each layer and datatype (i.e., weight, ifmap, and ofmap). Such a search poses a serious scalability issue, which we address with a mathematical formulation of the problem.

The Search Space. We start by describing what the search space for authentication block assignment looks like. Given the nature of the memory authentication operation, it introduces additional off-chip memory accesses, classified into two categories: first, extra accesses to fetch the hashes; second, extra accesses to fetch data that is not needed for the actual DNN computation, but is needed for integrity verification because it lies within the same authentication block as the data used by the accelerator. We refer to these two types of overhead as hash reads and redundant reads, respectively.

There exists a non-trivial search space for authentication block assignment, because both the size and the orientation of the authentication block affect the off-chip traffic overhead. We provide examples in Fig. 7 to illustrate the search space and highlight the trade-off between hash reads and redundant reads under different AuthBlock assignments. In each figure, we highlight the first ifmap tile in orange, mark each authentication block with solid blue lines, and list the hash reads and redundant reads at the bottom of each assignment.

In Fig. 7(a) and (b), the example resembles the cross-layer dependency case described in Section 3.2, where the AuthBlock is assigned according to the ofmap tiling. Since there are two AuthBlocks in total, the hash reads overhead is low. However, large redundant reads are incurred, as all data belonging to AuthBlocks 1 and 2 has to be fetched when accessing the first tile.
Fig. 7(c) and (d) compare two cases of using horizontal AuthBlocks of varied size. In (c), an extreme case, the AuthBlock size is 1, meaning each element is assigned its own hash, resulting in high hash reads overhead with zero redundant reads. In (d), when we increase the AuthBlock size from 1 to 2, the hash reads are reduced by half, but we start to have redundant reads because some of the AuthBlocks span across the boundary of the first tile. These two cases clearly demonstrate the impact of the size of AuthBlocks: when the AuthBlock size increases, the hash reads decrease, but the redundant reads can increase.

To further complicate the space, the orientation of the AuthBlock also matters. Fig. 7(e) shows vertical AuthBlocks with a size of 3. This strategy happens to be ideal, because every AuthBlock resides exactly within the first ifmap tile, leading to no redundant reads. Meanwhile, since the AuthBlock size is 3, it has 1/3 of the hash reads overhead compared to the horizontal size-1 strategy shown in Fig. 7(c). However, if we increase the size of the vertical AuthBlock to 6, as in Fig. 7(f), the amount of redundant reads increases.

In summary, both the orientation and the size of an AuthBlock affect the off-chip traffic overhead. We perform an exhaustive search to identify the optimal AuthBlock assignment, and the search has to be efficient.

Mathematical Formulation. We now describe our mathematical formulation of the authentication block search process. This formulation enables us to solve the problem analytically. Our formulation can be applied to the two cases where redundant reads occur: 1) cross-layer dependency, and 2) halos (Section 3.2).

In both cases, we are given a piece of data and its tile organizations in which several tiles overlap (e.g., overlaps between the ofmap tiles and the ifmap tiles for cross-layer dependency, or overlaps among the ifmap tiles for halos). We are asked to calculate the number of redundant reads and hash reads for each AuthBlock assignment. Since each time an AuthBlock is accessed all the elements in that AuthBlock need to be fetched together, we reduce the problem of calculating redundant reads to counting the number of AuthBlocks that overlap with each tile.

We convert the above problem into a linear congruence problem using the example in Fig. 8. The example shows a 2D tensor with two overlapping tiles, tile_A and tile_B. For illustration purposes, assume the two tiles have the same height h and different widths, denoted w_A and w_B. Consider the case where we assign horizontal AuthBlocks to fully cover tile_A, so that no redundant access is needed when accessing tile_A (if tile_A is the ofmap tile, this is the natural scenario, as hashes will be computed as the ofmap is generated). These AuthBlocks may not fully align with the boundary of tile_B, and thus we need to handle the case where an AuthBlock crosses the boundary of tile_B. An AuthBlock can overlap with tile_B under three conditions, shown in Fig. 8(b).
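Before formalizing these conditions, the quantity being counted can be checked by direct enumeration. The sketch below is a brute-force reference, under the assumptions that AuthBlocks of width s cover tile_A in row-major order and that tile_B shares the rightmost w_B columns of tile_A (the exact alignment in Fig. 8 is not recoverable from the text); the analytic formulation that follows replaces this enumeration.

```python
# Brute-force count of AuthBlocks that straddle tile_B's boundary (these
# are the blocks that cause redundant reads when tile_B is fetched).
import math

def straddling_blocks(h: int, w_a: int, w_b: int, s: int) -> int:
    n_blocks = math.ceil(h * w_a / s)
    b = w_a - w_b                      # column where tile_B begins (assumed)
    count = 0
    for k in range(n_blocks):
        cols = {(k * s + j) % w_a for j in range(s) if k * s + j < h * w_a}
        inside = any(c >= b for c in cols)    # element inside tile_B
        outside = any(c < b for c in cols)    # element outside tile_B
        if inside and outside:
            count += 1
    return count

print(straddling_blocks(h=30, w_a=30, w_b=20, s=7))
```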
We denote an AuthBlock by the (x, y) coordinates of its left and right edges: an AuthBlock has its left edge labeled (x_l, y_l) and its right edge labeled (x_r, y_r). The following conditions then precisely capture the three scenarios, where b = w_A − w_B denotes the column (in tile_A's coordinates) at which tile_B begins. The first two scenarios are straightforward, where either the right side or the left side of the AuthBlock overlaps with tile_B:

x_l < b ≤ x_r,   (1)
b ≤ x_l and x_r < b.   (2)

The third scenario describes an AuthBlock wrapping around the non-overlapping region, with both its left and right edges located in tile_B (which can only occur when w_A − w_B is smaller than the AuthBlock size):

b ≤ x_r < x_l.   (3)

With the above formulation, we set out to efficiently count the number of AuthBlocks that satisfy one of the three conditions. Assuming an AuthBlock configuration with a height of 1 and a varied width s, the edge coordinates of the k-th AuthBlock are x_l = (k × s) mod w_A and x_r = (k × s + (s − 1)) mod w_A, which can be plugged into the three conditions. Then, solving the inequalities for x_l and x_r, and converting them into linear congruences by listing all possible values satisfying the inequalities, we obtain a linear congruence problem of the form

k × s ≡ c (mod w_A),   (4)

for each admissible offset c determined by the inequalities above. The formula uses modular arithmetic, where a ≡ b (mod n) means that the remainders of a and b divided by n are equal. We then count how many values of k (0 ≤ k < ⌈(h × w_A)/s⌉) satisfy Eq. (4). This linear congruence problem can be solved efficiently using the extended Euclidean algorithm, which finds the greatest common divisor; we can use it to find all such k in log-linear time.

The above example demonstrates horizontal AuthBlock assignment and assumes 2D tiles of the same height. However, the methodology is general enough to apply to vertical AuthBlock assignments and to higher-dimensional tiles with arbitrary overlapping patterns. To generalize the problem, consider an N-dimensional tile. We search for AuthBlocks with N − 1 of the dimensions set to 1 and the remaining dimension varied; in essence, we are flattening an N-dimensional tensor into a 1-D vector and slicing it. Therefore, the computational complexity increases only linearly with the number of possible values of the AuthBlock width s, whose maximum value is capped by the number of elements in a tile, regardless of the dimension size.

Example of Analysis Results. In Fig. 9, we visualize the search space of AuthBlock assignment. The example follows the setup in Fig. 8 with h = 30, w_A = 30, and w_B = 20. We then sweep the AuthBlock size from 1 to 30 for the horizontal orientation (note that s > 30 results in the same redundant reads as "tile-as-an-AuthBlock"), and from 1 to 900 for the vertical orientation, where the upper bound corresponds to using the full tile as an AuthBlock, to see how these variations affect the overall off-chip traffic when accessing the misaligned tile. In both figures, we observe an inversely proportional relationship between the AuthBlock size and the amount of hash reads. When we use horizontal AuthBlocks, the overall trend between the redundant reads and the AuthBlock size is a positive linear relationship, but there exist several distinguishable local valleys. We observe that the optimal choice is to set s = 10, which hits a local minimum of the redundant reads while incurring a moderate level of hash reads overhead. When using vertical AuthBlocks, the trade-off space is rather irregular.
Since the two tiles in Fig. 8 have the same height, we periodically observe zero redundant reads whenever the AuthBlock size is a factor of h × (w_A − w_B) = 300. Using an exhaustive search, we identify the optimal AuthBlock size as 300.

Efficient Cross-layer Fine Tuning
Cross-layer dependency interweaves the loopnest scheduling of two consecutive layers. So far, we have derived the loopnest schedules with the best individual-layer performance in the first step of our scheduler, and identified the optimal AuthBlock assignment based on those loopnest schedules. However, the obtained schedule may not be the global optimum once the dependency is considered. To account for cross-layer dependency at the loopnest scheduling level, we introduce the third step in our scheduler, which fine-tunes the final schedule.

Challenges. Traditional schedulers for DNN accelerators usually search for the optimal scheduling of each layer independently, and they cannot be easily adapted to consider the influence of cryptographic operations. The search space for loopnest scheduling increases exponentially with the number of layers we have to search jointly. For brute-force search algorithms [4,8,32], the computational complexity imposed by cross-layer dependency can become prohibitive for deep models [14,41]. Other algorithms, such as optimization-based techniques [20], are unlikely to be applicable due to the AuthBlock assignment, as the mathematical formulation for the AuthBlock assignment (Section 4.2) cannot be easily reduced to a closed form. Moreover, it cannot be guaranteed to yield convex or linear constraints on the objective functions. There is only limited work on jointly searching the loopnest schedules of multiple layers, such as in the context of fused-layer processing [43]. We consider those efforts to be promising yet orthogonal to our work.

Simulated annealing is a probabilistic method for solving an optimization problem over a large search space. It iteratively samples neighbors of the current state and probabilistically decides whether to move to the new state. This probability is determined by the difference in the costs of the current state and the new state, and by a parameter called the temperature. The temperature is gradually decreased, such that suboptimal yet diverse states can be explored in the earlier iterations, while the best solutions are exploited in the later iterations.

Algorithm 1 describes our adaptation of the simulated annealing algorithm for cross-layer fine tuning. We denote by S_i* the optimal loopnest schedule of the i-th layer found in the first step without considering cross-layer dependency. Our algorithm attempts to identify a set of loopnest schedules (S_1, ..., S_N) that results in better performance than (S_1*, ..., S_N*) when all layers are considered together.
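A minimal sketch of this loop, in the style of Algorithm 1 (the function names and the linear cooling schedule follow the text; cost() stands in for the performance model combined with the optimal-AuthBlock evaluation):

```python
# Cross-layer fine tuning via simulated annealing (illustrative sketch).
import math, random

def fine_tune(top_k_schedules, cost, iters=1000, t_init=1.0, t_final=0.01):
    # Start from each layer's individually best schedule (index 0 = top-1).
    current = [layer[0] for layer in top_k_schedules]
    cur_cost = cost(current)
    temp = t_init
    for it in range(iters):
        i = random.randrange(len(current))                # pick one layer
        neighbor = list(current)
        neighbor[i] = random.choice(top_k_schedules[i])   # GetNeighbor
        new_cost = cost(neighbor)
        # Always accept improvements; accept worse schedules with a
        # probability that shrinks as the temperature cools.
        if new_cost < cur_cost or \
           random.random() < math.exp((cur_cost - new_cost) / temp):
            current, cur_cost = neighbor, new_cost
        temp = t_init + (t_final - t_init) * (it + 1) / iters  # linear cooling
    return current, cur_cost
```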
The algorithm starts by initializing the current set of loopnest schedules to (S_1*, ..., S_N*) and calculating its cost using the performance model and the optimal AuthBlock assignment (lines 1-2). Then, in each iteration, the algorithm randomly selects one layer i and a neighboring schedule S_i' for that layer (line 6). Observe that the key component of this algorithm is the heuristic used to propose a neighbor (the GetNeighbor function). There is no natural metric for measuring the similarity between two loopnest schedules, so our scheme uses per-layer performance as the similarity metric. Specifically, we obtain the top-k best loopnest schedules per layer from the first-stage loopnest scheduler, and the GetNeighbor function randomly samples among these loopnest schedules to get a neighbor. When searching among the top-k schedules for each layer, the search space contains exponentially many distinct combinations to be explored in a limited number of simulated annealing iterations. The new schedule, which replaces the loopnest schedule of the i-th layer with S_i', is probabilistically accepted (lines 9-12), and the temperature is decreased linearly (line 13).

Impact of Search Parameters. We examine how the search parameters affect the search result. Fig. 10 shows the performance improvement when the simulated annealing method is used with different values of k (the number of top schedules per layer that form the neighbor set). The numbers are for an architecture derived from Eyeriss [6] with a cryptographic engine using an energy-efficient AES-GCM implementation from [2] (detailed specifications in Table 2), running a MobilenetV2 [41] workload.

Increasing k from 1 to 2 improves the overall performance by about 5%. However, further increasing k yields diminishing returns, and the improvement stalls around k = 6. Considering that a larger k does not always result in better speedup but can substantially increase the search space size, we set k = 6 for subsequent experiments. Also, the number of iterations directly affects the search time; we use 1000 iterations as the default setup to trade off the quality of results against the search time.

Handling Post-processing Operations. Our cross-layer fine tuning needs to consider post-processing operations between consecutive layers, such as Batch Normalization [21], activation functions, and pooling operations. There are two cases.

First, some post-processing operations can be performed on-the-fly while the ofmap is being generated, and can thus be considered part of that layer. We consider Batch Normalization, ReLU activation, and adding zero pads around the feature map to fall into this case, as they are simple operations. We handle such operations using the cross-layer fine-tuning approach discussed above.

Meanwhile, some other post-processing operations cannot be performed on-the-fly together with in-layer computation, such as pooling operations and operations that combine several feature maps for residual connections. These operations require a separate computation step and inevitably trigger rehashing; consequently, the cross-layer dependency problem for AuthBlock assignment does not arise across them. As such, given a full DNN, we divide it into multiple segments based on the presence of such post-processing operations and apply fine tuning within each segment.
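The segmentation rule can be sketched as follows (the operator names and layer descriptions are illustrative assumptions, not the tool's actual data model):

```python
# Split a DNN into fine-tuning segments: on-the-fly ops stay inside a
# segment; ops needing a separate pass (pooling, residual add) force a
# rehash and therefore end the current segment.
ON_THE_FLY = {"batchnorm", "relu", "pad"}

def split_segments(layers):
    segments, cur = [], []
    for layer in layers:
        cur.append(layer["name"])
        if any(op not in ON_THE_FLY for op in layer.get("post_ops", [])):
            segments.append(cur)   # rehash boundary: close the segment
            cur = []
    if cur:
        segments.append(cur)
    return segments

net = [{"name": "conv1", "post_ops": ["batchnorm", "relu"]},
       {"name": "conv2", "post_ops": ["relu", "maxpool"]},
       {"name": "conv3", "post_ops": ["relu"]}]
print(split_segments(net))  # [['conv1', 'conv2'], ['conv3']]
```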
EVALUATION
We first present the effect of scheduling algorithms on the performance of a secure accelerator in Section 5.1. Then, we show the performance of diverse secure DNN accelerator designs that vary in the choice of cryptographic engines, the number of PEs, and the size of the on-chip global buffer (Section 5.2). From these experiments, we show the area-performance trade-off for secure accelerators and provide insights on the Pareto-optimal design points (Section 5.3).

Base Architecture Configuration. We consider diverse DNN accelerator designs in the following experiments, derived from a base configuration. As the base configuration, we use a spatial DNN accelerator with multiple processing elements (PEs), where each PE has an ALU and a small local memory, operating in parallel and organized as a 2-dimensional array. The base configuration has an on-chip SRAM buffer, and its data movement can be described by its dataflow. We set the base configuration to use the row-stationary dataflow from [6], 14 × 12 PEs, and a 131 kB on-chip global buffer.

Effect of the Scheduling Algorithm
We examine the effect of different scheduling algorithms (Table 1) on the performance of secure accelerators. As the baseline scheduling algorithm, we consider Crypt-Tile-Single, which uses Timeloop [32] with the effective bandwidth and energy for off-chip access reflecting the cryptographic operations, uses the "tile-as-an-AuthBlock" assignment strategy, and does not consider cross-layer dependency. We note that supplying the proper bandwidth and energy parameters to Timeloop is crucial to prevent suboptimal loopnest schedules from degrading the baseline performance, especially when the cryptographic engine has low throughput. We then add the second and third steps one by one, with the most optimized version denoted Crypt-Opt-Cross, which enables both the optimal AuthBlock assignment and the cross-layer search. The first step of our scheduler is implemented on top of Timeloop, with an extension to support top-k loopnest search and modifications to the effective energy and bandwidth for off-chip accesses. The second and third steps of our scheduler are implemented as independent modules that accept the top-k loopnest schedules for each layer as inputs, and return the final loopnest schedule and optimal AuthBlock assignments.

We evaluate our scheduling algorithms on the DNN accelerator design with the base configuration described above. The secure accelerator uses an area-efficient parallel AES-GCM implementation [2,3] as its cryptographic engine (one per datatype). For off-chip DRAM access, we assume LPDDR4 with a throughput of 64 B/cycle. Accelergy [49] is used to estimate the energy and area of each component on the DNN accelerator, assuming the 40/45 nm technology it supports.

Fig. 11(a) shows the slowdown of secure accelerators, i.e., the number of cycles to process a workload normalized to that of the baseline (unsecure) accelerator. Fig. 11(b) shows the additional off-chip traffic incurred by cryptographic operations for each scheduling algorithm. We examine three workloads with varying numbers of layers and characteristics: AlexNet [25], ResNet18 [14], and MobilenetV2 [41]. These workloads are mainly convolutional; note that we only consider the first 5 layers of AlexNet, which are convolutional.
First, our optimal AuthBlock assignment strategy reduces the additional off-chip traffic across all three DNN workloads compared to the "tile-as-an-AuthBlock" assignment. The benefit comes from two factors: 1) rehashing operations are not necessary between dependent layers, as the AuthBlocks are assigned by considering the mismatches between their tiling strategies, and 2) both redundant reads and hash reads are minimized without having to rehash or duplicate data. This step also reduces the slowdown by up to 29.9% compared to Crypt-Tile-Single. These two factors affect deeper workloads more significantly, and the benefit of the AuthBlock assignment is most visible for MobilenetV2.

Second, the cross-layer fine tuning of our scheduler primarily improves the performance of deep workloads like MobilenetV2, with an additional 3.3% improvement on top of Crypt-Opt-Single. Simulated annealing involves stochasticity when choosing a neighbor, and the performance gain from this step can vary due to randomness. Across 5 independent runs of simulated annealing, we observe that the slowdown for MobilenetV2 with Crypt-Opt-Cross varies from 9.76 to 9.99 with a standard deviation of 0.08; Fig. 11(a) reports the mean value. This step does not significantly affect the performance of a shallower workload like AlexNet, where the opportunity for cross-layer optimization is limited. Nevertheless, it is worth noting that this step reduces the additional off-chip traffic due to redundant reads and hash reads (excluding rehashing-related traffic) by 32.6% and 16.0% for AlexNet and ResNet18, respectively. Overall, our scheduler produces schedules that are up to 33.2% faster and 50.2% better in EDP compared to Crypt-Tile-Single.

Roofline Model. We can also use the roofline model [48] to reason intuitively about the impact of the scheduling algorithms (Fig. 12). On the left of Fig. 12, the roofline model describes the performance (y-axis) of each DNN workload as a function of its computational intensity (x-axis). The computational intensity is measured by the number of operations (e.g., multiplications and additions) per byte of DRAM traffic, and performance is measured by the number of operations per second, assuming a 100 MHz clock. Two solid lines illustrate the maximum possible performance: the horizontal solid line is determined by the number of PEs that can operate in parallel, and the slanted solid line represents the performance limited by the off-chip memory bandwidth. The dotted slanted line is based on the effective off-chip bandwidth of a secure DNN accelerator constrained by its cryptographic engine, assuming a single parallel AES-GCM engine processes every off-chip data transfer (in actual designs, each datatype has its own dedicated cryptographic engine, and the performance can be higher than this effective line). We observe that the workloads lie in the compute-bound region for the unsecure baseline accelerator, but throttling by the cryptographic engine pushes them into the effectively memory-bound region for secure accelerators. The right part of Fig. 12 zooms in to show the different scheduling algorithms for the MobilenetV2 workload, and shows that each step of our scheduler improves performance by finding schedules with higher computational intensity.
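The roofline arithmetic itself is short; the sketch below reproduces the effect with hypothetical numbers (the crypto throughput and the 20 ops/byte intensity are illustrative, not measured values from Fig. 12):

```python
# Roofline sketch: the cryptographic engine lowers the knee of the
# memory-bound region for a secure accelerator.
def attainable(intensity_ops_per_byte, peak_ops_per_sec, bw_bytes_per_sec):
    return min(peak_ops_per_sec, intensity_ops_per_byte * bw_bytes_per_sec)

CLOCK = 100e6                  # 100 MHz clock, as in the text
peak = 14 * 12 * CLOCK         # one op per PE per cycle (assumption)
dram_bw = 64 * CLOCK           # LPDDR4, 64 B/cycle
crypto_bw = 8 * CLOCK          # hypothetical engine throughput

for label, bw in [("unsecure", dram_bw), ("secure", min(dram_bw, crypto_bw))]:
    print(label, attainable(20, peak, bw))
# unsecure: compute-bound at 1.68e10 ops/s; secure: memory-bound at 1.6e10.
```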
Impacts on Architecture Configurations
We evaluate the impact of using different architecture configurations and cryptographic engines using SecureLoop.

Cryptographic Engine Configurations. We evaluate the impact of different cryptographic engine configurations, varying in their AES-GCM engine architecture and count, on the area overhead and the performance. We use three different AES-GCM engine implementations, summarized in Table 2. These designs have distinct characteristics in the area-throughput trade-off: the fully-pipelined design supports high throughput but incurs a large area overhead, whereas the serial design has both low area overhead and low throughput. The parallel design sits between the other two, with medium throughput and area overhead. The area of the AES-GCM engines is normalized to 40 nm technology using the equivalent number of gates [2,53]. We use the same accelerator architecture as in Section 5.1 and the Crypt-Opt-Cross scheduling algorithm. Fig. 13 compares the slowdown over the unsecure baseline design for each workload and the area overhead of each configuration. We find that similar performance can be obtained by configurations with very different area overheads. For example, the configuration with 30× serial AES-GCM engines has similar performance to the one with 1× parallel AES-GCM engine, although the two differ by 10× in area overhead. Thus, scaling out area-efficient yet low-throughput AES-GCM engines can be problematic for DNN accelerators, and using a moderate number of higher-throughput AES-GCM engines is often the better design choice.

Scaling the Number of Processing Elements. We examine accelerator designs that vary the number of PEs in the base configuration. We consider two cryptographic engine configurations: 1× pipelined AES-GCM engine and 1× parallel AES-GCM engine. Fig. 14 shows the evaluation results for the PE organizations 14 × 12, 14 × 24, and 28 × 24. The number of PEs determines the maximum possible performance of the accelerator if the memory bandwidth is sufficient, and this trend is well manifested for the unsecure baseline accelerators (the latency decreases almost by half as the number of PEs is doubled). However, since secure accelerators can be effectively bounded by the supply of decrypted data, the benefit of increasing the PE array size is not apparent for the design with a parallel AES-GCM engine. Thus, the performance of secure accelerators cannot be improved by adding PEs unless the cryptographic engine throughput is also increased.

Scaling the Size of the On-chip Buffer. The size of the on-chip SRAM buffer limits the maximum tile size for the ifmap and ofmap in the row-stationary dataflow architecture we use.
In Fig. 15, we examine the effect of different buffer sizes (131 kB, 32 kB, and 16 kB) on secure accelerators while the other design parameters are fixed. As we scale down the buffer size, the size of the tiles moved between off-chip memory and the on-chip buffer decreases, often resulting in larger off-chip traffic. For the unsecure baseline accelerators, larger off-chip traffic is not problematic because they have sufficient off-chip memory bandwidth. However, it can further throttle secure accelerators with limited encryption/decryption bandwidth, leading to longer latency for small buffer sizes.

Different DRAM Technologies. An off-chip DRAM with higher bandwidth does not necessarily improve the performance of secure accelerators, as the effective off-chip bandwidth is limited by the cryptographic engine. However, the energy for off-chip access is directly affected by the DRAM technology. To illustrate these two points, we experiment with three different DRAM configurations: LPDDR4 with 64 B/cycle throughput, LPDDR4 with 128 B/cycle throughput, and HBM2 with 64 B/cycle throughput. For the AlexNet workload, we observe that the DRAM bandwidth does not affect the latency or energy of secure accelerators. HBM2 has lower energy per access compared to LPDDR4, so the energy of both the unsecure baseline and the secure accelerators decreases relative to LPDDR4, while the latency is not affected.

Impact of TEE Entry/Exit. Entering and exiting a TEE can affect performance when the full system is considered end-to-end. Previous work that examined the end-to-end overhead of supporting a TEE for accelerators [27] showed that the initial transfer of DNN weights to the accelerator context is the major source of latency for entry. We note that this transfer latency may not vary significantly across different accelerator architectures, as the transfer is determined by the model parameter size and the host CPU. Furthermore, when an accelerator serves multiple inference requests using the same DNN, this initial transfer cost of the model parameters can be negligible compared to the overall execution time. Thus, we expect that TEE entries/exits do not significantly affect the optimal design of secure accelerators.

Area vs. Performance Trade-off
Finally, we plot the area vs. latency (for the AlexNet workload) trade-off curve for the designs discussed so far in Fig. 16. We also derive the Pareto front of this trade-off curve and observe characteristics of the optimal and suboptimal points. First, designs with a small on-chip buffer but a high-throughput cryptographic engine (i.e., pipelined AES-GCM engines) are often optimal. As observed in Fig. 15, performance does not degrade much when scaling down the buffer size as long as the cryptographic engine provides sufficient throughput. Thus, dedicating more area to the cryptographic engine by reducing the on-chip buffer size can provide a good trade-off. Besides, designs with larger PE array sizes (e.g., 14 × 24 or more) but a low-throughput cryptographic engine end up as suboptimal points. This observation agrees with Fig. 14: the benefit of higher parallelism cannot be realized when cryptographic engines are the bottleneck.
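Deriving the Pareto front over (area, latency) pairs is a standard dominance filter; a minimal sketch with placeholder design points (not measured values from Fig. 16):

```python
# Keep only designs that are not dominated in both area and latency
# (smaller is better in both dimensions).
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

designs = [(1.0, 10.0), (1.2, 6.0), (2.0, 6.5), (0.9, 14.0)]  # (area, latency)
print(pareto_front(designs))  # (2.0, 6.5) is dominated by (1.2, 6.0)
```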
RELATED WORK
We now discuss related work on supporting TEEs on general-purpose processors and DNN accelerators. We also cover alternative approaches to secure DNN computation.

Supporting a TEE for CPUs and GPUs. Many optimizations have been proposed to reduce the overhead of supporting a TEE in general-purpose processors, such as CPUs and GPUs. Several works proposed techniques to reduce the overhead of traversing a Merkle tree in CPUs [9,37,51]. Different counter formats were proposed to allow a more compact Merkle tree [39,45]. Recent works extended TEEs to GPUs and accelerators with a trusted I/O path between a host CPU and these accelerators [1,47].

Tree-less Verification for DNN Accelerators. Recent works showed that the counters do not have to be stored and can instead be calculated from the computation pattern of DNN accelerators [18,19,27], removing the need for a Merkle tree. [18,19] tracked the counters with an MCU on the accelerator, and [27] proposed an external host processor to supply the counters. [18,19] further proposed "tile-as-an-AuthBlock", and we considered this strategy as the baseline in our evaluation.

Other Techniques for Secure Machine Learning. [46] showed that a DNN computation can be delegated to an untrusted accelerator from the secure host CPU using a verifiable outsourcing algorithm. Homomorphic encryption, which performs computations over the encrypted domain, provides privacy for both the user input and the model, and several works proposed techniques to mitigate its overhead [23,24,35,40]. Recent work explored hardware acceleration to support differential privacy as well [33]. Finally, [50] presented a method for securing computations over off-chip near-data processing accelerators.

CONCLUSION
This work presents a framework for systematic design space exploration of secure DNN accelerators supporting a TEE for privacy and integrity. We present SecureLoop, which is equipped with a scheduling search engine capable of 1) cryptographic-engine-aware loopnest scheduling, enabled by a simple performance and energy model of a cryptographic engine, 2) optimal AuthBlock assignment, navigating a complex search space that depends on both the size and orientation of AuthBlocks, and 3) cross-layer fine tuning using simulated annealing. Using our framework, we show the impact of design parameters on the performance of secure accelerators, and provide design insights for secure accelerators based on the area vs. performance trade-off.

Figure 1: Design space exploration of DNN accelerators. (a) DNN workload specification: convolutions between weights and ifmaps produce ofmaps. (b) An example DNN accelerator with its memory hierarchy. (c) A sample loopnest (nested for-loops) with the specification for tiling, loop order, and data bypass.

Figure 2: A cryptographic engine supporting AES-GCM, a widely used authenticated encryption protocol.

Figure 3: The trade-off space for AES implementations.

Figure 4: The same piece of data is used as the ofmap of one layer (a) and as the ifmap of the next layer (b), and the two layers use different tiling configurations. (c) shows that redundant reads are introduced when using the data as ifmap while assigning AuthBlocks following the ofmap's tiling organization.
Figure 5: Comparison of ifmap tiles for two different accelerators, one that directly supports CONV (a), and one that computes matrix multiplications after converting CONV using im2col (b). (a) Directly computing CONV can result in "halos" (overlaps) between tiles. (b) Converting the ifmap with im2col generates a larger matrix with duplicated data, and tiles do not overlap.

Figure 6: Overview of the scheduling search engine of SecureLoop.

Figure 7: Examples of different AuthBlock assignments and their corresponding hash reads and redundant reads overhead. (a) resembles the cross-layer dependency example discussed in Section 3.2. (b)-(f) describe five different authentication block assignment strategies. Each AuthBlock is marked with solid blue lines and the corresponding caption describes the AuthBlock orientation and size.

Figure 8: Mathematical formulation for counting redundant reads for a given AuthBlock assignment. (a) An example of a mismatch between tile_A and tile_B. (b) Three conditions for an AuthBlock to lie in the intersection of tile_A and tile_B in the above example.

Figure 9: The amount of off-chip traffic incurred for accessing tile_B in Fig. 8 when varying the AuthBlock orientation and size.

Figure 10: Improvement in latency (speedup) when using simulated annealing with different values of k, compared to using only the top-1 loopnest schedule for each layer.

Figure 11: Impacts of scheduling algorithms on performance and off-chip traffic. (a) Performance overhead using different scheduling algorithms, measured by the number of cycles normalized to the unsecure baseline accelerator. (b) The additional off-chip traffic along with its breakdown into hash reads, redundant reads, and rehashing traffic for different scheduling algorithms.

Figure 12: Left: roofline model for accelerators using different scheduling algorithms. White markers represent the unsecure baseline, and colored markers represent secure accelerators. Right: roofline model zoomed in to show the different scheduling algorithms for the MobilenetV2 workload.

Figure 13: Slowdown over the unsecure baseline design and the area overhead of secure accelerators varying in their cryptographic engine configurations.

Figure 14: Evaluation of designs varying in the PE array size.

Figure 15: Latency of designs varying in the size of the on-chip SRAM buffer.

Figure 16: The area vs. performance trade-off of secure accelerator designs. Points highlighted with red edges indicate the Pareto front of this trade-off curve.

Table 1: Summary of different scheduling algorithms.

Table 2: Specifications of the AES and Galois-field multiplier (GFMult) implementations used to construct an AES-GCM engine.
Anchor Free IP Mobility

Efficient mobility management techniques are critical in providing seamless connectivity and session continuity between a mobile node and the network during its movement. Current mobility management solutions generally require a central entity in the network core, tracking IP address movement and anchoring traffic from source to destination through point-to-point tunnels. Intuitively, this approach suffers from scalability limitations as it creates bottlenecks in the network, due to sub-optimal routing via the anchor point. Meanwhile, alternative anchorless solutions are not feasible due to the current limitations of IP semantics, which strongly tie addressing information to location. In contrast, novel path-based forwarding solutions may be exploited for feasible anchorless solutions. In this paper, we propose a novel network-based mobility management solution that facilitates IP mobility over such a path-based forwarding substrate. Our solution exploits the advantages of such substrates in decoupling path calculation from data transfer to eliminate the need for anchoring traffic through the network core, thereby allowing flexible path calculation and service provisioning. Furthermore, by eliminating the limitation of routing via the anchor point, our approach reduces the network cost compared to anchored solutions through bandwidth savings while maintaining comparable handover delay. We evaluate our solution through analytical and simulation models and compare it with the IETF standardized solution, Proxy Mobile IPv6 (PMIPv6). Evaluation results illustrate a significant saving in the total network cost when using our proposed solution, compared to its counterpart.

This work was carried out within the project POINT, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 643990.

I. INTRODUCTION

The significant progress achieved in mobile technologies, allowing users to enjoy Internet-based content services during movement, relies on mobility management protocols. Mobility management is a challenging research topic since it largely affects users' experience with respect to preventing frequent disconnections and ensuring session continuity [1]. A key fact of the current Internet is that it was built on an architecture that exploits end-host IP addresses as both communication endpoints and forwarding entities. This has been a fundamental obstruction in supporting many of the features and services that came after its initial design, including end-user mobility [2]. Considering the rising volume of mobile traffic, due to increased content streaming, it can be concluded that the challenge of supporting mobility will only grow bigger in the near future [3]. As predicted by Cisco, video traffic will compose 80 percent of all consumed Internet traffic in 2019 and traffic from wireless and mobile devices will rise to 66 percent of the total traffic [4].

Conventional IP mobility techniques are based on functions existing in both the mobile terminal and the network to facilitate user mobility. Recently, due to the dominance of mobile traffic over the Internet, the new generation of wireless networks emphasizes solutions that relocate mobility functions and procedures from the mobile device to network components. This approach, known as network-based mobility management, allows IP devices running standard protocol stacks to move freely between wireless access points belonging to the same local domain.
Network-based mobility management is a desirable solution from a network operator's perspective because it allows service providers to enable mobility support without any user interaction or mobile node (MN) modification [5] [6]. For this purpose, several standardization bodies such as the Internet Engineering Task Force (IETF) and the Third Generation Partnership Project (3GPP) are expending efforts on establishing reliable and efficient network-based mobility management services and protocols. However, many challenges still remain to be solved to achieve such a goal [7].

Proxy Mobile IPv6 (PMIPv6) [8] is, to date, the only IETF-standardized network-based mobility management protocol, and is aimed at accommodating various access technologies such as WiMAX, 3GPP, 3GPP2 and WLAN. In PMIPv6, a central Local Mobility Anchor (LMA) is responsible for maintaining reachability to the Mobile Node's (MN's) IP address while the MN moves between Mobile Access Gateways (MAGs) in the PMIPv6 domain, by updating the binding cache in a binding table and maintaining a tunnel to the MAG for packet delivery. On the other hand, the MAG is responsible for detecting the MN's movement and initiating binding registration on behalf of the MN [9] [10]. Proxy Mobile IPv6 also supports IPv4 stack and dual-stack mobility modes [11].

PMIPv6, as with other IP mobility solutions, clearly increases network complexity. First of all, it violates network end-to-end transparency: although it provides user-experience transparency, an essential goal for mobility support, it does not provide network addressing transparency, which requires unaltered mechanisms for the flow of packets and unaltered logical addressing between source and destination [12]. PMIPv6 also increases network fragility due to the explosive growth of the binding table size in the LMA for all MNs in the domain. In addition, it imposes processing complexity in the network core (LMA) and at the edges (MAGs) to support the necessary protocol functionality during mobility [13] [14].

Similar procedures are adopted by 3GPP in cellular networks, where the mobility management entity (MME) controls mobility signaling on the control plane and the serving gateway (S-GW) anchors user traffic on the data plane using the General Packet Radio Service (GPRS) Tunnelling Protocol (GTP) [15] to support mobility. GTP is a group of IP-based communications protocols used in the Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS) and Long-Term Evolution (LTE) core networks. In 3GPP architectures, GTP and Proxy Mobile IPv6 based interfaces are specified at various interface points [16].

The aforementioned mobility management approaches are mainly used to reduce mobility signaling costs in environments with a high mobility rate, but as a consequence they cause extra packet tunnelling overhead and inefficient routing due to central traffic anchoring in the network. Such drawbacks of current IP mobility solutions motivate the search for better approaches, as investigated in this paper. One promising approach involves utilizing new forwarding architectures that rely purely on path information for the end-to-end forwarding of packets, instead of relying on host address-based communication with routing information distributed over various network elements. Solutions such as LIPSIN [17] [18] and BIER [19] utilize path information stored in the forwarded packet to deliver it across the network.
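The following sketch illustrates the flavor of such in-packet path encodings in the Bloom-filter style of LIPSIN (the filter width, number of bits per link, and link names are illustrative assumptions, not the protocols' actual parameters):

```python
# Each link gets a fixed bit pattern (link ID); the path's forwarding
# identifier is the OR of the link IDs along the route, and a node forwards
# on any outgoing link whose ID is fully contained in the identifier,
# i.e., a set-membership test with no per-host forwarding state.
import hashlib

M = 256  # identifier width in bits (assumption)

def link_id(name: str, k: int = 5) -> int:
    """Derive k bit positions for a link from its name (illustrative hash)."""
    bits = 0
    for i in range(k):
        h = hashlib.sha256(f"{name}:{i}".encode()).digest()
        bits |= 1 << (int.from_bytes(h[:4], "big") % M)
    return bits

def build_fid(path_links):
    fid = 0
    for link in path_links:
        fid |= link_id(link)
    return fid

def forward(fid: int, out_links):
    """Return the outgoing links whose ID is a subset of the identifier."""
    return [l for l in out_links if link_id(l) & fid == link_id(l)]

fid = build_fid(["A-B", "B-C"])
print(forward(fid, ["B-C", "B-D"]))  # typically ['B-C']; false positives possible
```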
In these alternative, path-based approaches, the route computation determines an end-to-end path that is encoded into the packet header, while the forwarding operation is considerably simpler than IP forwarding by virtue of executing a simple set-membership test which can be efficiently implemented. Recent advances have shown how this path-based approach can be carried out in commercially available SDN switches with a switching table size that is constant and considerably lower than in traditional end-host address-based solutions [18]. Mobility in these architectures results in (partial) recomputation of a path, with the opportunity to deliver the data over an optimal path after every handover operation.

The main purpose of this paper is to propose and investigate a path-based approach to mobility management. However, to support path-based forwarding, as with existing mobile IP solutions, a control plane is required. The investigation of a new control plane is out of the scope of this paper; thus, we base our proposal on an existing solution, namely Information Centric Networking (ICN) [20] [21], specifically that developed in PURSUIT [21]. PURSUIT employs a Publish-Subscribe paradigm for path-based information dissemination that names information at the network layer, decoupling request resolution from data transfer in both time and space. The asynchronous nature of the Publish/Subscribe architecture simplifies resynchronization after MN handoffs and greatly facilitates mobility. However, clean-slate ICN architecture proposals such as PURSUIT have one significant drawback in that the network stack in every MN and server, together with application network interface code, has to be replaced. Therefore, IP-over-ICN [22] has emerged as a solution that aims at enabling individual operators to enhance their services by deploying a gateway-based architecture; this offers improved IP-based services with an ICN infrastructure at its heart without incurring any changes to the end-user equipment that uses existing IP protocol stacks and connectivity. The combination of the opportunities arising from path-based forwarding, with its direct path possibilities, and the backward compatibility of the IP-over-ICN solution poses the question: can this new form of delivery architecture improve the performance of IP mobility?

In this paper, we answer the aforementioned question by proposing a novel network-based mobility management approach using an IP-over-ICN network, where an efficient path-based forwarding solution provides the core of the network while exposing backward-compatible IP communication at the edges. In the proposed solution, no traffic anchoring is required to support mobility in the network core, and no MN equipment modification or user interaction is required at the network edges. To evaluate our proposal, we analyse the mobility costs in an IP-over-ICN network using random walks on connected graphs and derive the corresponding cost functions in terms of signaling, packet delivery and handover latency costs. We compare the mobility costs with those of the IETF standardized network-based mobility management protocol, Proxy Mobile IPv6 (PMIPv6). We also conduct a discrete event simulation of both schemes to compare the MN mobility performance and verify the theoretical analysis.

The rest of the paper is structured as follows. Section 2 provides an overview of the utilized IP-over-ICN network architecture that underpins the mobility management solution proposed in Section 3.
Section 4 gives an overview of the improvements offered by the proposed IP-over-ICN mobility management, which is formally modelled through a cost analysis in Section 5 for the evaluation of the proposal. Section 6 presents and discusses the modelling and simulation results, while a survey of related work is provided in Section 7. Finally, the paper is concluded in Section 8.

II. IP-OVER-ICN NETWORKS

As emphasized in the Introduction, this paper uses a path-based approach to achieve its benefits. However, this path-based approach needs some form of architecture to manage the interface between IP forwarding and path-based forwarding. An ICN architecture is utilized here, as existing work has developed a suitable control plane for path-based forwarding. The proposed IP-over-ICN architecture follows a gateway-based approach, where the first link from the user device to the network uses existing IP-based protocols, such as HTTP, CoAP, TCP or IPv4/v6, while the Network Attachment Point (NAP) serves as an entry point to the ICN network and maps the chosen protocol abstraction to ICN.

The ICN core employs a Publish-Subscribe paradigm [23] for information dissemination that names information at the network layer, arranging individual information items into a context named scoping. Scopes allow information items to be grouped according to application requirements, for example into different categories of information. Relationships between information items and scopes are represented as a directed acyclic graph whose leaves represent pieces of information and whose inner nodes represent scopes. Each node in the graph is identified by its full path starting from a root scope; a more detailed explanation is given in [23].

There are three main functional entities that compose the ICN architecture, as shown in Fig. 1: the Rendezvous (RV), the Topology Manager (TM) and the Forwarding Nodes (FN). The RV is responsible for matching publications and subscriptions of information items, while the TM is responsible for constructing a delivery tree for the information object. This delivery tree is encoded in a forwarding identifier (FID), which is sent to the publisher so that it can forward the packets containing the information object to the subscriber. Note that the FID encodes a tree to allow for possible multicast delivery, where unicast is a trivial subset of a tree. In this paper we ignore the multicast capability, as we wish to compare with existing mobile IP solutions that are usually focused on unicast. The network also contains Forwarding Nodes (FN) that simply forward the information object to the subscriber using the specific FID generated for this transmission [24]. Throughout this paper, the TM and RV functions are assumed to reside in the same entity for the sake of simplicity, although they may be distributed or separated to support a scalable and resilient solution.

The IP-over-ICN operation uses publish/subscribe (pub/sub) semantics for carrying IPv4/v6 datagrams over the ICN network. First, a naïve pub/sub signaling description will be given to show the underlying principle, although in a likely deployment there will be optimizations to this naïve signaling that will be explained later. In the first instance, ICN signaling may sound complex. However, it must be remembered that this needs to be compared to the protocols required for an IP network application, including DHCP, DNS and routing, to name but a few for general support, and of course the Proxy MIPv6 signaling that is the specific protocol relevant to this paper. ICN signaling may be likened to this support signaling.
A naïve ICN signaling approach

To explain the underlying IP-over-ICN principle, the naïve signaling approach used is as described in [22]. ICN uses a namespace to facilitate communication; this namespace may be used to represent any form of information. In an IP-over-ICN scenario, an IPv4/v6 address simply becomes an ICN name; the NAP uses publish/subscribe semantics to map IP datagrams to ICN names and then uses these names to forward IP datagrams as ICN information items through the ICN network. To aid the description, we will consider an IP client connected to what we describe as a client NAP (cNAP) and an IP server connected to a server NAP (sNAP). The cNAP and sNAP are only descriptive notations used for the naïve description; in practice a NAP performs functions for any client or server connected to it, so that a NAP acts as both a cNAP and an sNAP.

An sNAP providing connectivity to an IP server is said to subscribe to receive packets destined for the IP server; this subscription state is registered in the domain RV. Then, if an IP client wishes to send data to the IP server, the cNAP is said to publish the IP datagrams to the IP server's NAP. To actually forward the IP datagrams, the cNAP requires an FID for the forwarding function, which is obtained through pub/sub matching. Pub/sub matching occurs in the RV when both a publisher and a subscriber are registered for a unique ICN name; in this case the ICN name is the server's IP address. Thus, when the cNAP registers the publication with the RV, the RV notes the match and requires the TM to send an appropriate FID to the cNAP so that it can publish (transmit) the data to the sNAP. In the naïve approach, when the IP server replies, this whole mechanism can be reversed so that IP datagrams can flow in the reverse direction as well. When the client and server stop communicating (e.g., after a TCP FIN or after a suitable time-out), the publish/subscribe matching state can be removed from the RV as communication is no longer required. The server subscription state is still maintained so that future IP clients can start a new communication.

In practice, this naïve signaling approach is inefficient in terms of both the state requirements in the RV and the number of signaling messages. Consequently, a practical system implements signaling optimizations, including combining the cNAP publication message with an implicit subscription and only keeping the server subscription state in the RV. These optimizations are included in the signaling described in the following section.
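The naïve matching logic can be condensed into a toy rendezvous (the class and method names are illustrative, not the PURSUIT API; the TM is reduced to a lookup callback):

```python
# Toy rendezvous matching for the naive IP-over-ICN signaling above.
class Rendezvous:
    def __init__(self, tm_fid_lookup):
        self.subs = {}                 # ICN name -> subscriber NAP
        self.fid_lookup = tm_fid_lookup

    def subscribe(self, name, nap):
        # e.g. the sNAP subscribes to the server's IP address as an ICN name.
        self.subs[name] = nap

    def publish(self, name, publisher_nap):
        # On a pub/sub match, the TM hands the publisher an FID towards
        # the subscriber; otherwise the publication waits unmatched.
        if name in self.subs:
            return self.fid_lookup(publisher_nap, self.subs[name])
        return None

rv = Rendezvous(lambda src, dst: f"FID({src}->{dst})")
rv.subscribe("/IP-Prefix/IP-B", "sNAP")
print(rv.publish("/IP-Prefix/IP-B", "cNAP"))  # FID(cNAP->sNAP)
```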
All the NAPs receive their specified FIDs and populate a local table containing the complete set of FIDs required to reach any other NAP in the network. In IP-over-ICN, the mobile node receives the IPv4/IPv6 address that the NAP locally assigns, and the NAP acts on behalf of the mobile node as the publisher or the subscriber towards the ICN. The ICN represents the network structure of IP addresses in a namespace under a unique root scope, and an IP address of any device is interpreted as an appropriate ICN name under this scope. This means that the NAP will be ready to receive any information being sent to the assigned IP address by determining the appropriate ICN name according to the defined namespace. Therefore, any IP packet being sent to an IP address allocated to an IP device will arrive at the NAP serving it as an ICN-compliant notification to a subscription to this IP address (represented as an appropriate ICN name) [22]. The proposed IP namespace includes a network prefix scope identifier that serves as a root identifier and represents the IP network prefix allocated to serve the subject network domain. Under this root scope, there exists a so-called IP scope that represents the individual IP addresses allocated to IP endpoints that exist within the domain. These identifiers are formed by hashing a fully qualified IP address into a single 256-bit identifier. Fig. 2 shows a sequence diagram of the messages exchanged to establish a session between two IP endpoints in the proposed IP-over-ICN network. In this scenario, we assume that both the mobile node and the corresponding node are in the same network domain. For simplicity, the examples assume a single subnet, where a MN is likely to keep its IP address when moving among NAPs. The ICN core maintains session continuity by preserving the same pub/sub matching relations at the rendezvous even when a MN moves from one NAP to another. This forms one of the advantages of IP-over-ICN compared to Proxy MIPv6 networks for intra-domain scenarios, because scalability is maintained by dividing and regionalizing the broadcast domain behind NAPs, and routing is done through the ICN infrastructure using ICN semantics. This removes the scalability restrictions that would exist in an IP core that would have to route /32 host-routes for every host in the domain. In the IP-over-ICN case, the external IP network could be divided into subnets (perhaps for address allocation reasons). IP-over-ICN will treat the IP addresses in the same manner as a single subnet, as forwarding within the ICN is orthogonal to the IP address allocation. For IP-over-ICN networks, end-node IP address sustainability can be maintained using any IP autoconfiguration mechanism suitable for the network infrastructure deployed. One example is the Dynamic Host Configuration Protocol (DHCP), where every NAP can act as a DHCP server serving the entire subnet deployed in the IP-over-ICN domain. We propose that every DHCP server be configured to only assign local addresses (for MNs that locally attach to the NAP) from a specific pool within the subnet, while it assigns addresses from outside the pool only to MNs that have previously been allocated an IP address at a previous NAP and intentionally ask for this specific IP address at the new NAP. This ensures that no IP address conflict can happen when the MN moves between NAPs.
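The allocation rule just described can be sketched as follows; the pool boundaries, addresses and class names are hypothetical, and a real DHCP server would of course implement the full lease state machine of RFC 2131.

import ipaddress

class NapDhcpServer:
    def __init__(self, local_pool):
        self.local_pool = list(local_pool)   # addresses this NAP may assign fresh
        self.leases = set()

    def allocate(self, requested=None):
        if requested is not None:
            # An MN that moved here asks for its previous address via the
            # DHCPREQUEST "Requested IP Address" option; grant it even
            # though it lies outside this NAP's local pool.
            addr = ipaddress.ip_address(str(requested))
            self.leases.add(addr)
            return addr
        for addr in self.local_pool:         # freshly attaching MN
            if addr not in self.leases:
                self.leases.add(addr)
                return addr
        raise RuntimeError("local pool exhausted")

subnet = list(ipaddress.ip_network("10.0.0.0/24").hosts())
nap_a = NapDhcpServer(local_pool=subnet[:50])     # NAP A's pool
nap_c = NapDhcpServer(local_pool=subnet[50:100])  # NAP C's disjoint pool

home = nap_a.allocate()                # MN first attaches at NAP A
kept = nap_c.allocate(requested=home)  # after handover, NAP C honours the request
assert home == kept                    # the MN keeps its address; no conflicts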
When a MN moves to a new NAP and goes into the DHCP RENEWING state, it simply sends a DHCPREQUEST message including the previously assigned IPv4 home address in the "Requested IP Address" option. The DHCPREQUEST is sent to the address specified in the Server Identifier option of the previously received DHCPOFFER and DHCPACK messages. The DHCP server then sends a DHCPACK to the MN to acknowledge the assignment of the committed IPv4 address, following RFC 2131 [25] and RFC 5844 [11]. Each DHCP server on every NAP is configured to have the same IP address throughout the network, enabling the DHCPREQUEST message to be automatically sent to the available DHCP server on the access link without any delay. To facilitate IP address reuse, we propose that the Rendezvous keep track of all IP addresses used to maintain pub/sub relations in the network and send periodic reports to all DHCP servers notifying them of abandoned IP addresses. In the aforementioned scenario, first MN A attaches to NAP A (link layer connectivity and IP address establishment). Then NAP A extracts, from the first packet sent from MN A towards CN B, the source and destination IP addresses. NAP A translates the extracted addresses into appropriate ICN names according to the defined IP namespace before publishing the destination address scope /IP-Prefix/IP-B to the domain Rendezvous on behalf of MN A. Upon receiving this publication, the Rendezvous matches it with a previous subscription of NAP B to the same scope on behalf of CN B. The Rendezvous triggers NAP A to start publishing information to the identified subscriber located at NAP B. NAP A then looks up its local database for the appropriate FID to reach NAP B and uses it to send a PubiSub message directly to NAP B that includes the first data packet destined from MN A to CN B, in addition to an implicit subscription to MN A's own scope /IP-Prefix/IP-A. NAP B utilizes its local Rendezvous to maintain a matched pub/sub relation for scope /IP-Prefix/IP-A, looks up its local database for the appropriate FID to reach NAP A and uses this FID to start publishing information to the identified subscriber located at NAP A. At this point MN A and CN B can commence data exchange. This procedure is only required for the first data packet exchange between the two IP endpoints. Subsequent data packets can be sent directly using the allocated FIDs. Fig. 3 shows a sequence diagram of the messages exchanged to manage a handover procedure for MN A from NAP A to NAP C. After initiating the handover procedure, the NAP on the previous link (NAP A) signals the destination NAP B by sending an iUnsub message on behalf of MN A for its own scope /IP-Prefix/IP-A. This way the local Rendezvous at NAP B can remove the subscription state for MN A. According to this example scenario, MN A re-attaches to NAP C and re-establishes link layer connectivity and IP address allocation through DHCP, which triggers NAP C, upon receiving the first IP packet from MN A, to publish the destination scope /IP-Prefix/IP-B to the domain Rendezvous on behalf of MN A. The RV at this point re-matches the same publications and subscriptions established previously and triggers NAP C to start publishing information to the identified subscriber located at NAP B.
NAP C then looks up its local database for the appropriate FID to reach NAP B and uses it to send a PubiSub message directly to NAP B that includes the first data packet destined from MN A to CN B, in addition to an implicit subscription to MN A's own scope /IP-Prefix/IP-A. NAP B utilizes its local Rendezvous to maintain a matched pub/sub relation for scope /IP-Prefix/IP-A, looks up its local database for the appropriate FID to reach NAP C and uses it to start publishing information to the identified subscriber located at NAP C. At this point MN A and CN B can commence data exchange without further disruption using MN A's new location. Fig. 1 shows the participating entities and communication message flows for each of the control and data planes during mobility. On the link layer, a number of metrics exist that indicate the quality of a connection and are used to indicate that mobility is occurring. One of these metrics is the Received Signal Strength Indicator (RSSI), which we use in this paper; alternatively, other predictors of mobility could be used, but their investigation is out of the scope of this paper. The RSSI value is part of the data transmitted by all mobile user equipment units. It is intended as a means to obtain a relative indication of the quality of the connection that exists between the MN and the network access point it is connected to on the wireless network. This could be used as the trigger for the movement described in this example. Which NAP a client connects to is almost entirely determined by the MN itself. Thus, when a client is given a choice between multiple NAPs offering the same service, it will always choose the NAP with the highest RSSI. On the other hand, just like in the initial association sequence, when a MN is moving it also uses RSSI to determine when to disassociate from a NAP and associate with a new one.

WHY IP-OVER-ICN FOR NETWORK-BASED MOBILITY MANAGEMENT?

Proxy MIPv6 uses a centralized mobility management entity on both the data and control plane to facilitate network-based mobility support. This approach on the one hand helps to reduce signaling costs in high mobility rate environments, but on the other hand increases traffic and packet delivery cost within the network's core. Using this approach, all the traffic sent to and from a mobile node is driven through a local mobility anchor (LMA) that keeps track of the mobile node's location and routes the traffic accordingly. This approach leads to using sub-optimal routes for packet delivery, thereby increasing the traffic overhead and end-to-end delay. The problem is evident, for example, when accessing a nearby server of a Content Delivery Network (CDN), or when receiving locally available IP multicast packets or sending IP multicast packets. A path-based approach, on the other hand, only requires a central point for mobility signaling (here IP-over-ICN) and delivery path creation, while the actual payload is delivered from source to destination through the shortest path without any anchoring. To show the overhead caused by traffic anchoring in a simple way, we use the example in Fig. 4. As shown in the example, for a packet sent from a mobile node (MN) to reach a corresponding node (CN) in Fig. 4(a), it crosses two routers (hops) in an IP-over-ICN network while it crosses four hops in a Proxy MIPv6 network to support network-controlled mobility. Thus, the packet delivery cost using IP-over-ICN is half the cost of Proxy MIPv6 using this topology.
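The hop-count comparison underlying Fig. 4 can be reproduced with a simple breadth-first search; the adjacency list below is a small hypothetical topology rather than the one in the figure.

from collections import deque

def hops(graph, src, dst):
    # Breadth-first search hop count between two nodes.
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    raise ValueError("graph is not connected")

graph = {                                   # hypothetical router-level topology
    "MN": ["R1"], "CN": ["R2"],
    "R1": ["MN", "R2", "LMA"], "R2": ["CN", "R1", "LMA"],
    "LMA": ["R1", "R2"],
}
direct = hops(graph, "MN", "CN")            # path-based (IP-over-ICN) delivery
anchored = hops(graph, "MN", "LMA") + hops(graph, "LMA", "CN")  # PMIPv6 delivery
print(direct, anchored)                     # 3 vs 4 here; the ratio is topology dependent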
The gain shown in this example is topology dependent, as can be seen in Fig. 4(b), where the number of hops crossed in an IP-over-ICN network is one hop versus three hops in a Proxy MIPv6 network. Therefore, the packet delivery cost in this IP-over-ICN scenario is one third of the cost of the Proxy MIPv6 solution, due to the fact that more links have been added to the same setup. An extended evaluation of the effect of network topology on router-level Internet performance has been presented in [26], which verified that many different graphs having the same distribution of node degree may be considered opposites from the viewpoint of network engineering and result in widely varying end-user bandwidths and router utilization distributions.

MOBILITY MODELLING AND COST ANALYSIS

In order to analyze the mobility behavior of mobile nodes in Proxy MIPv6 and IP-over-ICN networks, a random walk mobility model is applied on connected graphs that represent the network topology in terms of wireless access points. This approach has been chosen due to the importance of the network topology and its influence on the total cost, as described in the previous section. Fig. 5 shows an example network topology graph consisting of 10 nodes that will be used to explain the details of the analysis performed. Given a random starting point, we select a random neighbor to move to (assuming equal transition probability to any neighbor for simplicity), then we select a neighbor of this new point at random, and move to it, and so on. The random sequence of points selected this way is a random walk on the graph. A random walk on a network graph of access points possesses some distinctive properties, including that of spatial homogeneity. This means that the transition probability between two points (x and y) on the graph should depend only on their relative positions in space. This is due to the fact that a mobile user can only move to a neighboring access point from whichever access point it is attached to at any given time. This also implies that the random walk demonstrates the skip-free property, namely that to go from point x to point y it must pass through all intermediate points, because it can only move one point at each step. In our analysis the wireless network is modelled as a connected graph whose nodes represent the coverage areas. This allows for flexibility in topology formation and cell shape assumptions, from square and hexagonal cells to completely random topologies. Using a random walk on a connected graph to model user mobility leads to a discrete-time finite Markov chain, which provides a very practical and reliable way of estimating the location and direction probabilities of a moving user. The location probability represents the likelihood that a MN is located within the range of a specific access point at a given point in time, while the direction probability represents the likelihood that a MN is moving into the coverage area of a specific neighboring access point, within the given set of neighboring access points, at a given point in time. The Markov chain will be used to derive the global balance equations and also to introduce mobility rates into our mobility analysis. A random walk on a connected and undirected graph can be represented as follows [27]. Let G = (V, E) be a connected, non-bipartite, undirected graph, where V is the set of vertices representing network access points and E the set of edges representing the interconnections between the access points. Each access point, k ∈ V, has a set of neighbors N_k = {v : v ∈ V, (k, v) ∈ E}, with the number of neighbors denoted as |N_k|.
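The following sketch simulates such a random walk on a small hypothetical topology and compares the empirical visit frequencies with the degree-proportional stationary distribution (π_k ∝ |N_k|), which is the known steady state of a uniform-neighbor random walk on a connected, non-bipartite graph and anticipates the Markov-chain formulation below.

import random

neighbors = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}

def random_walk(start, steps):
    # Move to a uniformly chosen neighbor at each step: p(k,j) = 1/|N_k|.
    visits = {k: 0 for k in neighbors}
    node = start
    for _ in range(steps):
        node = random.choice(neighbors[node])
        visits[node] += 1
    return {k: v / steps for k, v in visits.items()}

empirical = random_walk(start=1, steps=200_000)
total_degree = sum(len(v) for v in neighbors.values())
stationary = {k: len(v) / total_degree for k, v in neighbors.items()}
for k in neighbors:
    print(k, round(empirical[k], 3), round(stationary[k], 3))  # should agree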
This graph represents a Markov chain where the states are the nodes of G. Mobility is represented through the elements p_(k,j) of the direction probability matrix P = (p_(k,j)), ∀k, j ∈ V, where p_(k,j) is the probability that a MN that was within the range of access point k ∈ V in the previous time slot moves to access point j ∈ V in the current time slot. Given a uniform probability of neighbor choice, p_(k,j) depends on the degree, |N_k|, of node k by:

p_(k,j) = 1/|N_k| if (k, j) ∈ E; p_(k,j) = 0 otherwise. (1)

Given the direction probability from the equation above, there exists a unique steady-state location probability distribution vector Π = (π_1, π_2, . . . , π_K), such that 0 ≤ π_k ≤ 1 for 1 ≤ k ≤ K, where π_k represents the probability of a node being located at k ∈ V. The steady-state probability vector can be obtained by solving Π = ΠP [28]. For our network model example in Fig. 5, the direction probability matrix and the steady-state probability vector are shown in Table 1. The mobility on the connected graph above can be represented as a Markov process where states represent the traversed network access points and transitions between states represent flows of a Markovian process. Fig. 6 shows a complete Markov chain representation of the example network topology shown in Fig. 5. The Markov process introduces the mean cell border crossing rate µ, where in the analysis we assume that the mean cell border crossing rate is the same in all cells; this essentially assumes that mobility is homogeneous and that cells have the same area. Note that the local mobility anchor (LMA) has not been included in the Markov chain, as it is not part of the mobility model and no MN transition into the LMA is permitted. Assuming a system at steady state, the detailed balance equation for a user at state 1 (network access point 1) can be obtained as follows:

π_1 µ = µ Σ_{j∈N_1} π_j p_(j,1). (2)

The general case for cell k, where the neighbors of the cell are N_k, is represented in Fig. 7. Thus, the specific example in (2) can be generalized as the global balance equation:

π_k µ = µ Σ_{j∈N_k} π_j p_j, (3)

where p_k = p_(k,1) = p_(k,2) = . . . = p_(k,N_k) is the generalized direction probability for a MN to move out of cell k [29]. Fig. 8 shows a mobility scenario in a Proxy MIPv6 domain with one MN and a static corresponding node (CN). This scenario is considered in our mobility cost analysis. It can be concluded from Section 4 that no general closed-form formula can be provided for the mobility cost analysis, due to the strong dependence of the total cost on the network's topological aspects. Therefore, the mobility cost analysis will be conducted by feeding the topological factors into the cost analysis equations. Specifically, the total cost for PMIPv6, Ω, is split into signaling cost Υ and packet delivery cost Λ as follows:

Ω = Υ + Λ, (4)

where the signaling cost Υ is the signaling overhead for supporting mobility services for a MN, and Λ is the packet delivery cost that captures the tunneling and traffic anchoring overhead. Υ is calculated as the product of the size of the mobility signaling messages and the hop distance, while Λ is calculated as the product of the total packet size (including tunneling overhead) and the hop distance. (Please refer to Table 2 for a list of the notations used in this paper.)
The signaling cost Υ in hops·Bytes/s can be calculated as:

Υ = µ Σ_{k∈V} Σ_{j∈N_k} π_k p_(k,j) (|PBU| + |PBA|)(h_(k,a) + h_(j,a)), (5)

where π_k is the location probability of a MN at MAG k; p_(k,j) is the direction probability for the MN to move into MAG j's coverage area from MAG k; µ is the MN's mobility rate of transition through a cell; h_(k,a) is the number of hops between the LMA and MAG k; and h_(j,a) is the number of hops between the LMA and MAG j. The message sizes used are listed in Table 3. If we set the combined proxy binding update (PBU) and proxy binding acknowledgment (PBA) size (in Bytes) as

|L_T| = |PBU| + |PBA| (6)

and substitute |L_T| in (5), we derive

Υ = µ |L_T| Σ_{k∈V} Σ_{j∈N_k} π_k p_(k,j) (h_(k,a) + h_(j,a)). (7)

The packet delivery cost Λ is mainly used to investigate the tunneling and packet delivery overhead and is calculated as the product of the total IPv6 packet size (including tunneling overhead) and the hop distance. The packet delivery cost for PMIPv6, Λ, is given by

Λ = R Σ_{k∈V} π_k O_k, (8)

where R is the average packet rate, and O_k is the direct path packet cost in PMIPv6, which is obtained as

O_k = h_(s,a)(ϕ + ζ) + h_(k,a)(ϕ + ζ). (9)

The term h_(s,a)(ϕ + ζ) is the direct path packet cost from the corresponding node (CN) to the LMA and is equal to the number of hops between the CN and the LMA, h_(s,a), multiplied by the average data packet length in Bytes including tunneling overhead (ϕ + ζ). On the other hand, h_(k,a)(ϕ + ζ) denotes the direct path packet cost from the MN (at MAG k) to the LMA and is therefore equal to the number of hops between the MN and the LMA, h_(k,a), multiplied by the average data packet length in Bytes including tunneling overhead (ϕ + ζ). The complete path packet cost is the sum of the cost between the CN and the LMA and between the MN at k and the LMA. (In this paper, |x| denotes the length of message x.)

IP-Over-ICN Mobility Cost Analysis

The mobility messages in the proposed IP-over-ICN infrastructure are incurred entirely within the ICN core. Fig. 3 shows the sequence of mobility messages that take place during handover in an IP-over-ICN domain. For simplicity we always assume in our analysis that only one end of the communication is mobile (the MN) and that the corresponding node (CN) is static and does not generate any mobility signaling. After initiating a handover procedure, the NAP on the previous link (i.e. NAP A) signals the destination NAP (i.e. NAP B) by sending an iUnsub message (u) for the MN's own IP address scope. This enables NAP B to gracefully remove the subscription state for MN A from the CN's IP address scope; this state was established prior to the handover at NAP B's local RV. Upon the MN's re-attachment to a new NAP (NAP C), and after it establishes Layer 2 connectivity and IP address allocation, NAP C receives the first IP packet destined to the CN and sends a publish request message (r) to the domain RV requesting publication to the CN's IP address scope. Upon receiving the publish request, the RV matches it with a previous subscription to the same address scope requested by NAP B and sends a start publish message (s) to the NAP on the new link (NAP C). NAP C then locally looks up the appropriate FID to reach NAP B and uses it to send a PubiSub message (p) to NAP B that includes the first data packet from the MN to the CN, in addition to an implicit subscription to MN A's own IP address scope. The PubiSub message triggers NAP B to utilize its local Rendezvous to maintain a matched pub/sub relation for the mentioned scope, look up its local database for the appropriate FID to reach NAP C, and use it to start publishing information to the identified subscriber. At this point MN A and CN B can commence sending and receiving data payload messages (ζ).
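Under one plausible reading of this message sequence, the signaling bytes multiplied by hops incurred by a single handover can be accounted for as follows; the message sizes and hop counts are illustrative assumptions, not the values of Table 3 or Fig. 9.

def icn_handover_signaling_cost(size, h_ks, h_jv, h_js):
    # Bytes x hops for one handover, message by message.
    return (size["u"] * h_ks           # iUnsub: old NAP k -> destination NAP s
            + size["r"] * h_jv         # publish request: new NAP j -> RV/TM
            + size["s"] * h_jv         # start publish: RV/TM -> NAP j
            + size["p"] * h_js)        # PubiSub: NAP j -> NAP s (payload excluded)

size = {"u": 44, "r": 44, "s": 44, "p": 60}    # hypothetical sizes in Bytes
print(icn_handover_signaling_cost(size, h_ks=3, h_jv=2, h_js=3))   # hops.Bytes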
Fig. 9 illustrates the detailed message formats and sizes for the mobility messages needed in an IP-over-ICN setup. The mobility signaling cost equals the size of the signaling messages multiplied by the number of hops. Therefore, the introduced signaling overhead is given by

Υ′ = µ Σ_{k∈V} Σ_{j∈N_k} π_k p_(k,j) (|u| h_(k,s) + (|r| + |s|) h_(j,v) + |p| h_(j,s)), (10)

where h_(k,s) is the number of hops between the previous NAP k and the destination NAP s, h_(j,v) is the number of hops between the new NAP j and the RV/TM, and h_(j,s) is the number of hops between NAP j and the destination NAP s. |u| denotes the size of an implicit unsubscribe (iUnsub) message sent from NAP k to NAP s when the MN initiates a handover. |r| is the size of a publish request message sent from NAP j to the RV/TM upon a change in the MN's NAP attachment, requesting publication to the destination address scope. |s| stands for the size of a start publish message sent from the domain RV/TM after a pub/sub match happens, triggering NAP j to start sending data packets to NAP s. Finally, |p| is the size of a publish with implicit subscribe (PubiSub) message sent from NAP j towards NAP s, including the first data payload in addition to an implicit subscription to the MN's address scope at the new location (NAP j). In the upcoming evaluations the payload size has been excluded from the p message size, as it is not considered a mobility signaling cost. The packet delivery cost, Λ′, is mainly used to investigate the packet delivery overhead and is calculated as the product of the total packet size and the hop distance:

Λ′ = R′ Σ_{k∈V} π_k O′_k, (11)

where R′ is the average packet rate in an IP-over-ICN network, and O′_k is the direct path packet overhead in IP-over-ICN, obtained as

O′_k = h_(s,k)(ϕ′ + ζ), (12)

where h_(s,k) is the number of hops between NAP s, where the CN is attached, and NAP k, where the MN is attached, and ϕ′ is the size of the ICN packet header. Finally, ζ is the average payload length in Bytes.

Handover Latency Analysis

In this section, we analyze the latency differences between Proxy MIPv6 and IP-over-ICN. To allow a simple analysis, latency is interpreted in terms of the number of messages exchanged, processes required and hops traversed to facilitate a successful mobility handover. Following the sequence diagrams in Fig. 10, we assume that p denotes message processing time, m denotes the time to exchange a message and h denotes the number of hops that a message traverses. For simplicity we assume that p and m are represented in arbitrary time units with p = m = 1 time unit, i.e. that a link transmission delay is comparable to a forwarding delay. Therefore, for Proxy MIPv6, the handover latency cost T_c according to the message sequence in Fig. 10(a) and the hop notations in Table 2 would be

T_c = 5p + m h_(k,a) + 2m h_(j,a). (13)

The PBA message and its subsequent processing, shown with a dashed line in Fig. 10(a), between the LMA and the previous MAG has not been added to the latency cost in (13), as it does not impact the time consumed by the MN to re-establish the session on the new MAG. For IP-over-ICN, the latency cost T′_c according to the message sequence in Fig. 10(b), and also referring to the hop definitions in Table 2, would be

T′_c = 5p + m h_(k,s) + 2m h_(j,v), (14)

where the PubiSub message sent at the end of an IP-over-ICN handover has not been added to the latency cost in (14), as it carries the MN's first data payload and therefore does not incur any extra latency. Although (13) and (14) have the same number of node processes, the costs T_c and T′_c are not necessarily equal to each other, as the path lengths may not be equal. If the LMA (in PMIPv6) and the RV/TM (in IP-over-ICN) are at the same location, then the last term may be the same in both cases; however, the middle term differs, as h_(k,a) represents the source-to-anchor hop count, whereas h_(k,s) represents the source-to-corresponding-node hop count.
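Equations (13) and (14) transcribe directly into code; with p = m = 1 as assumed above, only the hop counts matter. The hop-count values below are illustrative.

def latency_pmipv6(h_ka, h_ja, p=1, m=1):
    return 5 * p + m * h_ka + 2 * m * h_ja       # Eq. (13)

def latency_ip_over_icn(h_ks, h_jv, p=1, m=1):
    return 5 * p + m * h_ks + 2 * m * h_jv       # Eq. (14)

# With a co-located LMA and RV/TM the last terms coincide (h_ja = h_jv),
# while the middle terms generally differ; e.g. with illustrative hop counts:
print(latency_pmipv6(h_ka=4, h_ja=3))        # 15 time units
print(latency_ip_over_icn(h_ks=3, h_jv=3))   # 14 time units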
MOBILITY MANAGEMENT SIMULATION AND COST EVALUATION

To evaluate the performance of the proposed IP-over-ICN mobility solution and show the significance of the established analytical model, a discrete-time event simulation of both Proxy Mobile IP and IP-over-ICN has been conducted in R. The simulation environment has been used to investigate the mobility costs (mainly mobility signaling, packet delivery and handover latency costs) in different scenarios and to compare the results with those of our analytical model. Random geometric networks have been used to represent network topologies in our simulation to ensure spatial homogeneity of the positions of the MAGs and NAPs. Various network topology sizes with different numbers of nodes (MAGs and NAPs) have been simulated with varying node degree. Both MAGs and NAPs have been simulated using a circular coverage area with a radius of 500 m. The same central node was used to represent the LMA and TM/RV in all cases to ensure valid cost comparisons. A random walk mobility model has been used to capture user mobility with various speed values, ranging from pedestrians moving at 3 miles/hour to vehicles travelling at 70 miles/hour. Initial user locations are randomly distributed using a uniform random distribution. In our traffic model, we assume that all the users in the network exchange video data with an arrival rate of 1 Mbps following a Poisson distribution. Every simulation experiment was run for 1800 seconds and repeated 20 times, with results collated after reaching a steady state.

Validating the Analytical Model Through Simulation

The first simulation results use the topology example in Fig. 5 to compare the performance with the results obtained from the cost analysis functions and mobility model in the previous section. Our modelling results were calculated as follows. Assuming a vehicle moving at a constant velocity of 70 miles/h through a circular coverage area with a radius of 500 m, the vehicle spends 25.12 seconds in every cell it traverses, at a mobility rate (µ) of 0.039 1/s. Therefore, applying the derived cost functions for PMIPv6 and IP-over-ICN in equations (7), (8), (10) and (11) to the network model example in Fig. 5, and utilizing the Markov mobility model in Fig. 6 while substituting the variables with the values in Table 5, yields the following results: Υ = 21.624 hops·Bytes/s, Λ = 809176 hops·Bytes/s, Υ′ = 43.758 hops·Bytes/s and Λ′ = 218400 hops·Bytes/s. To compare these results with those that would be obtained from a simulation that uses the random walk mobility model, a single MN was simulated to move randomly with a speed of 70 miles/h. Both the MN's initial location and its traversed paths were selected randomly from a uniform distribution. Fig. 11 shows the cumulative simulation and modelling results for the example topology in Fig. 5 over 1800 seconds. The results show that the modelling results fall within the 95% confidence intervals of the simulation results, showing that there is a high degree of certainty that the two methods agree.
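The dwell-time arithmetic can be checked as follows, assuming that the expected path length through a circular cell equals the mean chord length πR/2; the paper does not state which formula it uses, so this assumption is ours, but it reproduces the quoted dwell time and mobility rate to within rounding.

import math

speed = 70 * 1609.34 / 3600        # 70 miles/h in m/s (about 31.29 m/s)
mean_chord = math.pi * 500 / 2     # mean chord of a circle of radius 500 m
dwell = mean_chord / speed         # about 25.1 s spent in each cell
mu = 1 / dwell                     # about 0.0398 1/s, matching mu = 0.039
print(round(dwell, 2), round(mu, 4))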
Fig. 11 also shows both the total packet delivery and signaling costs for Proxy MIPv6 and IP-over-ICN, and it is clear from the results that Proxy MIPv6 imposes a packet delivery cost more than three times that of IP-over-ICN, reaching about 15 × 10^8 hops·Bytes, due to the longer traffic paths imposed by using a central LMA. Also from Fig. 11 it can be seen that IP-over-ICN imposes a higher signaling cost than Proxy MIPv6, reaching approximately 9 × 10^4 hops·Bytes compared to about 4 × 10^4 hops·Bytes incurred by Proxy MIPv6. But despite the signaling cost results, the large difference in magnitude of the packet delivery cost between Proxy MIPv6 and IP-over-ICN indicates that IP-over-ICN substantially outperforms PMIPv6 in the total cost.

Mobile Node Speed Variation

The second set of results uses random geometric networks of 100 nodes with an average connection degree of 4 neighbors (between 1 and 8 neighbors for every NAP/MAG in the network). 50 MNs were simulated to roam freely and randomly within the network domain. Various node speeds have been used in this experiment, ranging between pedestrian speeds of 3 miles/h and highway speeds of 70 miles/h. The MNs' initial locations, traversed paths and speeds were all selected randomly from uniform distributions. Fig. 12(a) shows the average packet delivery and signaling costs at 70 miles/h for both Proxy MIPv6 and IP-over-ICN. According to this figure, Proxy MIPv6 shows approximately double the packet delivery cost imposed by IP-over-ICN due to the central traffic anchoring. Although IP-over-ICN shows higher signaling costs, the large difference in figures between packet delivery cost and signaling cost implies that IP-over-ICN requires half the total cost of Proxy MIPv6 in order to provide network-enabled mobility support. Further simulation runs are shown in Fig. 12(b) and Fig. 13(b).

Network Topology Size Variation

The next set of results varies the network topology size, again with MNs moving at 70 miles/h. Fig. 14 shows the total cost (packet delivery + signaling) for both Proxy MIPv6 and IP-over-ICN for each of the simulated topology sizes. It can be seen from the trends that IP-over-ICN always outperforms Proxy MIPv6 in terms of the total cost required to support network-based mobility management, with an improvement factor of at least 1.8, due to the sub-optimal triangular routing mechanism of Proxy MIPv6.

Handover Latency

The final simulation experiment examines the handover latency in both IP-over-ICN and PMIPv6 networks using the same network topology as Section 6.2, with 100 MNs roaming freely and randomly within the network domain at 70 miles/hour. Fig. 15 shows an empirical cumulative distribution function of the handover latency in both investigated domains. From this figure it can be seen that IP-over-ICN and PMIPv6 networks have highly convergent distributions that are nearly identical at the upper range of handover latencies, between 20 and 25 time units. This is due to the high similarity in the number of signaling messages and processes required to facilitate handover in both domains, with minor differences in the traversed paths, as outlined in Section 5.3. This clearly illustrates that IP-over-ICN incurs no extra cost in handover latency, while the earlier results show significant savings on the data plane traffic.

RELATED WORK

Proxy MIPv6 [8] has been adopted by the IETF to support network-based mobility in wireless networks by utilizing the LMA as a centralized mobility management entity on both the data and control plane.
On the other hand, 3GPP specifies the General Packet Radio Service (GPRS) Tunnelling Protocol (GTP) [15] to support mobility in cellular networks by anchoring user data plane traffic at the serving gateway (S-GW) and control plane traffic at the MME. GTP is an important IP/UDP-based protocol used to encapsulate user data when passing through the core network using GTP-U; it also carries bearer-specific signaling traffic between various core network entities using GTP-C. Also, in an effort to significantly improve handover between heterogeneous network technologies, the IEEE Standards Association has developed 802.21 [31], which defines a media-independent handover (MIH) framework. The standard defines the tools required to exchange information, events, and commands to facilitate handover initiation and handover preparation. A large number of efforts have focused on amendments, improvements and cost evaluation of the standards mentioned previously; we summarize the most significant of them below. Distributed Mobility Management (DMM) efforts [32], [33] try to solve the drawbacks of Proxy MIPv6 by evolving towards a flatter architecture using distributed anchoring, thereby providing a more efficient way to handle mobile traffic. In these approaches, although the LMA functionality is distributed to the network edges, they still perform traffic tunnelling and anchoring in a localized manner, which does not eliminate the traffic overhead imposed to support mobility. IEEE 802.21 Media Independent Handover (MIH) functionality-assisted Proxy MIPv6 solutions, such as [34], aim at reducing handover latency and signaling cost in heterogeneous wireless networks. The base station with MIH functionality performs the handover on behalf of the MN. The analytical evaluation shows that the proposed mechanism can outperform the existing mechanism in terms of handover latency and the total number of over-the-air signaling messages. Despite this, the sub-optimal core routing problem remains unsolved. Path-based forwarding architectures, such as Software Defined Networks (SDN), bring new possibilities to improve mobility management with lower traffic cost, better scalability and faster handover. Today, the best-known approach is testing mobile flow entries against matching rule fields and finding a correct output action through every OpenFlow switch along the path, which has high costs in mobile flow management. Most of the proposed SDN architectures in wireless networks cannot be directly applied to large-scale networks for this reason [35]. OpenFlow-enabled Proxy Mobile IPv6 (OF-PMIPv6) is proposed in [36], where the control path is separated from the data path by performing the mobility control at the controller, whereas the data path remains a direct IP tunnel between the MAG and the LMA. This method achieves improved handover latency over conventional Proxy MIPv6, while the data plane anchor problem persists. Other SDN efforts, such as [37], propose rule caching mechanisms to tackle the limited rule space problem in existing SDN devices. Such approaches propose to support completely flat mobility architectures but, as a drawback, they incur additional processing complexity to manage the proposed caching mechanisms.

CONCLUSION

Efficient mobility management solutions are essential to accommodate the immense growth of mobile networks, users and generated traffic.
In this paper we introduced a novel, anchor-free mobility management solution that utilises a path-based forwarding substrate to enable direct communication between the source and destination. We evaluated the cost of our solution through analytical modelling and simulations, and compared it with the conventional PMIPv6. The evaluation results have shown that the delivery cost of our solution is approximately half that incurred by the PMIPv6 counterpart, for similar or (in some cases) reduced end-to-end latency. Consequently, we have shown that significant resource savings can be achieved using our proposed solution. By introducing the anchor point, PMIPv6 clearly violates network end-to-end transparency, and also introduces network state (not flow-based, but device-based), which is considered a drawback from processing, security and failure perspectives. Strictly speaking, IP-over-ICN still violates this transparency, but at a much better point in the system, namely at the attachment points of both communicating parties. This paper demonstrates that this is an improved point for the violation, as it allows optimal delivery paths, i.e. the same paths that would be used if mobility had not occurred. This paper has used an IP-over-ICN solution as an embodiment to facilitate the proposed anchor-free mobility solution. However, the proposed mobility solution can be facilitated by any forwarding architecture that relies purely on path information stored in the forwarded packet for end-to-end delivery; in this case, mobility simply results in a partial recomputation of the path, with the opportunity to deliver the data over an optimal path after every handover operation.
Neural dynamics of change detection in crowded acoustic scenes

Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when, in the time-course of cortical processing, neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than the disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection.

Introduction

A key aspect of the process by which our brains analyze, represent and make sense of our surroundings is the ability to rapidly detect changes in the ongoing sensory input. The auditory system is hypothesized to play a central role in the brain's change detection network by serving as an 'early warning' device, continually monitoring the acoustic background for potentially relevant events (Demany et al., 2008; Murphy et al., 2013). Although the neural mechanisms by which listeners detect change in simple acoustic patterns have been extensively investigated (e.g. Martin and Boothroyd, 2000; Krumbholz, 2003; Gutschalk et al., 2004; Näätänen et al., 2007; Chait et al., 2008; Grimm et al., 2011; Andreou et al., 2015), how change is detected in crowded acoustic scenes containing multiple concurrent sources remains poorly understood. Previous neuroimaging studies of auditory change detection suggest that changes in crowded acoustic scenes are successfully encoded by the earliest stages of cortical processing in primary auditory cortex (Puschmann et al., 2013a, 2013b). It is only later stages of processing in non-primary temporal and frontal regions that determine whether listeners report hearing a change (Gregg and Snyder, 2012; Puschmann et al., 2013a, 2013b; Gregg et al., 2014; Snyder et al., 2015). These findings are compatible with the notion that when detecting change, the behavioral outcome depends on the success of higher-level processes that extract object-based perceptual representations from acoustic scenes (Eramudugolla et al., 2005; Backer and Alain, 2012) or that maintain and compare information from prechange and postchange portions of the sensory input (Eramudugolla et al., 2005; Gregg and Samuel, 2008; Pavani and Turatto, 2008). However, a common feature of previous neuroimaging studies of auditory change detection (Gregg and Snyder, 2012; Puschmann et al., 2013a, 2013b; Gregg et al., 2014) is the use of silent or noise interruptions separating the pre-change and post-change scenes (see also Eramudugolla et al., 2005).
Consequently, the extent to which the results might generalize to naturalistic listening situations, in which changes occur in an uninterrupted, ongoing scene, is unclear (Cervantes Constantino et al., 2012). In particular, it is likely that the scene interruptions masked local neural transients evoked by change, thereby forcing listeners to rely on higher-level processes that encode and compare pre-interruption and post-interruption scene information in a working memory store (see Rensink et al., 1997; Demany et al., 2008). Indeed, in a series of behavioral experiments, Cervantes Constantino et al. (2012) demonstrated that auditory change detection is at least partly reliant on local transients, similar to what has been established for visual change detection (Yantis and Jonides, 1984; Rensink et al., 1997). A further question concerns the extent to which the neural mechanisms supporting change detection are specialized for different forms of acoustic change (see Molholm et al., 2005). Previous behavioral investigations suggest that listeners are more accurate and quicker to detect a change involving the appearance, as opposed to the disappearance, of an auditory object (Huron, 1989; Pavani and Turatto, 2008; Cervantes Constantino et al., 2012). This perceptual asymmetry may have an origin in known differences in neural responses evoked by the onset versus offset of sound, which include amplitude, latency and spatial distribution (Hari et al., 1987; Pantev et al., 1996; Phillips et al., 2002; Qin et al., 2007; Pratt et al., 2008; Scholl et al., 2010). However, as previous work on onset vs. offset detection only investigated neural responses to single sounds, and in passive listening situations, the extent to which neural processing is specialized for detecting appearing and disappearing objects in crowded acoustic scenes is unknown. In the current study, we recorded magnetoencephalography (MEG) brain responses while listeners were presented with artificial acoustic scenes containing as many as ten auditory objects, each formed of a sequence of tone pips with a unique carrier frequency and amplitude modulation pattern. The task for listeners was to detect a change involving the appearance or disappearance of one of those objects within the scene. Our aims were twofold: 1) to characterize neural responses to appearing and disappearing objects in an ongoing, crowded acoustic scene and 2) to determine which stage of neural processing contributes to detection success by relating MEG responses to behavioral outcomes.

Methods

Participants

14 (5 female) right-handed participants aged between 22 and 36 years (mean = 27.8, SD = 3.98) were tested after being informed of the study's procedure, which was approved by the research ethics committee of University College London. All reported normal hearing, normal or corrected-to-normal vision, and had no history of neurological disorders.

Stimuli

Stimuli were 2500-3500 ms duration artificial acoustic 'scenes' populated by four or ten streams of pure tones designed to model auditory sources (shown in Fig. 1). Each source had a unique carrier frequency (drawn from a pool of fixed values spaced at 2*ERB between 200 and 2800 Hz; Moore and Glasberg, 1983) and temporal structure. Within each object, the durations of the tones (varying uniformly between 22 and 167 ms) and the silent intervals between tones (varying between 1 and 167 ms) were chosen independently and then fixed for the duration of the scene.
This pattern mimics the regularly modulated temporal properties of many natural sounds. Previous experiments have demonstrated that each stimulus is perceived as a composite 'sound-scape' in which individual objects can be perceptually segregated and selectively attended to; these stimuli are therefore good models for listening in natural acoustic scenes (Cervantes Constantino et al., 2012). Importantly, the large spectral separation between neighboring objects (at least 2*ERB) minimizes energetic masking at the peripheral stages of the auditory system (Moore, 1987), thus enabling the investigation of the effects of increasing scene size without the confound of increasing inter-object sensory masking. Signals were synthesized with a sampling rate of 44100 Hz and shaped with a 30 ms raised-cosine onset and offset ramp. They were delivered diotically to the subjects' ears with tubephones (EARTONE 3A 10 Ω, Etymotic Research, Inc.) inserted into the ear canal and adjusted to a comfortable listening level. As shown in Fig. 1, acoustic scenes in which each object was active throughout the stimulus are referred to as 'No Change' stimuli (NC). In other scenes, either of two types of change could occur partway through the stimulus: one that involved the appearance of a new object in the scene or one that involved the disappearance of an object from the scene. These are referred to as 'Change Appear' (CA) and 'Change Disappear' (CD) stimuli, respectively. Importantly, the other (non-changing) objects in the scene remained active without interruption. The specific configuration of carrier frequencies and temporal modulation patterns varied randomly across scenes. To enable a controlled comparison between conditions, NC, CA and CD stimuli were derived from each configuration of carrier frequencies and modulation patterns, and then presented in random order during the experiment. Fig. 1 shows an example of one such configuration. The timing of change varied randomly (uniformly distributed between 1000 ms and 2000 ms post scene onset), but with the following constraints: the nominal time of change for CA objects coincided with the onset of the first tone, while for CD objects, the nominal time of change was set to the offset of the last tone augmented by the inter-tone interval, i.e. at the expected onset of the next tone, which is the earliest time at which the disappearance could be detected.

Fig. 1. Example acoustic scenes containing four objects (scene size four condition). The plots represent 'auditory' spectrograms, equally spaced on a scale of ERB-rate (Moore and Glasberg, 1983). Channels are smoothed to obtain a temporal resolution similar to the Equivalent Rectangular Duration (Plack and Moore, 1990). Dashed lines show the nominal change time of appearing and disappearing objects.

The interval between the time of change and scene offset was fixed at 1500 ms. Manipulations of change type (CA/CD/NC) and scene size (four/ten objects) were fully crossed, resulting in a 3 × 2 factorial design. To discourage participants from adopting a bias for 'change' responses (see the 'Procedure' section below), the NC condition contained more trials (336 in total) than either the CA or CD conditions (each of which had 224 trials). Stimuli were randomly ordered during each of eight presentation blocks of 98 trials. The inter-stimulus interval varied randomly between 900 and 1100 ms.
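A compact sketch of how one such tone-pip object might be synthesized is given below. The sampling rate, the pip and gap ranges and the 30 ms raised-cosine ramps follow the text; the function names, the carrier values and the choice to ramp the summed scene (rather than each pip) are our own illustrative assumptions.

import numpy as np

FS = 44100  # Hz, as in the text

def raised_cosine_ramp(signal, dur=0.030):
    # Apply 30 ms raised-cosine onset and offset ramps in place.
    n = int(dur * FS)
    env = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    signal[:n] *= env
    signal[-n:] *= env[::-1]
    return signal

def tone_pip_object(freq, scene_dur, rng):
    # One source: a regular train of tone pips whose pip duration (22-167 ms)
    # and inter-pip gap (1-167 ms) are drawn once and then fixed.
    pip_dur = rng.uniform(0.022, 0.167)
    gap_dur = rng.uniform(0.001, 0.167)
    out = np.zeros(int(scene_dur * FS))
    t0 = 0.0
    while t0 + pip_dur <= scene_dur:
        n0, n = int(t0 * FS), int(pip_dur * FS)
        out[n0:n0 + n] = np.sin(2 * np.pi * freq * np.arange(n) / FS)
        t0 += pip_dur + gap_dur
    return out

rng = np.random.default_rng(0)
freqs = [250.0, 520.0, 1100.0, 2300.0]      # illustrative carriers only
scene = sum(tone_pip_object(f, 3.0, rng) for f in freqs)
scene = raised_cosine_ramp(scene)           # ramp the summed scene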
Procedure

Stimulus delivery was controlled with Cogent software (http://www.vislab.ucl.ac.uk/cogent.php). Participants were instructed to press a button, held in the right hand, as soon as they detected a change in each scene. Before the experiment, participants completed a practice session of 14 trials containing examples of all change type and scene size conditions.

MEG data acquisition and pre-processing

Magnetic fields were recorded with a CTF-275 MEG system, with 274 functioning axial gradiometers arranged in a helmet-shaped array. Electrical coils were attached to three anatomical fiducial points (nasion and left and right pre-auricular) in order to continuously monitor the position of each participant's head with respect to the MEG sensors. The MEG data were analyzed in SPM8 (Wellcome Trust Centre for Neuroimaging, London, UK) and FieldTrip (Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, the Netherlands) software implemented in Matlab. The data were downsampled to 250 Hz and epoched −200 to 400 ms relative to change times (or at matched times in the NC condition). This epoch encompassed change-detection-related brain processes leading up to the initiation of the behavioral response, which ranged from 410 to 1153 ms across participants and conditions. After epoching, the data were baseline-corrected relative to the 200 ms pre-change period and low-pass filtered at 30 Hz. Any trials in which the data deviated by more than three standard deviations from the mean were excluded from subsequent processing. Before averaging epochs, Denoising Source Separation (DSS) was applied to maximize the reproducibility of the evoked response across trials (de Cheveigné and Simon, 2008; de Cheveigné and Parra, 2014). For each subject, the first two DSS components (i.e., the two 'most reproducible' components, determined 0 to 400 ms relative to scene onset) were retained and used to project the change-evoked data back into sensor space. Unless stated otherwise, only detected trials were analyzed for the CA and CD conditions. Likewise, only correct rejections were analyzed for NC stimuli. Because this inevitably resulted in an unequal number of trials across conditions, a random selection of trials was discarded to equate trial numbers when comparing conditions. This was performed independently for each participant and statistical comparison.

MEG statistical analysis

To assess the time-course of change-evoked responses, the MEG data across the sensor array were summarized as the root mean square (RMS) across sensors for each time sample within the −200 to 400 ms epoch period, reflecting the instantaneous magnitude of neuronal responses. Group-level paired t-tests (one-tailed) for effects of change type and scene size were performed for each time sample while controlling the family-wise error (FWE) rate using a non-parametric permutation procedure based on 5000 iterations (Maris and Oostenveld, 2007). Reported effects were obtained by using a cluster-defining height threshold of p < .01 with a cluster size threshold of p < .05 (FWE corrected), unless otherwise stated. Prior to computing correlations between detection time and the magnitude of MEG responses, we averaged the MEG RMS signal across time-windows centered on each peak in the group-averaged RMS time-course. The size of each time-window was chosen to be approximately the full width at half maximum (FWHM) of the relevant peak. Any participant whose mean data deviated by more than one standard deviation from the group mean was removed to minimize the occurrence of spurious correlations. This was done for each peak separately and resulted in the removal of one to three participants across tests.
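A schematic version of the cluster-based permutation procedure referred to above (Maris and Oostenveld, 2007) is sketched below for two paired conditions: per-sample t-values are thresholded, contiguous supra-threshold samples form clusters, and cluster masses are compared against a sign-flip null distribution. This is a bare-bones illustration, not the exact SPM8/FieldTrip pipeline used here.

import numpy as np
from scipy import stats

def contiguous_clusters(mask):
    # Contiguous runs of True samples, returned as (start, stop) pairs.
    runs, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    return runs

def cluster_masses(diff, t_crit):
    # Per-sample paired t-values, then the summed t within each cluster.
    t = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(diff.shape[0]))
    return [t[a:b].sum() for a, b in contiguous_clusters(t > t_crit)]

def permutation_cluster_test(a, b, n_perm=5000, p_height=0.01, seed=0):
    # a, b: (participants x time samples) paired condition data.
    rng = np.random.default_rng(seed)
    n = a.shape[0]
    t_crit = stats.t.ppf(1 - p_height, df=n - 1)      # one-tailed height threshold
    observed = cluster_masses(a - b, t_crit)
    null = np.zeros(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n, 1))  # random sign flips
        masses = cluster_masses(flips * (a - b), t_crit)
        null[i] = max(masses) if masses else 0.0
    return [(mass, (null >= mass).mean()) for mass in observed]  # (mass, FWE p)

rng = np.random.default_rng(1)
a = rng.normal(1.5, 1.0, (14, 150))   # toy "change" condition
b = rng.normal(0.0, 1.0, (14, 150))   # toy "no change" condition
print(permutation_cluster_test(a, b, n_perm=500))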
To analyze the effect of scene size on peak latencies, we averaged the MEG RMS signal across time-windows of interest centered on each peak in the group-averaged RMS time-course (as in the previous analyses, the size of each time-window was the FWHM). Importantly, time-windows were selected in a statistically unbiased fashion (Friston and Henson, 2006; Kriegeskorte et al., 2009; Kilner, 2013), based on the average across the conditions of interest (i.e. scene size conditions). Statistical tests of peak latency differences were conducted on subsamples of the grand-averaged RMS time-course using the jackknife procedure (Efron, 1981). In the jackknife procedure, the grand-averaged data are resampled n times (with n being the number of participants) while omitting one participant from each subsample. The statistical reliability of an effect can then be assessed using standard tests (e.g. a t-test), not across individual participants, but across subsamples of the grand average. This technique has been shown to be superior to computing latency differences from individual participant data because of the higher signal-to-noise ratio associated with grand averages (Miller et al., 1998; Ulrich and Miller, 2001). When using the jackknife procedure, t- and F-statistics were corrected following the procedure in Miller et al. (1998) and Ulrich and Miller (2001). To assess differences between magnetic field topographies, we first averaged the MEG data across time-windows centered on peaks in the group-averaged RMS time-course and then, for each participant, computed the global dissimilarity between topographies (the square root of the mean squared differences between the signals at each sensor, normalized by the RMS) (Murray et al., 2008; see also Tian and Huber, 2008; Tian et al., 2011). To assess the group-level statistical reliability of the measured dissimilarities, we used a procedure similar to that proposed by Tian et al. (2011). Briefly, the data were partitioned into two halves according to whether a presentation block was odd or even numbered. Computing the topography dissimilarity between odd and even blocks provides a null hypothesis against which to compare the observed topography dissimilarities. An observed topography dissimilarity significantly greater than the topography dissimilarity between odd and even blocks (assessed using a one-sample t-test against zero) indicates a significant difference between two topographical patterns (independent of overall response strength) due to a change in the configuration of the underlying cortical generators (Tian and Huber, 2008).

Effect of change type

We first assessed the effect of change type on change-evoked MEG responses at scene size four (in which the majority of changes were detected in both CA and CD conditions). Relative to NC stimuli, CA stimuli evoked a significantly increased response from 28 to 400 ms (shown in Fig. 3A). CD stimuli, on the other hand, evoked a neural response that reached significance from 88 to 224 ms and then again from 236 to 400 ms. To test whether the CA response emerged significantly earlier than the CD response, we repeatedly resampled the grand-averaged RMS time-course (using the jackknife procedure) and, for each subsample, computed the earliest latency at which the RMS for the CA and CD conditions deviated by more than two standard deviations from the mean RMS across time in the NC condition.
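This leave-one-out logic can be sketched as follows: onset latencies are measured on leave-one-out grand averages, and the resulting t-statistic is shrunk by the factor (n − 1) following Miller et al. (1998). The function names, threshold rule and toy data are illustrative.

import numpy as np

def onset_latency(avg, threshold, times):
    # First time at which an averaged time-course exceeds the threshold.
    above = np.nonzero(avg > threshold)[0]
    return times[above[0]] if above.size else np.nan

def jackknife_latency_diff(rms_a, rms_b, threshold, times):
    # rms_a, rms_b: (participants x time) RMS time-courses of two conditions.
    n = rms_a.shape[0]
    diffs = np.empty(n)
    for i in range(n):                         # leave one participant out
        keep = np.arange(n) != i
        la = onset_latency(rms_a[keep].mean(0), threshold, times)
        lb = onset_latency(rms_b[keep].mean(0), threshold, times)
        diffs[i] = la - lb
    # Miller et al. (1998): inflate the subsample SD by (n - 1), which is
    # equivalent to dividing the conventional t-statistic by (n - 1).
    sd = diffs.std(ddof=1) * (n - 1)
    t = diffs.mean() / (sd / np.sqrt(n)) if sd > 0 else np.inf
    return diffs.mean(), t

times = np.arange(-200, 400, 4.0)              # 250 Hz sampling, in ms
rng = np.random.default_rng(0)
a = np.clip((times - 50) / 50, 0, 1) + rng.normal(0, 0.3, (14, times.size))
b = np.clip((times - 100) / 50, 0, 1) + rng.normal(0, 0.3, (14, times.size))
print(jackknife_latency_diff(a, b, threshold=0.5, times=times))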
As shown in Fig. 3B, this analysis showed that the CA-evoked response deviated from the NC condition significantly earlier than the CD response (jackknife-adjusted t(13) = −3.91, p < .001). We next assessed whether evoked responses to CA and CD stimuli at scene size four differed not only in latency but also in their topographical patterns. Visual inspection of the topographies (averaged across subjects and time-windows centered on each CA/CD peak in the RMS time-course) indicates that the CA response consists of three distinct components with M50-like, M100-like and M200-like latencies and topographies (shown in Fig. 3A). Henceforth these peaks will be referred to as cM50, cM100 and cM200. In contrast, the CD response consists of a single, cM100 component (shown in Fig. 3A). To statistically assess these differences in topographies, we computed the global dissimilarity between the topographical pattern at the time of the cM100 of CD and each of the three CA patterns. A dissimilarity that is significantly greater than zero across the group indicates a difference between two topographical MEG patterns (independent of overall response strength) due to a change in the configuration of the underlying cortical generators. The dissimilarity between the (only) CD peak (112-204 ms) and the first (cM50) peak of CA (40-72 ms) was significantly above zero (t(13) = 6.35, p < .001, Bonferroni corrected for multiple comparisons across the four CA/CD peaks), consistent with the opposite average topographies (Fig. 3A). Also significant was the dissimilarity between CD and the third (cM200) CA peak (212-316 ms; t(13) = 4.31, p < .001, Bonferroni corrected). In contrast, the dissimilarity between the CD peak and the second (cM100) CA peak (96-152 ms) was not significantly above zero (t(13) = 0.817, p = .107), indicating that this portion of the change-evoked response is common to both CA and CD conditions, consistent with both peaks having an average M100-like topography. Thus, neural responses to CA and CD differ not only in latency, but also partly in their topographies. It is especially notable that the first responses in the two conditions differ in their topographies, suggestive of different underlying computations for the initial detection of CA and CD events. To verify that the initial cM50 component was present for CA, but not CD, stimuli, we again analyzed the time-course of the MEG signal, but this time preserved the polarity of the signal (unlike the previous RMS analysis). This provides a more definitive test for the presence of an M50-like component in the CA and CD conditions, because this component was observed in the topographic analysis to be opposite in polarity to the subsequent cM100 component and should therefore be readily apparent in the polarity-preserved MEG time-course. We grouped the MEG channels according to whether they displayed a positive or negative signal at the time of the M50 peak at scene onset (M50 responses to sound onset were observed in every participant). The change-evoked response was then averaged across channels within these groupings and the resulting time-courses analyzed (shown in Fig. 3C). Relative to NC stimuli, CA responses showed a significant deflection at around 50 ms (cM50), followed by a later deflection opposite in polarity at around 120 ms (cM100). However, for CD stimuli, only the later deflection was observed, with no suggestion of a preceding deflection opposite in polarity, even at a lenient cluster size threshold of p < .05 (uncorrected). Thus, this analysis provides convergent evidence that CA (but not CD) stimuli evoke an early M50-like response.
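The global dissimilarity measure used in the topography analysis above (Murray et al., 2008) reduces to a few lines: each sensor map is normalized by its RMS, making the measure blind to overall response strength, and the root-mean-square difference of the normalized maps is taken. The toy maps below are random, purely for illustration.

import numpy as np

def global_dissimilarity(topo_a, topo_b):
    # RMS-normalize each sensor map, then take the RMS of the difference.
    a = topo_a / np.sqrt(np.mean(topo_a ** 2))
    b = topo_b / np.sqrt(np.mean(topo_b ** 2))
    return np.sqrt(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
m = rng.normal(size=274)                   # hypothetical 274-sensor map
print(global_dissimilarity(m, 3.0 * m))    # ~0: same pattern, different gain
print(global_dissimilarity(m, -m))         # 2: opposite-polarity patterns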
To verify that the initial cM50 component was present for CA, but not CD, stimuli, we again analyzed the time-course of the MEG signal, but this time preserved the polarity of the signal (unlike the previous RMS analysis). This provides a more definitive test for the presence of an M50-like component in the CA and CD conditions, because this component was observed in the topographic analysis to be opposite in polarity to the subsequent cM100 component and should therefore be readily apparent in the polarity-preserved MEG time-course. We grouped the MEG channels according to whether they displayed a positive or negative signal at the time of the M50 peak at scene onset (M50 responses to sound onset were observed in every participant). The change-evoked response was then averaged across channels within these groupings and the resulting time-courses analyzed (shown in Fig. 3C). Relative to NC stimuli, CA responses showed a significant deflection at around 50 ms (cM50), followed by a later deflection of opposite polarity at around 120 ms (cM100). For CD stimuli, however, only the later deflection was observed, with no suggestion of a preceding deflection of opposite polarity, even at a lenient cluster size threshold of p < .05 (uncorrected). Thus, this analysis provides convergent evidence that CA (but not CD) stimuli evoke an early M50-like response.

Effect of scene size

The effect of scene size is shown in Fig. 4. While the pattern of evoked peaks remained qualitatively similar, the magnitude of MEG responses was significantly reduced at scene size ten (compared with scene size four), and this occurred for both CA (from 44 ms) and CD objects (from 40 ms). Additionally, visual inspection of the data suggested that the peak latency of the early cM50 component of CA was unaffected by scene size, whereas the peak latencies of the later CA/CD components increased with increasing scene size. We therefore statistically assessed whether the scene size manipulation influenced peak latencies. Although the latency of the cM50 peak of CA was not affected by increasing scene size (jackknife-adjusted t(13) = 0.60, p = 0.278), growing scene size resulted in increased latencies for the cM100 (jackknife-adjusted t(13) = 2.14, p = 0.026) and cM200 (jackknife-adjusted t(13) = 2.10, p = 0.028) peaks of CA. This result may be interpreted as indicating a qualitative difference between the cM50 and cM100/cM200 peaks in terms of susceptibility to scene size. However, this pattern was not supported by a statistical interaction between peak (cM50/cM100/cM200) and scene size (four/ten) and therefore will not be discussed further (jackknife-adjusted F(2,26) = 4.03, p = 0.123). For CD objects, the effect of scene size on peak latency was not significant (jackknife-adjusted t(13) = 1.53, p = 0.075).

Relationship between MEG responses and behavior

Listeners' behavioral data provide two measures of detection success: 1) accuracy (i.e. whether a change was detected or undetected) and 2) detection time (i.e. the speed of the behavioral response). In our final analysis, we determined whether change-evoked responses were dependent on these behavioral outcomes. In addition, we compared Undetected changes with the NC condition to test for processing of change in the absence of reported awareness. Because detection accuracy (hit rate) was approximately 90% in the CA condition at scene size four, we restricted this analysis to the more difficult scene size ten condition, for which there was a more equal proportion of Detected and Undetected changes. To analyze the effect of detection accuracy on CA responses, we inevitably had to exclude participants with too few Undetected changes because of the high detection accuracy (the group mean hit rate was approximately 80%, even in the more difficult scene size ten condition). We excluded participants with fewer than 20 trials per condition, leaving a sample of eight participants. As with previous analyses, the resulting trial numbers were matched between conditions by randomly selecting a subset of trials, which ranged from 23 to 38 across participants. Therefore the responses plotted in Fig. 5 are based on the same number of trials in each condition.
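A sketch of this trial-count matching step, with hypothetical array shapes and names: the condition with more trials is randomly subsampled without replacement so that both condition averages rest on equal trial numbers.

```python
import numpy as np

def match_trial_counts(epochs_a, epochs_b, seed=0):
    """epochs_*: (n_trials, n_sensors, n_times) arrays of single trials."""
    rng = np.random.default_rng(seed)
    n = min(len(epochs_a), len(epochs_b))        # target common trial count
    idx_a = rng.choice(len(epochs_a), size=n, replace=False)
    idx_b = rng.choice(len(epochs_b), size=n, replace=False)
    return epochs_a[idx_a], epochs_b[idx_b]
```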
Given the low sample size and the number of trials available for averaging, we indicate significant effects for CA responses using a corrected cluster size threshold (FWE p < .05, in dark red) as well as an uncorrected cluster size threshold (p < .05, in light red). As shown in Fig. 5 (top), the average response to CA Undetected trials (in green) is characterized by an early deflection in the M50 range, no M100 response, and a subsequent increase in RMS shortly after 200 ms post change time. Despite the large difference in the means, statistical analysis failed to indicate a significant difference between the CA Undetected and NC conditions (even at an uncorrected level) at the time of the cM50. Significant differences emerged later, from 268 to 308 ms, suggesting that scene changes were processed, at least to some extent, even in the absence of reported awareness. The mean response to the CA Detected trials (in red) shows the previously observed (e.g. Fig. 4) pattern of cM50, cM100 and cM200 deflections. However, the mean cM50 here was somewhat delayed in latency (peaking at 94 ms post change time), likely due to increased noise in the data (from the low sample size). A statistical comparison of responses to CA Detected vs. Undetected changes revealed a difference from 108 to 144 ms and from 260 to 308 ms (further differences were observed at an uncorrected level, appearing as early as 80 ms). The earliest period of significance overlapped with the cM50 and cM100 components in the Detected condition; however, owing to the noise inherent in these data, it is difficult to determine precisely whether the effect is associated with the cM50 or cM100 response (or both). In the CD condition (Fig. 5, bottom), approximately half of the changes were detected, allowing us to analyze data from the full group of fourteen participants (trial numbers were matched between conditions and ranged from 25 to 52 across participants). The MEG response to CD objects at scene size ten was larger from 120 to 136 ms for Detected vs. Undetected changes. In addition, the response to Undetected changes differed significantly from the response to NC stimuli from 176 to 188 ms and again from 284 to 300 ms, suggesting a degree of change processing in the absence of reported awareness. To relate neural responses to our second measure of behavioral performance, detection time, we correlated the magnitude of the MEG RMS signal with listeners' detection times (shown in Fig. 6). This analysis was based on data from the full group of fourteen participants, excluding outliers so as to minimize spurious correlations (see Methods section). We observed a significantly negative correlation between detection time and the MEG RMS signal at the cM50 and cM100 peaks of CA (cM50: one-tailed Pearson's r = −0.617, n = 13, p = .012; cM100: one-tailed Pearson's r = −0.709, n = 13, p = .007; both p < .05, Bonferroni corrected for multiple comparisons across the four CA/CD peaks). The corresponding correlations for the cM200 peak of CA and the cM100 peak of CD were not significant (p = .381 and .234, respectively). Note that in this analysis, the time-windows over which the RMS was averaged in the CA condition were determined based on the latencies of the whole-group RMS time-course (as in Fig. 4). The effects remained significant when extending the cM50 and cM100 time-windows to 46-96 ms and 124-200 ms, respectively, so as to encompass the latencies observed in the subset of participants used for the detection analysis above (as in Fig. 5).
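The brain-behavior test just described reduces to a one-tailed Pearson correlation with Bonferroni correction over the four peaks; a minimal sketch follows, with all names being illustrative assumptions.

```python
import numpy as np
from scipy import stats

def peak_behavior_correlation(rms_at_peak, detection_time, n_tests=4):
    """rms_at_peak, detection_time: 1-D arrays, one value per participant.
    Tests for a *negative* correlation (faster detection, larger RMS)."""
    r, p_two = stats.pearsonr(rms_at_peak, detection_time)
    p_one = p_two / 2 if r < 0 else 1 - p_two / 2   # one-tailed conversion
    return r, min(p_one * n_tests, 1.0)             # Bonferroni-corrected p
```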
Discussion

We investigated the dynamics of cortical change-evoked responses in rapidly unfolding, crowded acoustic scenes. Consistent with previous behavioral findings (Huron, 1989; Pavani and Turatto, 2008; Cervantes Constantino et al., 2012), changes involving the appearance of an auditory object were more accurately and rapidly detected than changes involving the disappearance of an auditory object. Underpinning this behavioral asymmetry were distinct sequences of cortical responses to the two forms of acoustic change, the magnitude and latency of which reliably scaled with the complexity of the acoustic scene (decreasing and increasing with scene size, respectively). Finally, we demonstrated that even the earliest cortical response to change is behaviorally relevant, correlating with the speed of detection.

Detection of appearing versus disappearing objects

The appearance of an auditory object evoked a neural response comprising three distinct peaks occurring 50, 130 and 260 ms after the acoustic change, resembling the M50-M100-M200 response complex often seen at sound onset (Eggermont and Ponton, 2002) and evoked by certain changes within ongoing sounds (e.g. Martin and Boothroyd, 2000; Krumbholz, 2003; Gutschalk et al., 2004; Chait et al., 2008). A striking finding here is that the opposite transition, involving a disappearing object, evoked an M100-like component only, without the preceding M50 and later M200 components. These distinct components suggest that processing of the two forms of acoustic change depends on partially separable cortical computations. This finding is consistent with previous studies (Hari et al., 1987; Pantev et al., 1996; Pratt et al., 2008) comparing neural responses to the onset versus offset of simple tones and noises (acoustic events that are conceptually similar to the appearing and disappearing objects investigated here). However, our study goes beyond these previous findings in two ways. Firstly, our stimuli provide a more realistic model of natural listening, whereby acoustic change is characterized by the onset or offset of energy within a dynamic (rather than static) sound, mimicking the temporal properties of many natural sounds (e.g. a bird's chirp). Secondly, the acoustic change occurs not in isolation but within an ongoing scene containing as many as ten concurrent sources.

Why should detection of appearing and disappearing objects involve distinct computations? Item appearance is associated with an increased neural response within a frequency channel that was previously inactive (a 'local transient'). Because spectral components in the present stimuli are widely spaced across the spectral array (resulting in minimal inter-component masking), detection of item appearance based on such transients is computationally relatively simple, and indeed it is associated with high hit rates and rapid detection times, both of which are only mildly affected by the number of objects in the scene (see also Cervantes Constantino et al., 2012), indicative of a pop-out process. Our demonstration of an M50-like component in response to appearing objects is consistent with this response reflecting local neural transients (see also the discussion in Chait et al. (2008)). In contrast, to efficiently detect item disappearance in our stimuli, the system must rely on computationally more demanding processes that acquire the pattern of onsets and offsets in each frequency channel and respond as soon as an expected tone pip fails to arrive (a 'second-order transient'; see Herdener et al., 2007; Yamashiro et al., 2009; Andreou et al., 2015), perhaps alongside frequency-nonspecific mechanisms that are sensitive to changes in timbre or loudness introduced at the time of change (see also Cervantes Constantino et al., 2012).
While the neural basis of these more demanding computations remains to be elucidated, our observations that disappearance detection is highly susceptible to increasing scene size and is associated with late (>100 ms) neural responses are consistent with the notion that the underlying computations are more complex than those required for appearance detection.

Contribution of change-evoked responses to behavioral outcomes

A major goal of the current study was to relate change-evoked responses to behavioral outcomes and to determine which stage of processing contributes to detection success. For both CA and CD, the MEG response at the time of the cM100 was larger for detected than for undetected changes. This finding is consistent with previous reports of late (>100 ms) neural responses indexing behaviorally relevant processes that contribute to change detection outcomes (Gutschalk et al., 2008; Gregg and Snyder, 2012; Königs and Gutschalk, 2012; Puschmann et al., 2013a). Importantly, however, late brain responses to undetected events differed significantly from those to scenes without a change, suggesting that the auditory system processes acoustic change, at least partially, even in the absence of perceptual awareness (see below for further discussion). We also observed an even earlier relationship between behavior and neural responses, revealed as a cross-subject correlation between detection time and the magnitude of the cM50 component evoked by appearing objects. This suggests that even the earliest cortical response to change contributes to behavioral outcomes.

Fig. 6. MEG sensor-space effect of detection success (cross-subject correlation between detection time and the magnitude of the MEG RMS). Gray shaded areas indicate the time-windows over which the MEG RMS was averaged before computing correlations. Each circle on the scatter plots represents data from one participant.

An early behavioral correlate of detection is compatible with our interpretation of the cM50 component of the change-evoked response as reflecting local neural transients. Note that this need not imply that neural processing at this early latency reflects 'conscious' processing (Dehaene et al., 2006; Aru et al., 2012; Pitts et al., 2014; Snyder et al., 2015). However, even if the earliest cortical response to change does not directly index detection awareness, the observed correlation with detection time nonetheless suggests that it may feed into later neural processes that do. Future research is needed to determine whether the cM50 observed in the current paradigm is additionally sensitive to detection accuracy (as well as detection time). Because this component is uniquely evoked by appearing events, which are easily detected (leaving few undetected trials to analyze), we are unable to draw strong conclusions from the current data.

Explicit versus implicit change detection

A recurring question in previous studies of change detection is whether detection in the presence versus absence of awareness (i.e. explicit versus implicit detection, respectively) is supported by the same or different underlying mechanisms (e.g. Mitroff et al., 2002; Demany et al., 2011). Consistent with the latter hypothesis, previous EEG studies of auditory change detection reported that, relative to a no-change condition, detected and undetected changes modulated distinct ERP components (Gregg and Snyder, 2012; Puschmann et al., 2013a). In contrast, our results show no evidence for modulation of distinct components during explicit versus implicit detection.
Instead, we observed a graded influence of detection on the magnitude of the same (cM100/cM200) components evoked by change (i.e. detected > undetected > no-change). This finding suggests that a single mechanism generates brain responses to detected and undetected changes, perhaps reflecting a process by which sensory evidence is assessed relative to a criterion threshold, i.e. only evidence that exceeds this threshold leads to detected responses (Mitroff et al., 2002).
New insights into size-controlled reproducible synthesis of anisotropic Fe3O4 nanoparticles: The importance of reaction environment

Synthesis of size-controlled anisotropic magnetite (Fe3O4) nanoparticles allows the design of next-generation magnetic nanosystems with predetermined magnetic properties suited for particular applications in the biomedical, information, and environmental fields. In this work, we report a reproducible and economical approach for fabricating anisotropic Fe3O4 nanoparticles via the thermal decomposition method. Controlling the reaction environment, i.e. the degassing pressure, is essential to obtain reproducible synthesis of anisotropic Fe3O4 nanoparticles with monodispersity in size and shape. At low degassing pressure, Fe3O4 nanocubes are formed, and an increase in degassing pressure leads to the formation of octahedral Fe3O4 nanoparticles. To achieve good reproducibility (with respect to size and shape) between different batches, our findings reveal the importance of maintaining the same degassing pressure. The size of the anisotropic Fe3O4 nanoparticles can be varied by changing the heating rate and the solvent amount. The amount of solvent also influences the shape of the nanoparticles, and Fe3O4 nanoparticles with a flower morphology are obtained at high solvent amounts. The work also provides new fundamental conceptual insights into the growth mechanism of anisotropic Fe3O4 nanoparticles, thus advancing the field of materials chemistry toward rationally designing anisotropic nanoparticles with tunable magnetic properties.

The reproducible and economical synthesis of anisotropic magnetite (Fe3O4) nanoparticles (NPs) with narrow size distribution continues to be a topic of considerable interest owing to their unique magnetic properties [1] and numerous potential applications in biomedicine, [2,3] catalysis, [4,5] the environment, [6] light-emitting devices, [7] and sensing. [8] Anisotropic Fe3O4 NPs offer two main advantages. First, the magnetic properties of Fe3O4 NPs depend on their shape as well as their size and size distribution. [9-11] This allows the design of advanced magnetic NPs with predetermined properties suited for specific applications. Recent studies have demonstrated that anisotropic Fe3O4 NPs offer higher magnetic hyperthermia efficiency, [12,13] enhanced contrast in magnetic resonance imaging, [11,14,15] better targeting efficiency due to a large surface-to-volume ratio, [16] and longer blood circulation times compared to spherical Fe3O4 NPs. [17] Second, the self-assembly of anisotropic Fe3O4 NPs depends on their shape, resulting in a rich variety of self-assembled superstructures of distinct morphologies. [18,19] Such magnetic superstructures exhibit collective magnetic and magnetically enhanced mechanical properties, which can be tailored by the shape of the NPs as well as the morphology of the self-assembled superstructures. [20] In the last two decades, various approaches based on microemulsion, [21] co-precipitation, [22] hydrothermal, [23] and thermal decomposition [10,24-27] methods have been investigated for the synthesis of anisotropic Fe3O4 NPs.
Among these, the thermal decomposition of iron oleate in the presence of oleic acid and/or sodium oleate and octadecene has been used to synthesize Fe3O4 NPs in different shapes (cubes, octapods, octahedra, plates, rods), [11,18,28] with good control over shape and size (3-25 nm). However, this approach does not yield anisotropic Fe3O4 NPs with phase purity, which can negatively influence the crystallinity/structure of the NPs and, thus, their magnetic properties. An alternative approach is based on the thermal decomposition of iron(III) acetylacetonate (Fe(acac)3) in the presence of oleic acid (OA) as a surfactant and dibenzyl ether (BE) as a solvent. [29] This reaction yields considerably larger anisotropic Fe3O4 NPs (size >70 nm) because the reaction is faster than the one based on iron oleate. The reaction rate can be slowed down by using other ligands, such as decanoic acid or 4-biphenylcarboxylic acid, instead of oleic acid, or by using a mixture of solvents (BE + squalene, or BE + octadecene + tetradecane). [10,16,29] The reason for using a solvent mixture is that BE decomposes to benzyl benzoate and benzaldehyde at high temperature, causing temperature fluctuations during the reaction, which result in poor reproducibility between different batches. Although these synthetic approaches yield Fe3O4 nanocubes in the 15-100 nm size range, they offer less flexibility in shape control and require additional expensive solvents and ligands. It is known that the reaction kinetics depend on several factors, including the activity of the precursor, the ligand type, the interaction between ligand and precursor, temperature, time, heating rate, and reaction environment. [24,30] However, the roles of the reaction environment (i.e. the degassing pressure) and the heating rate in controlling the reaction kinetics and the reproducibility of size- and shape-controlled synthesis of anisotropic Fe3O4 NPs have not been investigated. Here, we demonstrate a reproducible one-pot synthesis of pure anisotropic Fe3O4 NPs with control over size and size distribution. The approach involves the decomposition of Fe(acac)3 in the presence of only BE solvent and OA. Our results show that controlling the heating rate of the reaction and the residual oxygen content in the reaction is crucial to regulate the reduction kinetics and, thus, to reproducibly obtain anisotropic Fe3O4 NPs in different sizes with narrow size distributions despite the temperature fluctuation (±5 °C) during the reaction. We further elucidate the reaction mechanism through controlled experiments. The proposed approach is very robust (i.e. highly reproducible between different batches) and economical because it does not rely on a combination of expensive solvents (octadecene, squalene, tetradecane, sodium oleate) and additional ligands. To our knowledge, this is the first study reporting variation in the shape and size of Fe3O4 NPs without any need for additional solvents or ligands. The present work contributes to the fundamental mechanistic understanding of colloidal synthesis, which will facilitate the design of magnetite NPs with predetermined magnetic properties tailored to specific applications. In a typical synthesis, Fe(acac)3 (0.706 g, 2 mmol) was added to a reaction mixture of 20 mL BE and 1.26 mL OA.
The resultant solution was degassed (residual pressure ~0.19 mbar) at room temperature for 90 min and then heated to 290 °C at a rate of 8 °C/min under an argon (Ar) atmosphere with vigorous magnetic stirring. After maintaining the reaction at this temperature for 30 min, the solution was cooled to room temperature. The reaction product was precipitated with a mixture of toluene and isopropanol and magnetically separated. Bright-field (BF) scanning transmission electron microscopy (STEM) images reveal the cubic morphology of the Fe3O4 NPs; the average size of the nanocubes is ~43 nm with a standard deviation (SD) of ~5% (Fig. 1a and Fig. 1b). Note that, owing to the degassing of the reaction mixture, we observed a lower fluctuation in the reaction temperature (290 ± 5 °C) during the reaction, which is significantly lower than previously reported. [16] It should further be noted that Fe3O4 NPs of similar size and cubic morphology can be reproducibly synthesized in different batches at a fixed degassing pressure. A high-resolution transmission electron microscopy (HRTEM) image reveals the single-crystalline nature of a cubic NP with rounded corners (Fig. 1c). The FFT pattern of the single-crystalline NP was indexed with the help of the CrystBox software, [31] and the diffraction spots (220) and (400) correspond to interplanar spacings of 2.95 Å and 2.09 Å, respectively (Fig. 1d). The interplanar lattice spacing was also determined from the image produced by the inverse FFT, choosing the {220} and {400} reflections, and found to be approximately equal to the theoretical values (2.95 Å and 2.09 Å) for the Fe3O4 system (see Fig. S1 ESI†). These results suggest that nanocubes are formed because of rapid growth along the <111> direction, and that the surface of the final product consists of six Fe3O4 {100} facets with eight rounded {111} corner facets. The crystal structure of these NPs can be assigned to the FCC (face-centered cubic) Fe3O4 structure based on the interplanar spacings and the ratios between different planes, including the angles between the planes. To confirm the crystal structure of the Fe3O4 NPs, we used X-ray diffraction. The XRD pattern shows the presence of the pure Fe3O4 phase in the as-synthesized NPs, in agreement with JCPDS card no. 019-0629 (see Fig. S2 ESI†). To tune the size of the Fe3O4 nanocubes, the heating rate was varied under otherwise identical experimental conditions. At low heating rates (1 °C/min and 2 °C/min), two distinct size populations of Fe3O4 nanocubes were observed, around ~70 nm and ~240 nm (see Fig. S3 and Fig. S4 ESI†). The reaction at a heating rate of 3 °C/min led to the formation of Fe3O4 nanocubes of size ~85 nm (Fig. 2a). On further increasing the heating rate from 5 °C/min to 20 °C/min, the average size of the NPs decreases from 58 nm (SD ~5%) to 26 nm (SD ~9%), as shown in Fig. 2b-f. The formation of Fe3O4 nanocubes of different sizes via thermal decomposition of Fe(acac)3 at high temperature can be explained by the LaMer model. [24,32] According to this model, when the monomer concentration rises above the supersaturation limit, nucleation occurs very quickly, bringing the monomer concentration back below the supersaturation limit and thus inhibiting further nucleation events. The growth of the pre-formed nuclei then proceeds by consuming monomers via diffusion and adsorption. To obtain NPs of different sizes with a narrow size distribution, it is thus necessary to separate the nucleation and growth processes.
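To illustrate (not reproduce) the LaMer argument above, the following toy Euler integration separates precursor-to-monomer conversion, a nucleation burst above a supersaturation level, and growth on existing nuclei; all rate constants and units are arbitrary assumptions, chosen only to show that a faster conversion rate (a stand-in for a faster heating rate) yields more nuclei and hence smaller final particles.

```python
def lamer_toy(k_conv, k_nuc=50.0, k_grow=5.0, c_super=0.2,
              dt=1e-3, steps=20000):
    """Toy LaMer-style kinetics in arbitrary units; returns a mean size."""
    precursor, monomer, nuclei, grown = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        conv = k_conv * precursor * dt                  # precursor -> monomer
        nuc = k_nuc * max(monomer - c_super, 0.0) * dt  # burst above c*
        grow = k_grow * nuclei * monomer * dt           # growth on nuclei
        precursor -= conv
        monomer += conv - nuc - grow
        nuclei += nuc                                   # number-like tally
        grown += grow                                   # total grown volume
    return (grown / nuclei) ** (1.0 / 3.0) if nuclei > 0 else float("nan")

# Faster conversion (a proxy for a faster heating rate) -> smaller size:
for k in (0.5, 1.0, 2.0, 4.0):
    print(f"k_conv = {k}: mean size ~ {lamer_toy(k):.2f} (arb. units)")
```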
When the heating rate is very low (<3 °C/min), multiple nucleation events occur at different time intervals (see Fig. S5 ESI†). The lack of a clear separation between the nucleation and growth processes, i.e. the occurrence of multiple nucleation events, therefore leads to NPs with two distinct size distributions. As the heating rate increases, the nucleation rate also increases, allowing the production of a large number of nuclei in the initial stage. As a result, the size of the Fe3O4 nanocubes decreases because a lower monomer concentration is available during the growth stage of NP formation. A slightly broader distribution (SD ~9%) of the cubic NPs formed at heating rates of 7 °C/min, 10 °C/min, 15 °C/min and 20 °C/min can be seen, which can be attributed to either the low availability of monomers in the growth stage or a weaker focusing effect requiring longer reaction times. These results demonstrate that varying the heating rate is crucial to obtaining cubic Fe3O4 NPs of different sizes without any need for additional solvent or ligand molecules. To understand the growth mechanism of the cubic Fe3O4 NPs, we collected intermediate reaction products from the reaction mixture at different time intervals and analyzed them by TEM. At the low heating rate (i.e. 5 °C/min), we observed the cubic morphology of the NPs after a reaction time (t) of 2 min at 290 °C (Fig. 3a and see Fig. S6a ESI†). As the reaction progressed further (4 min to 30 min), the size of the NPs increased from ~13 nm to ~58 nm while maintaining the cubic morphology (Fig. 3a and see Fig. S6b-f ESI†). In the case of the rapid heating rate (i.e. 10 °C/min), we observed rather different behavior compared to the reaction at the low heating rate. Here, small octahedral NPs emerged after t = 2 min (Fig. 3b, leftmost panel, and see Fig. S7 ESI†). These octahedral NPs transformed into truncated octahedra after t = 4 min and 6 min (Fig. 3b and see Fig. S8 ESI†). At t = 10 min, the truncated octahedral NPs changed to a mixture of truncated octahedral and cuboctahedral shapes (see Fig. S9a, b ESI†). Finally, these NPs gradually changed to a cubic morphology after 20 min of reaction time (see Fig. S9c ESI†), and we did not observe any further change in the morphology of the NPs (see Fig. S9d ESI†). Therefore, the shape evolution of the NPs occurs in the first 20 min of the reaction, while the size of the NPs increases from ~9 nm to ~30 nm as the reaction time increases from 2 min to 30 min. These experimental results indicate that the heating rate influences the evolution of the cubic morphology of Fe3O4 NPs via two different pathways. This can be explained by the availability of monomers and the balance between the chemical potential of the monomers (µ_m) and the chemical potentials of the different crystallographic planes. [33] The chemical potentials of the different crystallographic planes of Fe3O4 can be ranked as µ_{100} > µ_{110} > µ_{111}, because the {100} planes are the least densely packed and the {111} planes have the highest packing density. When the heating rate is low, the precursor decomposes slowly over time, resulting in a sluggish nucleation rate (a low concentration of nuclei in the nucleation stage). As the reaction enters the growth stage, the growth rate of the pre-formed nuclei is very slow because of the low concentration of available monomer (i.e. the low conversion rate of precursor to monomer).
In this case, the chemical potential of the monomers falls below the chemical potentials of the {100} and {110} planes (µ_{100} > µ_{110} > µ_m > µ_{111}). The continuous deposition of monomers on the {111} planes thus leads to the formation of cubic Fe3O4 NPs, and subsequently to the growth of the cubic NPs as the reaction progresses (Fig. 3c). At faster heating rates, the high rate of precursor decomposition leads to a faster nucleation rate (i.e. a large number of nuclei) and also provides a high concentration of monomer for the growth of the NPs. In this case, the chemical potential of the monomers is initially higher than those of all the crystallographic planes (µ_m > µ_{100} > µ_{110} > µ_{111}), so growth occurs simultaneously on all planes. As the reaction advances, the chemical potential of the monomers drops because of their consumption (µ_{100} > µ_m > µ_{110} > µ_{111}). As a result, octahedral and truncated octahedral NPs grow. In the later stage of growth, the chemical potential of the monomers falls to the point where µ_{100} > µ_{110} > µ_m > µ_{111}. The NPs then grow more rapidly along the {111} planes, and their shape gradually changes from truncated octahedra (a larger proportion of {111} facets than {100} facets) to cuboctahedra (a larger proportion of {100} facets than {111} facets), and finally to cubic NPs (Fig. 3d).
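The facet-selection argument above amounts to a simple lookup on where µ_m sits in the facet-potential ordering; the sketch below merely encodes the regimes stated in the text (an illustration, not a physical model, with made-up example values).

```python
def growth_regime(mu_m, mu_100, mu_110, mu_111):
    """Expected habit given the monomer chemical potential mu_m and the
    facet potentials, which for Fe3O4 satisfy mu_100 > mu_110 > mu_111."""
    assert mu_100 > mu_110 > mu_111
    if mu_m > mu_100:
        return "growth on all facets (early octahedra)"
    if mu_m > mu_110:
        return "octahedra -> truncated octahedra"
    if mu_m > mu_111:
        return "deposition mainly on {111}: cubes bounded by {100}"
    return "growth arrested (mu_m below all facet potentials)"

# e.g. a monomer potential between mu_110 and mu_111 predicts cubes:
print(growth_regime(mu_m=0.3, mu_100=1.0, mu_110=0.6, mu_111=0.1))
```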
We performed several synthesis reactions at different heating rates and reaction times. The relationship between reaction time and the size of the Fe3O4 NPs at different heating rates is shown in Fig. 4. Little or no change in NP size with time can be seen at the low heating rates (1 °C/min and 2 °C/min). At higher heating rates, however, the NP size increases continuously during the first 60 min of the reaction; after 60 min, only a small further increase in size is seen. These results suggest that, at low heating rates, nucleation occurs at a temperature lower than 290 °C. The nuclei formed at low temperature consume monomers and quickly evolve into larger NPs, preventing further nucleation events at high temperature. A high heating rate promotes nucleation at high temperature (290 °C) and produces a large number of nuclei. The monomer concentration decreases through consumption by the growing nuclei in the initial stage of the growth process, thus preventing nucleation events in the later stage of the reaction. In the later stage, the growth of the NPs most likely occurs by an Ostwald ripening process, whereby monomer from high-surface-energy facets, or smaller NPs of high surface energy, dissolves into the solution and redeposits on low-surface-energy facets or larger NPs. As a result, smaller NPs are produced at high heating rates because of the low monomer concentration in the later stage of the reaction. We also investigated the influence of a commonly neglected reaction parameter, the residual oxygen content in the reaction environment, on the shape of the Fe3O4 NPs. The residual oxygen content was controlled by degassing the reaction solution at different pressures. On increasing the degassing pressure from 0.19 mbar to 0.40 mbar and 0.71 mbar under otherwise similar reaction conditions (5 °C/min, 290 °C), truncated cubic (~58 nm, SD ~9%) and truncated octahedral (~57 nm, SD ~4%) Fe3O4 NPs formed (Fig. 5a and b). When the reaction was performed at a higher heating rate (10 °C/min) at degassing pressures of 0.40 mbar and 0.71 mbar under otherwise similar conditions, truncated octahedral NPs of size ~38 nm and ~37 nm (SD ~8%) were obtained (Fig. 5c and d). The inset SEM images in Fig. 5c and d confirm the truncated octahedral morphology of the NPs. Moreover, the size of the truncated octahedral Fe3O4 NPs depends on the heating rate under otherwise similar conditions (degassing pressure ~0.71 mbar, 290 °C): the low heating rate resulted in larger octahedral NPs (~57 nm) than the faster heating rate (~37 nm). These results suggest that the synthesis of Fe3O4 NPs is very sensitive to changes in the residual oxygen content of the reaction environment. According to first-principles calculations, the Wulff shape of an oxide material in thermodynamic equilibrium at low pressure and high temperature is a cube with [100] edges. [27] In Fe3O4 crystals, the cationic charge density of the {111} plane is higher than that of the other planes, {110} and {100}. On increasing the pressure, oxygen species present in the reaction environment adsorb strongly onto the plane of high charge density, i.e. {111}, thereby modifying the surface energies of the facets, i.e. γ(111) < γ(100) < γ(110). As a result, the growth of the NPs is suppressed in the <111> direction, and a truncated octahedral shape possessing both {100} and {111} facets forms in thermodynamic equilibrium with the oxygen species at the reaction temperature. Therefore, the shape of the NPs can be tailored by controlling the amount of residual oxygen (i.e. the degassing pressure). The amount of solvent can also influence the shape and size of the Fe3O4 NPs. To examine this, we varied the amount of BE solvent (10 mL, 20 mL, and 50 mL) in reactions performed at a heating rate of 10 °C/min (T = 290 °C, t = 30 min, degassing pressure = 0.19 mbar). BF STEM images show the formation of cubic Fe3O4 NPs of different sizes, i.e. 86 nm and 35 nm for 10 mL and 20 mL of solvent, respectively (see Fig. S10a ESI† and Fig. 2d). When the amount of solvent was increased to 50 mL, we observed a flower morphology of the Fe3O4 NPs, of size ~31 nm (see Fig. S10b ESI†). These flower-shaped NPs are formed by the joining of two, three, or more small individual NPs; they can therefore also be termed singly or multiply twinned. The observed decrease in size with increasing solvent volume can be explained by a reduction in the concentration of available monomers in the reaction. For a low solvent volume (10 mL), the probability of finding monomers in the close vicinity of the growing NPs is high, which facilitates higher mass transfer and, consequently, a high growth rate. When the solvent volume is increased to 50 mL, the concentrations of the precursor, and thus of the monomers in solution, are reduced. As a result, small NPs are formed because of the sluggish growth rate. Since small NPs possess high surface energy, they join to form the flower morphology, including singly and multiply twinned Fe3O4 NPs, so as to minimize the overall surface energy. In conclusion, we have demonstrated the shape- and size-controlled synthesis of Fe3O4 NPs via thermal decomposition of Fe(acac)3 in the presence of OA and BE, without any need for a combination of solvents and additional ligands.
When the reaction mixture was degassed at low pressure, Fe3O4 nanocubes formed, and their size could be varied by changing the heating rate. Our results revealed two different nanocube formation mechanisms depending on the heating rate, as the heating rate influences the production of available monomers in the solution. The shape of the Fe3O4 NPs changes from nanocube to octahedron when the degassing pressure of the reaction mixture is increased. This transformation can be attributed to a modification of the surface energies of the different crystalline facets in the presence of oxygen species. Our results suggest that the residual oxygen content in the reaction environment (or the degassing pressure) is an essential experimental parameter that enables control over the shape, as well as reproducibility in size between different batches. The shape and size of the Fe3O4 NPs can also be controlled by changing the volume of the solvent: the size of the nanocubes decreases as the volume of solvent increases, and at a high solvent amount, Fe3O4 NPs with a flower morphology result. The differences in size and shape can be understood based on the concentration of monomers in the reaction mixture and the mass transfer of monomers. Overall, these findings provide new conceptual insights and design rules for synthesizing Fe3O4 NPs with predetermined magnetic properties suited for specific applications such as magnetic hyperthermia, contrast agents, photocatalysis, sensing, and energy storage.
Diagnostic Value of Salivary Markers in Neuropsychiatric Disorders

A growing interest in the usability of saliva has been observed recently. Using saliva as a diagnostic material is possible because it contains a wide range of organic and inorganic compounds, such as proteins, carbohydrates, and lipids, which are secreted into saliva. The same applies to drugs and their metabolites. Saliva collection is noninvasive, self-collection is possible, there is no risk of needle-stick injury, and it is generally safe. Human saliva has been used successfully, for example, in the diagnosis of many systemic diseases such as cancers, autoimmune diseases, infectious diseases (HIV, hepatitis, and malaria), and endocrine diseases, as well as diseases of the gastrointestinal tract. It is also used in toxicological diagnostics, drug monitoring, and forensic medicine. The usefulness of saliva as a source of biological markers has also been extended to psychiatry. The specific nature of mental illness often limits or prevents patients' cooperation and diagnosis. In many cases, the use of salivary markers seems to be the most sensible choice.

Introduction

At present, growing interest in the usability of saliva has been observed [1-4]. Human saliva takes part in the protection of the oral tissues and the upper respiratory and digestive systems against different pathogens [1,2]. One of the most important roles of saliva is to provide the right environment for the oral mucosa and teeth. It protects against a variety of destructive biological and chemical substances and against mechanical damage. Saliva also plays a significant part in the primary phase of digestion and participates in the perception of different tastes. Moreover, saliva has antibacterial, antifungal, and antiviral properties due to the presence of immunoglobulins, lactoferrin, and lysozyme [4-6]. Using saliva as a diagnostic material is possible because it contains a wide range of organic and inorganic compounds, such as proteins, carbohydrates, and lipids, which are secreted into saliva. This also applies to drugs and their metabolites [6-10]. Its components are very sensitive, respond strongly to toxic substances, and track the real-time levels of these markers. Moreover, saliva collection is noninvasive, self-collection is possible, there is no risk of needle-stick injury, and it is generally safe [2,11,12]. In recent years, the usefulness of saliva as a source of biological markers has also been extended to psychiatry. The specific nature of mental illness often limits or prevents patients' cooperation and diagnosis. In many cases, the use of salivary markers seems to be the most sensible choice (Figure 2).

Drug Monitoring

It has been shown that the concentrations of drugs in saliva correlate with their levels in the blood [6,27-31]. Therapeutic drug monitoring is used to optimize the management of patients receiving drug therapy. It encompasses the quantification of drug concentrations in biological fluids, correlates them with the patient's clinical condition, and helps recognize, for example, the need to change the dosage. The value of saliva in drug monitoring stems from the fact that it reflects the free, non-protein-bound, pharmacologically active fraction of the drug in serum [13,32]. One example is valproic acid, which is used not only in the treatment of epilepsy but also in psychiatry. It is used in schizophrenia along with other medications and as a second-line treatment for bipolar disorder. Drug determination in saliva can serve as a simple test of whether the patient is taking the drug regularly, as well as of drug toxicity. It also makes it possible to estimate the approximate serum level without blood sampling [33]. Dwivedi et al. [34] showed that the mean ratio of saliva to serum free valproic acid concentration indicates that saliva levels can predict the free drug concentration in serum, and it also reflects the protein binding of valproic acid in both.
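As a back-of-the-envelope illustration of this monitoring idea, the free serum level can be estimated from a measured saliva concentration once a saliva-to-serum ratio has been established for the drug; the numbers below are made-up placeholders, not clinical values from Dwivedi et al. [34].

```python
def estimate_free_serum_level(saliva_conc_mg_per_l, saliva_to_serum_ratio):
    """Free (non-protein-bound) serum concentration implied by a saliva
    measurement, given an empirically established saliva/serum ratio."""
    return saliva_conc_mg_per_l / saliva_to_serum_ratio

# Hypothetical example: ratio 0.9, measured 5.4 mg/L in saliva
print(estimate_free_serum_level(5.4, 0.9))  # -> 6.0 mg/L free in serum
```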
Carbamazepine, methadone, nicotine, cocaine, amphetamines, and buprenorphine have also been measured in oral fluid [13,32,35].

Dementia

Dementia is characterized by progressive cognitive impairment and behavioral changes. Five types of dementia are currently distinguished, namely, Alzheimer's disease, vascular dementia, Lewy body dementia, frontotemporal dementia, and mixed dementias [36,38]. It is estimated that about 50% of all dementia cases are Alzheimer's disease [36,39], in which amyloid β and tau protein accumulate in the central nervous system. Amyloid β is one of the most significant sources of reactive oxygen species in patients with dementia. It is deposited in the brain and also in peripheral regions such as the nasal mucosa, lacrimal glands, and lingual glands (salivary gland epithelial cells) [24,36]. It has been shown that oligomeric forms of amyloid β activate nicotinamide adenine dinucleotide phosphate oxidase (NADPH oxidase), increase the formation of hydrogen peroxide, and increase reactive oxygen species production in the mitochondria. This occurs through modulation of the activity of alcohol dehydrogenase, which binds α-ketoglutarate dehydrogenase and amyloid β. The accumulation of amyloid β in the secretory epithelium of the salivary glands in patients with dementia disrupts the local redox balance and is responsible for damage to the structure and function of the salivary glands [24,36]. Changes in the composition of saliva can worsen the quality of life of patients with dementia. These changes may cause problems with swallowing, inflammatory and fungal lesions, and impaired cavital digestion [24,36,40,41]. It is possible that oxidative stress is a significant factor in the dysfunction of the salivary glands. Scientists compare this to the mechanism observed in metabolic syndromes, such as insulin resistance [36,42], obesity [36,43], and diabetes [36,44,45], or in autoimmune diseases, such as Sjögren's syndrome and rheumatoid arthritis [36,46]. The newest studies show that saliva might be an alternative diagnostic material to blood plasma or serum. In dementia, it is used as a source of redox homeostasis biomarkers [24,36,40]. Choromańska et al. [36] demonstrated decreased antioxidant properties of saliva and increased levels of DNA oxidation products in dementia patients. Moreover, they showed oxidative damage to proteins and lipids, with simultaneously reduced secretion of nonstimulated and stimulated saliva. They suggested that changes in salivary redox homeostasis are independent of systemic changes in the progression of dementia [36].

Alcohol Dependence

Alcohol consumption is a serious public health problem and has been associated with high mortality rates. An estimated 4.9% of the world's adult population suffers from alcohol abuse. More than 2% of the world's population is alcohol dependent; in Europe the figure is estimated at 4%, and in America at 3.4% [47].
The World Health Organization has estimated that binge drinking concerns more than 7% of the world's population (over 16% in Europe and 13% in America). In recent years, binge drinking has become the dominant pattern of alcohol consumption among adults [47]. So far, several markers of chronic alcohol use have been found in saliva, namely, aminotransferases and gamma-glutamyltransferase, ethanol, sialic acid, hexosaminidase A, and glucuronidase. Waszkiewicz et al. [11,47,48] suggested that alcohols such as methanol, diethylene glycol, and ethylene glycol, and salivary glycoproteins such as oral peroxidase, α-amylase, clusterin, haptoglobin, heavy and light chains of immunoglobulins, and transferrin, may be possible alcohol markers. In addition, chronic drinking leads to disturbances in adaptive and innate immunity, affecting, for example, immunoglobulin A, peroxidase, and lactoferrin [11,48]. Waszkiewicz et al. [1,49] found increased activity or concentration of β-hexosaminidase and immunoglobulin A in binge drinking [1,49]. They also showed specific changes in salivary immunity in binge drinkers and alcohol-dependent patients. Furthermore, it was shown that even a single high dose of alcohol (2 g/kg) increases the level of salivary immunoglobulin A [2,50]. Binge drinking caused disturbances in innate salivary immunity (lysozyme). The authors found that raised immunoglobulin A concentration and oral peroxidase activity may be applicable for differentiating binge from chronic drinking [2,50].

Autism Spectrum Disorders

Autism spectrum disorder is a neurological and developmental disorder that affects communication and behavior [51]. It is included in the group of developmental disorders because symptoms begin early in childhood, mostly appearing in the first three years of life [52]. Scientists estimate the prevalence of autism spectrum disorders at 6 per 1,000, although the frequency varies for each of the developmental disorders in the spectrum [52]. Early diagnosis and intervention can improve functional outcomes in children with autism spectrum disorder, and biomarkers can aid the diagnosis, prognosis, and monitoring of its symptoms [53]. Ngounou Wetie et al. [53] attempted to optimize salivary proteomic biomarker methods and to identify initial biomarkers in children with autism spectrum disorders. They assumed that mass spectrometry-based proteomics could help uncover biomarkers for autism spectrum disorder. The authors analyzed the salivary proteome of individuals with autism spectrum disorders compared to control subjects. They found statistically significant differences in several salivary proteins, e.g., elevation of prolactin-inducible protein, lactotransferrin, Ig kappa chain C region, Ig gamma-1 chain C region, Ig lambda-2 chain C regions, neutrophil elastase, polymeric immunoglobulin receptor, and deleted in malignant brain tumors 1 (DMBT1). Their findings support the concept that immune system and gastrointestinal disturbances may be present in individuals with autism spectrum disorders [53]. Bhandary and Hari [54] studied the role of saliva as a biomarker and the oral health status of children with autism spectrum disorders. They observed that salivary pH and buffering capacity were lower in children with autism spectrum disorders than in their healthy siblings [54]. In another study, the authors measured salivary microRNA.
They hypothesized that epigenetic mechanisms, including microRNAs, might contribute to the autism spectrum disorder phenotype by changing neurodevelopmental gene networks. They showed differential expression of 14 microRNAs (e.g., miR-628-5p, miR-27a) that are expressed in the developing brain. Furthermore, the impact of these microRNAs on brain development and their correlation with neurodevelopmental behaviors were shown. The microRNAs found in saliva showed high specificity and cross-validated utility. MicroRNAs thus seem to be a potential screening tool for autism spectrum disorders [55].

Neuroendocrine System

The use of saliva for monitoring steroid hormone levels has received increasing attention in recent years, and such monitoring is now commercially available. There is nothing unusual in that, since salivary steroid hormone levels reflect the free, and thus active, levels of these hormones in the blood [56]. The levels of cortisol, dehydroepiandrosterone, estradiol, estriol, progesterone, testosterone, etc. can be accurately assessed in saliva. This is useful in evaluations of mood and cognitive-emotional behavior, in the diagnosis of premenstrual depression, in assessing ovarian function, in evaluating the risk of preterm labor and delivery, in full-term and preterm neonate monitoring, in studying child health and development, in predicting sexual activity in adolescent males, and in screening for Cushing's syndrome. Protein hormones are too large to reach saliva through passive diffusion and can reach saliva only through contamination from serum, as a result of the outflow of gingival crevicular fluid or from oral wounds [14]. Protein hormones are therefore not useful in routine salivary analyses. Archunan et al. [57] showed that cyclic variations in the salivary levels of glycosaminoglycans (GAGs) and sialic acid (SA), as well as of steroid (estrogens, progesterone) and glycoprotein (luteinizing hormone, LH) hormones, can be helpful in predicting ovulation. SA and GAG content showed a distinct peak at ovulation during a normal menstrual cycle. Such hormonal changes in estrogen levels and the peak in LH might be the reason for proteoglycan degradation: estrogen can inhibit the synthesis of the extracellular matrix, shifting normal proteoglycan turnover toward degradation. Identification of the period of ovulation in humans is critical in the treatment of infertility, which may itself result in mental disorders [21,57,58]. An easy, new, and noninvasive method of ovulation detection may therefore help in infertility treatment. Besides the salivary hormonal changes, changes in salivary GAGs and SA show promise for identifying the period of ovulation as well as for assessing endocrine function. Cortisol plays an important role as a marker of psychiatric disorders, such as anxiety and depression. Changes in cortisol levels appear in response to stress as well as to emotional support. Chronic stress may lead to disease by activating the hypothalamic-pituitary-adrenocortical (HPA) axis. The correlation between cortisol levels in blood and saliva is extremely strong, and the noninvasive quantification of this hormone in saliva meets the detection criteria of biomedical research, both scientific and diagnostic [59-61]. Another parameter that is very helpful in assessing neurotic disorders is alpha-amylase, which reflects catecholamine levels in the blood. It therefore reflects stress levels, reacting even faster than cortisol [62,63].
Thus, further studies focusing on changes in salivary components during different physiological and pathophysiological states seem to be warranted.

Conclusions

Based on these properties, human saliva has been used successfully in the diagnosis of many systemic diseases, such as cancers (ovarian, lung, breast, and pancreatic), autoimmune diseases (Sjögren's syndrome, celiac disease, and Hashimoto's thyroiditis), infectious diseases (HIV, hepatitis, and malaria), and endocrine diseases (types 1 and 2 diabetes, Cushing's syndrome), as well as diseases of the gastrointestinal tract (gastroesophageal reflux disease). It is also used in toxicological diagnostics, drug monitoring, and forensic medicine. The usefulness of saliva as a source of biological markers has also been extended to psychiatry. Saliva is recommended as an excellent material for the biochemical, toxicological, and immunological diagnosis not only of oral cavity and systemic diseases but also in the still largely unexplored field of neuropsychiatry.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
A rare case of spinal injury: bilateral facet dislocation without fracture at the lumbosacral joint

Introduction

Lumbosacral dislocations are rare disorders; since they were first reported by Watson-Jones [1], only 100 cases have appeared in the literature [2]. A traumatic bilateral lumbosacral dislocation is even rarer, with a mere 10 cases reported [3]. Because of its low incidence and atypical location, the lesion may often go unnoticed on initial clinical assessment [4]. Surgical treatment modalities are not well defined, but open reduction and internal fixation are often necessary because of the three-column involvement [5]. In this paper, we report an initially misdiagnosed case of lumbosacral dislocation treated with open reduction and internal fixation.

Case report

An 18-year-old woman was involved in a high-impact motor vehicle accident. She was hit by a truck from her left side and trapped under the vehicle with her thighs flexed onto the pelvis. The patient was transported to an emergency hospital, where she was initially diagnosed with unilateral fractures of the transverse processes of the L2 and L3 vertebrae and anterior spondylolisthesis of the L5 vertebra. Conservative treatment was initiated, but the lumbago did not improve. She consulted our hospital 3 months after the accident. On admission, she complained of low back pain and reduced pinprick sensation in her right gluteal region and left posterior leg. She was unable to extend her hip joint completely and had difficulty extending her lower limbs because of pain and muscle contractures. Radiographs of the lumbar spine showed no bony fractures, but anterolisthesis of the L5 vertebra on the S1 vertebra was evident (Fig. 1). Computed tomography (CT) of the lumbar spine revealed locked facets at the L5-S1 level (Fig. 2). Magnetic resonance imaging (MRI) subsequently revealed disruption of the L5-S1 intervertebral disc (Fig. 3). Because she also suffered from severe asthma, surgery was postponed until her respiratory dysfunction had resolved. Surgery was performed approximately 14 months after the injury. A standard posterior midline approach was used with the patient in the prone position. Bilateral L5 facet dislocation without fracture was confirmed intraoperatively, and, fortunately, the dislocation was easily reduced by manual traction without facetectomy. These findings are consistent with severe instability caused by a three-column injury. After reduction, the severely ruptured intervertebral disc at L5-S1 was removed and spinal fusion was performed with pedicle screws. The spinal fusion, comprising posterior lumbar interbody fusion and posterior lumbar fusion, was performed with autologous bone from the posterior iliac crest. Anatomic alignment was confirmed postoperatively (Fig. 4). At follow-up 2 years after the operation, radiographs showed unaltered reduction and reliable fusion.
The hypesthesia of the right gluteal region and left posterior leg, and the lumbago, had resolved completely; however, hip joint extension was still limited. The patient and her family were informed that data from the case would be submitted for publication and gave their consent.

Discussion

Fracture-dislocations of the lumbosacral spine are rare. Bilaterally locked facet injuries without fracture are even rarer, and only 10 cases have been reported in the literature [3]. Traumatic lumbosacral dislocations are produced by high-impact trauma and, therefore, are rarely found as isolated injuries [6]. Furthermore, many patients die soon after the initial trauma, so many cases of traumatic lumbosacral dislocation remain unidentified [4,7]. According to Watson-Jones, who first described lumbosacral dislocation, hyperextension is the main mechanism of the injury [1]. However, most authors consider a combination of hyperflexion and compression responsible for causing bilateral L5-S1 dislocation [8-11]. In our case, the patient was in a position of lumbar spinal and hip flexion when the trauma occurred (Fig. 5). We believe that hyperflexion rather than hyperextension is the most frequent mechanism of this injury.

Fig. 1 (caption): radiographs showing anterolisthesis of the L5 vertebra on S1; oblique views (c, d) do not clearly show the bilateral facet dislocation.

Initial screening of multiple trauma patients usually includes high-quality anteroposterior and lateral radiographs of the lumbar spine [4,5,10]. However, emergency room radiographs are frequently inadequate and can easily be misinterpreted as normal [4,5]. It is therefore important to understand the detailed pattern of the injury. A CT scan through the L5-S1 area is necessary to identify the dislocation, because it readily reveals locked or fractured facets, laminar fractures, and sacral fractures. Furthermore, MRI should be performed to assess the L5-S1 disc lesion once the general condition of the patient is stable. The MRI evaluation determines the treatment strategy [12]. Because of the severe spinal and ligamentous damage, traumatic fracture-dislocation of the lumbosacral spine is highly unstable [12]. Because it is a three-column injury, open reduction and internal fixation should be recommended [3,5,7-9,12-14]. This injury is often accompanied by multiple organ injuries, and the treatment of vital organ lesions undoubtedly remains the priority. However, early surgical treatment is necessary, especially if neurologic signs are found on physical examination [4,6]. A long interval between trauma and surgery makes reduction difficult, and physical symptoms, if present, can also affect the surgical outcome. In this case, to avoid pain before surgery the patient remained in a position of lumbar spinal and hip flexion, and therefore the contractures of the soft tissues and hip joint have still not fully resolved. Most surgeons use the posterior operative approach. To achieve normal sagittal alignment, posterior reduction is required: it enables indirect decompression of the spinal canal and nerve roots, which may improve the neurologic outcome [3,7]. If the integrity of the intervertebral disc is confirmed by MRI, posterolateral fusion alone is sufficient. However, if intervertebral disc damage is present, interbody fusion should also be performed [12]. In such cases, rigid fixation can usually be achieved with pedicle screws, providing immediate stability of the lumbosacral vertebrae.

Conclusion

Bilateral lumbosacral dislocation without fracture is a rare injury.
It results from high-energy trauma, and multiple injuries are often present. Lumbosacral dislocation can be missed on initial diagnosis; therefore, it is important to understand the detailed pattern of the injury. Because it is a three-column unstable lesion, open reduction and internal fixation are indicated for the management of lumbosacral dislocation.
IMAGE-BASED DEFORMATION MONITORING OF STATICALLY AND DYNAMICALLY LOADED BEAMS

Structural health monitoring of civil infrastructure systems is an important procedure in terms of both safety and serviceability. Traditionally, large structures have been monitored using surveying techniques, while fine-scale monitoring of structural components has been done with instrumentation for civil engineering purposes. As a remote sensing technique, photogrammetry does not need any contact with the object being monitored, and this can be a great advantage when it comes to the deformation monitoring of inaccessible structures. The paper shows a low-cost setup of multiple off-the-shelf digital cameras and projectors used for three-dimensional photogrammetric reconstruction for the purpose of deformation monitoring of structural elements. This photogrammetric system setup was used in an experiment where a concrete beam was deformed by a hydraulic actuator. Both static and dynamic loading conditions were tested. The system did not require any physical targets other than to establish the relative orientation between the involved cameras. The experiment proved that it was possible to detect sub-millimetre level deflections given the equipment used and the geometry of the setup.

INTRODUCTION

Deformation monitoring of civil infrastructure systems, or structural health monitoring in general, is an important procedure in terms of both the public safety and the serviceability of the structure. In order to avoid potential structural failure, the maximum loading capacity of the system must be known before its completion, and regularly scheduled maintenance checks must be performed after its completion (Park et al., 2007). Traditionally, large structures have been monitored using surveying techniques (Ebeling et al., 2011; González-Aguilera et al., 2008), while fine-scale monitoring of smaller structural components has been done with instrumentation for civil engineering purposes such as strain gauges (as explained in González-Aguilera et al., 2008; Maas and Hampel, 2006). Both methods have two downsides: deformation can only be detected at specific point locations, and, in the case of failure during the time of monitoring, the area around the object of interest can become hazardous. Thus, this paper will explore the remote sensing technique of photogrammetry for the purpose of fine-scale deformation monitoring of concrete beams.

PREVIOUS RESEARCH

As a remote sensing technique, photogrammetry can provide high-precision non-contact measurements of object(s) or surface(s) of interest with no risk of injury to the operators or damage to the equipment used. Here are some examples from the photogrammetric literature: Mills et al. (2001) used a single small-format digital camera attached to a moving crane in order to map a test bed in a pavement rolling facility. Given the geometry used, the experiment resembled near-vertical airborne mapping. Despite the undesirable base-to-height ratio, the overall reconstruction root mean square error (RMSE) for the performed experiments was about 2-3 mm. Fraser and Riedel (2000) performed near real-time multi-epoch deformation monitoring of heated steel beams while they cooled off in a thermal test facility. Three digital cameras positioned in a convergent geometry, and targets specially designed for such a high-temperature environment, were used to obtain a final precision of 1 mm for the reconstructed object space coordinates. Jáuregui et al.
(2003) used double-sided targets and measured deflections in steel beams at an RMSE of 0.5-1.3 mm in an indoor laboratory. In addition, they managed to measure the vertical deflections of bridge girders on a highway at an RMSE of 0.5-1.5 mm. Lin et al. (2008) monitored the deformations of membrane roofs with a precision of 1.3-1.6 mm. Their system consisted of two machine vision cameras and one data projector, and it could operate without the use of traditional signalized targets. They achieved targetless relative orientation by defining the scale through imaging the footprint of a reflectorless total station. Also, they managed to generate a point cloud without any physical targets by means of projecting a pattern onto the surface of interest during the data collection. As stated in their article, the precision could have been significantly improved if more cameras had been available.

The next sections of this paper show how a low-cost photogrammetric setup can be used for precise three-dimensional (3D) object/surface reconstruction, which could be useful for the deformation monitoring of structural elements in both static and dynamic loading conditions.

PROPOSED METHODOLOGY

The task of deformation monitoring using imagery can be divided into four stages: (1) fulfilment of project prerequisites or system calibration, (2) data acquisition, (3) image processing, and (4) deformation analysis. These four stages are summarized below:

1. The project prerequisites include camera calibration, stability analysis, and estimation of the relative orientation between the involved cameras. In order to assure good quality reconstruction, the cameras are geometrically calibrated (Fraser, 1997; Habib and Morgan, 2003) before they can be used in the project. It also has to be verified that their internal orientation parameters (IOPs) are stable, i.e., that they do not change significantly over time (Habib and Morgan, 2005; Habib et al., 2005). Estimating the relative orientation of the involved cameras requires the collection of signalized target points and running a bundle block adjustment in order to compute the exterior orientation parameters (EOPs) (i.e., the position X0, Y0, Z0 and the attitude ω, φ, κ) for each camera.

2. The data acquisition stage requires that a setup of multiple off-the-shelf digital cameras and projectors is used. The cameras must be synchronized in order to avoid motion-blur-like errors. The projectors are needed to project a random pattern and thus to provide artificial texture for any objects or surfaces that are homogeneous. This artificial texture is necessary to facilitate the matching of conjugate features between overlapping images as part of the image processing (Reiss and Tommaselli, 2011).

3. The image processing procedure is necessary in order to perform semi-automated object space reconstruction (i.e., the computation of X, Y, Z coordinates for the object(s) or surface(s) of interest). The semi-automated reconstruction includes corner detection in all images, hierarchical image matching between all image pairs, corner tracking of the detected corners common to at least three consecutive images, and a series of multiple light ray intersections using the previously estimated IOPs and EOPs. The reconstruction steps are automated, except for the selection of the region of interest in the images, which is done manually (Detchev et al., 2011a).
4. The 3D reconstruction described above is done for each measurement epoch. The final step in the deformation monitoring scheme is to compute the deflections of certain features of interest relative to a reference datum for all observed epochs (Detchev et al., 2011b; Detchev et al., 2011c).

Note that the photogrammetric system described here does not require any physical targets on the actual object(s)/surface(s) being monitored. The signalized targets mentioned here are only used for the purpose of establishing the relative orientation between the involved cameras.

PHOTOGRAMMETRIC SYSTEM SETUP

A photogrammetric system comprised of multiple cameras and projectors was installed on both sides of a 250 kN hydraulic actuator with an attached spreader beam (see Figure 1) in the structures laboratory at the University of Calgary. The system was to be used for photographing concrete beams (see example in Figure 2) subjected to different loading conditions, where the changing loads would be applied by the hydraulic actuator. A metal frame was designed and built around the actuator (see Figure 3) in order to hoist the cameras and projectors into secure positions above the beam being tested. Observing the top surface of the beam was preferred to observing its longitudinal side, because most of the deformation was naturally anticipated to be along the vertical direction. Four cameras and one projector were attached to each of the two parts of the built metal frame (see the example in Figure 4), thus using a total of eight cameras and two projectors. In order for these attachments to work, the cameras had to be mounted on tripod heads. After the cameras were installed on the metal frame, their EOPs had to be estimated through the use of signalized targets. This is why paper checker-board targets were spread out on the laboratory floor, the concrete beam, and the spreader beam before the start of the experiment (see Figure 5). Since the cameras were rigidly mounted on the metal frame, theoretically one would not expect their EOPs to change for the duration of the experiment, so ideally the bundle block adjustment for the EOP estimation would have been done only once. Nevertheless, due to the long duration of the experiment and the necessary servicing of the cameras, it was decided that the EOPs should be recomputed before each of the conducted data collection campaigns. Once the testing began, however, the targets on the beam had to be removed, and the only targets that could be used for the recomputation were the ones on the laboratory floor (see Figure 6). The targets on the floor were also used for the scale definition in the bundle block adjustment: several distances between some of the targets were measured with a steel tape, and a distance constraint was implemented in the adjustment solution. It should be noted once again that the projected pattern (see Figure 7), rather than physical targets, was used for the purpose of the actual photogrammetric reconstruction. As seen from Figure 7, the projected pattern added artificial texture to the otherwise white-washed concrete surface, and this made the subsequent matching portion of the data processing possible and reliable.
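To make the role of the projected artificial texture concrete, the sketch below illustrates area-based matching of conjugate points in principle: on the bare white-washed concrete every image window looks alike and the correlation score is nearly flat, whereas the projected random pattern gives each window a distinctive signature with a well-defined correlation peak. This is only a minimal illustration in Python, assuming grayscale images as NumPy arrays and a search restricted to (roughly) the same image row; the actual pipeline uses corner detection and hierarchical matching as described in the methodology, and the function names here are hypothetical.

```python
import numpy as np

def ncc(patch, window):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    w = window - window.mean()
    denom = np.sqrt((p ** 2).sum() * (w ** 2).sum())
    return float((p * w).sum() / denom) if denom > 0 else 0.0

def match_along_row(left, right, row, col, half=7, search=40):
    """Brute-force search in `right` for the pixel of `left` at (row, col).

    Assumes the conjugate point lies near the same row; a real multi-camera
    setup would constrain the search with epipolar geometry instead.
    Returns (best_column, best_score).
    """
    template = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_score = col, -1.0
    lo = max(half, col - search)
    hi = min(right.shape[1] - half - 1, col + search)
    for c in range(lo, hi + 1):
        window = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(template, window)
        if score > best_score:
            best_col, best_score = c, score
    return best_col, best_score
```

Without artificial texture, `best_score` stays low and ambiguous over the homogeneous surface; with the projected pattern the correlation peak becomes sharp, which is what makes dense, targetless reconstruction feasible.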
EXPERIMENTAL RESULTS

The low-cost photogrammetric system was used for monitoring the vertical deflections of a concrete beam subjected to static and dynamic loads. The concrete beam was 3 m long (with a cross section of 30 cm x 15 cm), it was white-washed, and it had a polymer sheet glued to its underside. Given the hydraulic actuator setup at hand, the spreader beam attached to it was obstructing a large portion of the top surface of the concrete beam. This is why, in addition to observing the visible portions of the beam surface, the cameras also observed thirteen 5 cm x 15 cm white-washed aluminium plates attached at 25 cm intervals to the bottom surface of the beam. These metal plates effectively served as offset witnesses to the bottom surface of the actual beam.

The conducted beam deformation experiment was divided into three phases:

- Phase I - static loading based on displacement control: settle the beam on its supports by first applying a displacement of approximately 3 mm, and then unloading it in order to return to zero displacement;
- Phase II - static loading based on load control: apply a load of up to 60 kN;
- Phase III - dynamic loading based on load control: at a rate of 1 Hz, apply load cycles from 24 kN to 96 kN for one hour, and then switch to a rate of 3 Hz until failure or a certain number of cycles is reached.

Note that image data were collected during Phase I, Phase II, and the 1 Hz load cycles of Phase III. An example of one of the reconstructed 3D point clouds can be seen in Figure 8.

CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE WORK

This paper dealt with the use of consumer-grade cameras and projectors for the deformation monitoring of structural elements. The aim of the conducted experiment was to set up multiple off-the-shelf digital cameras and projectors on a stable metal frame in order to be able to detect deflections in concrete beams during static and dynamic load testing with a hydraulic actuator. After performing semi-automated photogrammetric reconstruction of the visible beam surface and of the full surfaces of all the metal plates, it was shown that sub-millimetre precision for the estimation of the beam deflections could be achieved in object space. Current work involves attempting to approximate the frequency of the beam movement at each reconstructed plate.

In the future, additional cameras will be added to the system in order to monitor the cracks in the concrete. The main task will be to extract the crack borders from each image and track the enlargement of the crack widths in the concrete through image processing methods.

Figure 1. Spreader beam (in yellow) attached to a hydraulic actuator (in black and silver)
Figure 2. Concrete beam to be used for the experiment (placed underneath the spreader beam)
Figure 4. Example of the multiple camera and projector setup on one side of the built metal frame
Figure 5. Example of the distribution of signalized targets on the floor, the concrete beam, and the spreader beam for the relative orientation estimation before the static loading case
Figure 8. Example 3D point cloud derived from the photogrammetric reconstruction

The reconstructed surfaces in the point clouds for each epoch were first segmented (see Figure 9), and then the Z object coordinates of all the points belonging to the same aluminium plate were averaged. This yielded the centroid of each plate for each observed epoch. Then a reference epoch had to be chosen: usually the epoch before any load was applied to the beam, i.e., the zero-load epoch, for the static observations, and an epoch at an arbitrary time for the dynamic observations. Next, the Z values of the centroids of each plate for the reference epoch were subtracted from the Z values of the centroids of the corresponding plates for the rest of the epochs. This yielded the beam deflections (δZ) at each plate for each observed epoch, where the deflections for all the plates of the reference epoch were zeros. Example plots for the static and the dynamic data can be seen in Figure 10 and Figure 11, respectively.

Figure 9. Example of a segmented 3D point cloud (note: each segmented plane has a random colour assigned to it)

The first three epochs in Figure 10 represent the zero-load state of the beam, where the one marked as 'Rep1' served as the reference, and the other two, 'Rep2' and 'Rep3', were used to test the repeatability of the system. The epochs marked as one and two correspond to Phase I, and the epochs marked as three and four correspond to Phase II. It can be seen that the computed deflection values at each plate for the three repeatability sets were within a range of 0.1 mm of each other.

Figure 10. Plot of the beam deflections for the static portion of the experiment

In Figure 11, the first reconstructed epoch served as the reference, and the following six epochs represent a three-second sample interval during the 1 Hz load cycles of the dynamic load testing. The reference epoch was chosen at an arbitrary time, and this is why it does not appear to be at the top, at the bottom, or in the middle of the other epochs. At this time of the experiment, a laser transducer recorded that the nominal deflection amplitude for the central plate of the beam was ±4 mm. So the difference of 7.5 mm between the maximum and the minimum deflections at the central plate, observed by the photogrammetric system, closely approximates the true range of motion of the beam at this location.

Figure 11. Plot of the beam deflections for a three-second sample interval during the 1 Hz load cycles of the dynamic portion of the experiment
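The averaging-and-differencing step described above is simple enough to state directly in code. The following is a minimal sketch in Python, assuming that segmentation has already assigned each reconstructed point to a plate; the data layout and function name are illustrative assumptions, not part of the original processing chain.

```python
import numpy as np

def plate_deflections(epochs, reference):
    """Vertical deflections (deltaZ) of each plate relative to a reference epoch.

    `epochs` maps an epoch label to {plate_id: (N, 3) array of X, Y, Z object
    coordinates of the segmented points on that plate}; `reference` is the
    label of the zero-load (static case) or arbitrarily chosen (dynamic case)
    reference epoch.  Returns {epoch: {plate_id: deltaZ}}, with zeros for the
    reference epoch itself.
    """
    # Centroid Z of each plate in the reference epoch
    ref_z = {pid: np.asarray(pts)[:, 2].mean()
             for pid, pts in epochs[reference].items()}
    return {label: {pid: np.asarray(pts)[:, 2].mean() - ref_z[pid]
                    for pid, pts in plates.items()}
            for label, plates in epochs.items()}
```

Averaging over all segmented points of a plate is what pushes the per-plate precision below the single-point reconstruction noise, which is consistent with the 0.1 mm repeatability reported for the three 'Rep' epochs.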
Present Status of E-business and E-banking in Bangladesh: A Study on Some Selected Commercial Banks of Chittagong City

Nowadays e-business has created wonderful prospects all over the world, and e-banking can act as a complementary factor of e-business. The central bank of Bangladesh (Bangladesh Bank, BB) has recently introduced an automated clearing house (ACH) system, which has pushed the transition from the manual banking system to the e-banking system. This study has been undertaken to observe the present status of e-business, and of e-banking as its complementary factor, in Bangladesh. The study analyzes data collected from local banks of Bangladesh and also uses the snowball sampling technique to gather answers, on the basis of a questionnaire, from about three hundred respondents who have already been using the e-banking system. The study found that the dealing officials of the banks are not well conversant with their desk work. It has been observed that the country can benefit from the successful utilization of e-business and e-banking, as this will help to enhance productivity. Also, the monetary gain of both producers and customers may have a feasible and positive impact on raising the gross domestic product of Bangladesh. E-business, especially with the help of e-banking, can develop the economy of Bangladesh in a better way through customers' satisfaction.

INTRODUCTION

E-business and e-banking mean electronic business and electronic banking. The Internet has opened a new vision of e-business, creating immense opportunities for marketing products as well as managing banking institutions internationally. Gradually, wireless internet systems have been creating a new instance, and electronic fund transfer can take a suitable form. E-business can improve the quality of services, save customers' time, free them from moving from one place to another, and improve the receipt of goods and services accurately and on time. E-business brings a new channel to the distribution process. But this leads to changes in regulatory issues and in cross-border trade through the emerging new marketing distribution channel. In developed countries, e-business creates an opportunity for selling products directly to the customer without using any intermediaries. This process occurs mainly in four forms: business to business, business to consumer, business to government, and consumer to consumer. E-business expedites the process of better customer relationship management. It also helps to attain enterprise resource management as well as the "e" to "e" process. As such, the e-banking system can add value, and a value chain can be created in the e-business process. In turn, it helps to raise the gross domestic product of the country. Governments as well as different international organizations have also identified that underdeveloped banking technology creates a hindrance to the economic progress of a country. The e-banking system is a way of conducting, managing, and executing banking transactions utilizing information and communication technology (ICT) and electronic communication networks (ECN) such as the intranet and extranet. The environment of e-banking requires authentication procedures for the e-payment system, a network environment, computer hardware and software, electronic hardware, legal bindings, etc.
The security and authentication of modern banking are very much dependent on cryptography and its applications. Risk management disciplines have not evolved at the same speed, and many institutions, especially the smaller ones, have not been able to incorporate e-banking risk controls within their existing risk management structures. As information symmetry and the free flow of information gain more importance due to the globalization process, Bangladeshi companies have to compete in the world market to serve corporate and other clients with round-the-clock services. Access to computers would be beneficial to Bangladesh as to any other country. Social and economic disparity and the lack of internet accessibility, which create a digital divide, are great hindrances leading to customer dissatisfaction with business organizations, which ultimately results in negative customer relationship management. As a result, this creates a negative impact on the economic development of the country. The e-business process creates an opportunity for doing business through real-time sharing. Organizations can take advantage of transaction processing through e-business solutions from around the world wherever on-line facilities are available. Due to the advancement of technology, the business processes of the world are gradually becoming complex, for which e-business, especially in the banking sector, can supersede the traditional business process. Through e-business the country can compete with the changing global business trend, and e-banking can facilitate e-business.

OBJECTIVES OF THE STUDY

The study has been undertaken with the following objectives: (i) to study the present condition of e-business and e-banking in Bangladesh; (ii) to study the infrastructural position of the country to grow e-business and e-banking in Bangladesh; and (iii) to provide some recommendations so that e-business and e-banking can bring productive results.

LITERATURE REVIEW

Before conducting any empirical research, a review of the literature is imperative in order to make the theoretical foundation as sound as possible. Otherwise, the empirical results may be weak and the analysis may not be meaningful. In such a context, the present section deals with a review of the literature on e-banking and e-business. During the study some research works have been found which are relevant to different aspects of e-business and e-banking. Buffam (2000) depicted that companies that build better e-business solutions will outperform their competitors. Companies that build the best e-business solutions will transform themselves into zero-latency enterprises. Companies that choose not to embrace e-business, or do so ineffectively, will underperform or be driven out of business. Turban et al.
(2000) argued that the following managerial issues are very important: the focus of electronic commerce management; sales promotion; purchase process reengineering; just-in-time delivery; new electronic intermediary business; the provision of solutions; and business ethics. Rahman (2001-2002) observed that issues relating to electronic fund transfer require security, availability, authenticity, non-repudiability, and auditability. He suggested appropriate control and efficient security measures, and also the proper utilization of audit trails in the e-commerce system. Ali (2003) argued that Bangladeshi companies and organizations have several problems in starting full-swing e-business. These include limited resources, backwardness in technology, managerial inefficiency, and socio-infrastructural problems, such as a default culture, a poor law and order situation, rampant corruption, and strikes, which have persisted for a long time. Ali, Mohsin, and Yasmeen (2004) observed that maximizing e-business efforts requires a focus on information dissemination, knowledge transfer, and technical assistance. Steps are needed to create appropriate knowledge among the various participants in e-business. Huda, Momen and Ahmed (2004) commented that the banking sector in Bangladesh is clearly recognizing the importance of information technology to its continued success. Hoq, Kamal and Chowdhury (2005) argued that a key reason why e-commerce, especially the business-to-business segment, is growing so quickly is its significant impact on the costs associated with inventories, sales execution, procurement, intangibles like banking, and distribution. If these reductions become pervasive, e-commerce has the potential to be the application that ushers in large productivity gains. Achieving these gains is, therefore, contingent on a number of factors, including access to e-commerce systems and the needed skills. However, what is unique about e-commerce over the Internet and its efficiency gains is the premium placed on openness. To reap the potential cost savings fully, firms must be willing to open up their internal systems to suppliers and customers.
This raises policy issues concerning security and potential anti-competitive effects as firms integrate their operations more closely. Uddin and Islam (2005) observed that the multifarious projections of ICT in human life plead a winning case for the institutional integration of ICT-related components in rural support programs undertaken by governments and NGOs. Chaffey (2006) dealt with the strategy and applications of e-business and e-commerce in a logical but robust manner. He stressed that e-business and e-commerce are very important for management, and as such a bridge to link leading-edge research and professional practice is required. Mia, Rahman and Debnath (2007) observed that the latest development in the marketing of financial services by banks is e-banking, where banks have now put themselves on the World Wide Web to take advantage of the Internet's power and access in order to cope with the accelerating pace of change in the business environment. Pires and Stanton (2007) commented that, policy-wise, governments must recognize that the ability of countries to engage in e-commerce is tied both directly and indirectly to their attractiveness for FDI. Ahmed and Islam (2008) observed that, in adopting e-banking services, banks in developing countries are faced with strategic options between the choice of delivery channels and the level of sophistication of the services provided by these delivery channels. Shamsuddoha (2008) argued that the banking industry in Bangladesh is currently mature to a greater extent than in the earlier period. It has developed a superb image in its various activities, including electronic banking. Modern banking services have now been launched by some multinationals and new local private commercial banks. Electronic banking is one of the most demanded and latest technologies in the banking sector. Ahshan (2009) argued that on-line transactions would boost gross domestic product (GDP) growth and thus help Bangladesh achieve the Millennium Development Goals (MDGs). In the era of globalization, the internet makes the world smaller, and e-commerce facilitates marketing and shopping from home. E-commerce facilitates business with customers over the internet; in e-commerce, customers can buy goods and services over the internet. Islam and Yang (2009) observed that service quality satisfaction and informational trust had important mediating effects on the balanced scorecard performance process. These two mediating roles explain that, when an institution creates and raises the levels of service quality with satisfaction and informational trust, the results lead to a favorable customer interaction relationship and thus could help the institution achieve higher levels on the balanced scorecard performance measure. Nyangosi, Arora and Singh (2009) argued that banking through electronic channels has gained increasing popularity in recent years. This system, popularly known as 'e-banking', provides alternatives for the faster delivery of banking services to a wide range of customers. Shah and Clarke (2009) focused on the human, operational, managerial, and strategic organizational issues in e-banking. They argued that e-banking management can help to expedite doing business through the electronic medium. Rahman (2010), the Governor of Bangladesh Bank, argued that Bangladesh Bank has achieved a historic milestone in the trade and business arena, departing from conventional banking with the recent introduction of e-commerce: a giant stride towards digital Bangladesh.
From the aforesaid literature review, it is evident that e-banking can act as a complement to e-business. With the help of e-business the country can create opportunities that will help both producers and customers. But these theoretical observations may not be feasible in this country. As such, the study seeks to evaluate whether the country has the proper infrastructure for doing e-business. What is the status of e-business and e-banking in the country? Does e-banking really work as a complement to e-business in Bangladesh? The study intends to examine the aforesaid questions.

METHODOLOGY OF THE STUDY

The study is based on primary and secondary data. As such, the study has reviewed different published articles, books, newspapers, and websites. The study also collected related information regarding the present status of e-banking through field visits to forty-eight banks' head offices, IT and MIS departments, and Bangladesh Bank. The study collected data on the services provided by the banks, the software used by the banks, the vendors' names, and the banks' names. The study also carried out a survey by preparing a questionnaire. For collecting data from the respondents, the study used the snowball sampling technique, which is also known as chain referral sampling. The snowball sampling technique is used to discover and enlist "hidden populations" who may be difficult to locate. The survey was conducted on the basis of the comments of about 300 customers who have been using the e-banking system. The respondents are customers of Sonali Bank Ltd., Agrani Bank Ltd., Janata Bank Ltd., BASIC Bank Ltd., Dutch-Bangla Bank Ltd., Standard Chartered Bank, Trust Bank Ltd., Prime Bank Ltd., Mercantile Bank Ltd., and Uttara Bank Ltd. These customers are from Chittagong city only, as e-banking is mainly concentrated in Chittagong city. Observations of the present status of the e-banking system were obtained through the field study.

SCOPE OF THE STUDY

At present the total number of banks of different categories in Bangladesh is forty-eight. The Bangladesh Development Bank Ltd. (BDBL) began operations on January 2, 2010 through the merger of Bangladesh Shilpa Bank and Bangladesh Shilpa Rin Sangstha. From the field survey it has been observed that the following banking services are being provided by different banks: core banking, cluster banking, phone banking, SMS banking, internet banking, various cards, ATM (VISA/MASTER), own ATM (VISA/MASTER), EFT, SWIFT, PC banking, POS terminals, banking kiosks, and off-line branch computerization. However, foreign commercial banks and private commercial banks are in a relatively better position to provide on-line banking services. Moreover, when Bangladesh Bank was contacted, it was learned that BACH (Bangladesh Automated Clearing House) is still in the SIT (System Integration Testing) phase. The study was limited to ten commercial banks situated in Chittagong city. These bank branches deal in e-banking under the overall control of the central bank, Bangladesh Bank (BB), and thus follow the regulations relating to e-banking set by BB. Due to constraints of finance and time, the study was limited to the commercial banks of Chittagong city only.
PRESENT STATUS OF E-BUSINESS AND E-BANKING

Bangladeshi companies and organizations face problems in starting full-swing e-business. A network is a mode of communication between computers. Networks of computers can be classified as local area networks, metropolitan area networks, and wide area networks. Multiple computers connected through telephone lines, cable systems, and wireless technology are also required. According to a report published in The Daily Star (April 4, 2010), Bangladesh ranked 118th in the global Network Readiness Index in 2009-10, up from 130th a year earlier, showing an upward trend in the information and communication technology sector. In South Asia, India ranked 43rd, Sri Lanka 72nd, Pakistan 87th, and Nepal 124th in the 'Global Information Technology Report 2009-2010' released by the World Economic Forum (WEF) on April 3, 2010. As such, Bangladesh has a long way to go to develop its network for achieving Digital Bangladesh by the year 2021, and public and private cooperation and strategic alliances are required to develop the e-business system in the country.

Electronic payment systems for e-business are characterized by broad geographic presence and acceptance by a large number of merchants or programs. Participants in an electronic payment system may include users, financial institutions, business personnel, industrialists, merchants, third-party processors, etc. WiMAX stands for Worldwide Interoperability for Microwave Access, which offers wireless transmission of data via different transmission modes, from point-to-multipoint links to portable and fully mobile internet access. Telephone density is very low in Bangladesh; it is far lower than in the developed nations of the world as well as in neighboring countries. Kabir (2008) reported that, in 2008, there were 36.4 million mobile phones, 1.2 million fixed lines (PSTN), and 37.6 million total telecom users, with a tele-density of 26.8%. Outside Dhaka, few computer network infrastructures have been developed so far. Apart from some educational institutes outside Dhaka, observation finds that most LAN setups are Dhaka-centric. Bangladesh has been connected to the worldwide Internet super highway since 2006 through an undersea submarine cable. But this single submarine cable frequently faces disruption, resulting in slow bandwidth. A huge digital divide exists between the cities of Dhaka and Chittagong and other parts of the country. Private-public partnership is a crucial issue for information and communication technology (ICT) development and application. Private enterprise and capital can lead an ICT revolution in Bangladesh. This, however, would require the government to provide the basic business environment. Rapid growth of ICT is not possible without massive investments in ICT infrastructure and in human resource development in computer, electronics, and telecommunication engineering courses through ensuring quality education. Even now, the call charge for cell phones is not competitive in Bangladesh. The Bangladesh Telecommunication Regulatory Commission (BTRC) is not playing its due role in the development process of the communication sector. Infrastructural problems are leaving less scope to implement e-business successfully.
Under private initiative, the Internet was started in 1996 by ISN in Bangladesh; ISN was the first ISP operator in this country. Even now all the Internet service providers have their servers abroad, for which they face a competitive disadvantage, as costs remain high. The security problem is still serious in this country. The lack of digitally skilled personnel is a real problem for the country. Moreover, some software developers in the country are not well conversant with market demand, for which reason they cannot supply application software faultlessly. Policy makers of the country are not aware of the benefits of e-business; as such, they do not put significance on the proper and systematic development of e-business. In this connection it may be stated that Bangladesh Bank is trying to implement an automated clearing house utilizing the MICR (Magnetic Ink Character Recognition) procedure. But in developed nations the MICR procedure has now been replaced by more sophisticated techniques such as the cheque truncation process. The total number of banks in Bangladesh is forty-eight. The banking sector in Bangladesh, on the basis of the utilization of electronic devices, can be subdivided into three groups: (i) foreign commercial banks and private commercial banks, especially 2nd (except for a few banks) and 3rd generation private banks: full e-banking; (ii) 1st generation private banks and some 2nd generation private commercial banks: medium-range on-line banking systems; and (iii) nationalized commercial banks, specialized banks, and a few foreign bank branches of this subcontinent: low-grade e-banking systems.

At present, the banks in Bangladesh are using limited electronic banking services. It is expected that a bank can attain more profit and offer better services to its customers by introducing e-banking facilities. The foreign commercial banks operating in Bangladesh, like Standard Chartered Bank, Citi Corp. N.A., and HSBC, are the pioneers in introducing electronic banking facilities. They provide ATM, debit card, credit card, home banking, internet banking, phone banking, e-banking, etc. services. Among the indigenous banks, the private banks are ahead of the public ones. Prime Bank Ltd., Dhaka Bank Ltd., BRAC Bank Ltd., Dutch-Bangla Bank Ltd., Eastern Bank Ltd., and Mercantile Bank Ltd. have already stepped towards electronic banking facilities. Apart from these banks, Mutual Trust Bank Ltd. has also introduced an ATM service. Among the four nationalized commercial banks (NCBs), Janata Bank Ltd. has some access to electronic banking facilities. Bangladesh Bank, the central bank of Bangladesh, is also trying to formulate a wide structure of e-banking facilities. All of these private banks offer limited on-line banking services; most of them only offer services by providing ATM cards and do not offer the wide range of e-banking facilities which is the main advantage of e-banking. Depositing money in any branch and withdrawing money from an ATM machine can be considered the best e-banking facilities available in Bangladesh, while electronic money transfer has started on a limited scale. Sonali Bank Ltd. and Agrani Bank Ltd. are also providing on-line banking services on a limited scale, and Rupali Bank Ltd. is also developing e-banking.
BASIC Bank, which is 100 percent publicly owned but operates as a private-sector bank, has achieved technological advancement. A broad spectrum of Internet banking services, a subset of electronic finance, is available in Bangladesh with different degrees of penetration. Credit cards are available from VISA, MasterCard, and VANIK. Some foreign banks provide electronic fund transfer (EFT) services, though this is at an early stage and used on a very limited scale. Microchip-embedded smart cards are also becoming popular in the country, particularly for utility bill payment. Automated teller machines (ATMs) are expanding rapidly in major cities. A group of domestic and foreign banks operates a shared ATM network, which drastically increases access to this type of e-banking service. The network will gradually be extended to other parts of the country. The last couple of years have shown dramatic improvement in awareness in the banking sector regarding the comprehensive application of ICT. Local software companies have started competing to supply useful, complete banking software with all the basic features of a banking module. However, many forms of e-banking services cannot be offered in Bangladesh at this moment due to technological backwardness and underdeveloped physical and legal infrastructure. Those products would be very useful for export-oriented industry in reducing lead time in export and keeping a comparative advantage in the international market. For foreign remittances, four nationalized banks and fifteen private banks are working collaboratively with mobile phone service operators. Recently, remittances could be sent to Bangladesh through the banking channel by account transfer (normally taking 3 working days) or in the form of instant cash (taking 24 hours). Foreign residents can send their money and PIN (personal identification number) through mobile phones. As a result, money transfer has become relatively easy, quick, and hassle-free. But this system has also been superseded by the e-remittance system. According to Ahmed (April 15, 2010), in a revolutionary step on April 13, a mobile remittance service, or e-remittance, was introduced in the country, opening doors for millions of migrant workers to transfer their hard-earned money easily, effectively, and, most importantly, swiftly. The first-ever such remittance service for Bangladesh was jointly launched by two local banks, Dhaka Bank Ltd. and Eastern Bank Ltd., and the country's second largest mobile operator, Banglalink. The credit card facility cannot be extended fully in the country, as a common gateway between financial institutions has not been established. The pricing mechanism for the country's products is not competitive; rather, it is very volatile. This creates a negative impact on customers. The process of eradicating the digital divide has started very slowly. E-business provides the following services: e-marketing, e-shopping malls, e-marriage schemes, e-mail, e-tender, e-voting/polling, search engines, chat, e-commerce, e-stamps, e-cash, e-music, e-entertainment, e-treatment, e-advocacy, etc. E-governance can help to achieve good governance in the country. If the government does not take proper initiatives to spread the computerization process, then there will be no benefit. The entrepreneurship development fund (EDF) of Bangladesh Bank should be utilized properly.
Only a few companies can avail themselves of the fund. ICT-related companies are trying to develop e-business processes, but their activities are limited. If proper e-business procedures can be developed in the agribusiness sector, especially in rural areas, through utilizing e-technology, it will be beneficial for the producers of agricultural commodities. An acute shortage of human resources interested in doing e-banking business is also one of the main reasons for lagging behind. However, as the law-and-order situation deteriorates, there is a positive impact on e-banking from the standpoint of personal safety. From the field visit, it was revealed that the banking sector requires rapid modification and adaptation to keep in harmony with world business. This becomes more obvious from the increased number of customers of some modern banks while others are losing them. In the context of Bangladesh, a country of more than 150 million people, it is to be realized that there is no option but to join the current trend. According to a news report published in the New Nation on August 28, 2009, the government has formulated a policy on national information and communication technology as part of its announced plan for the digitization of the nation. The policy has earmarked activities in three phases, with short-, medium- and long-term plans to be implemented by 2021. The government aims at doubling the gross domestic product (GDP) during this time to achieve the goal.

The policy details suggest a number of activities, including spreading the use of the keyboard by functionaries at different levels, encouraging the use of standard code by software sellers, developing a national web portal, and popularizing the use of e-citizen services, such as paying service charges through mobile phones or ticketing. Land registration, passport renewal, digitization of police case diaries and case positions in the courts, the spread of broadband internet throughout the country, and other such essential services may also be brought under the scheme.

The new policy will be the common property of all departments and organs of the state, targeted at developing a digitized nation within the stipulated time. Most of the existing banking system in the country outside Dhaka and Chittagong cities is manual (paper based); that is why it is awkward, slow, and error-prone. On the one hand, it fails to meet customers' demands, and on the other hand, it causes significant losses both for the banking authorities and for traders. Electronic banking solves the above problems. Furthermore, it opens up other salient prospects such as increased foreign trade and foreign investment. According to a report on "Bangladesh is developing electronic payment infrastructure" (May 20, 2008), the Securities and Exchange Commission (SEC) in Bangladesh proposed that IT Consultants Limited (ITC), the manager of the Q-cash brand of ATMs and different cards, raise its paid-up capital up to Tk. 500 million if the company is to proceed to an initial public offering. The Securities and Exchange Commission (SEC) has asked us to raise the company's paid-up capital to Tk.
50 crore from the current Tk. 37 crore. In case the company fails to comply with the SEC requirement to increase the paid-up capital within the specified time, it will have to obtain the approval of the SEC again. This measure is believed to extend the sphere of the company's influence. The company began as a private limited business in 2001, but it is now the local leader in electronic payment systems, which are developing in the country with increasing speed. ITC possesses the necessary tools to process transactions for banks and retailers. It has the largest independent network of more than 100 ATMs in the country.

RESEARCH FINDINGS AND ANALYSIS

Based on the sample of five hundred customers who are habituated to the e-banking system, the following results have been gathered (Table 1).

Table 1. Opinions of customers who are habituated to the e-banking system (respondents who expressed a "yes" comment).

E-banking services are relatively better than the manual system: 75%
E-banking provides good customer service: 72%
Just-in-time services in banking can be provided: 38%
Bank personnel behave properly: 46%
The dealing officer is well conversant with his/her respective desk work: 40%
Banking services have technologically improved but their quality worsened: 42%

Source: Compiled on the basis of customers' responses.

From the above findings it is observed that the impact of e-banking shows mixed results, though most of the customers agree that it provides good customer service; this supports the null hypothesis. But the problem is that customers think that, while banking services have technologically improved, their quality has worsened. In the rest of the opinion survey, most of the customers gave "yes" responses, which also indicates that the null hypothesis is correct. However, the opinion poll included a question regarding whether the dealing officials of the commercial banks are well conversant with their desk work. The replies indicate that 60% of the customers of Chittagong city think that the dealing officers of the banks are not well conversant with their desk work. E-business has still not progressed very much in Bangladesh, and mass awareness has not been achieved. The country faces the problem of developing human-ware. Without human capital conforming to international standards, Bangladesh is not able to compete in the global market, and successful e-business is not feasible. The field-level study observes that nationalized commercial banks and specialized banks are still lagging behind in on-line banking services. Moreover, customers are not satisfied with the quality of the services, and they are not very happy with the behavior of the bank personnel.

The study also reveals that e-business, especially with the help of e-banking, can manage the economy of Bangladesh in a better way, as customer relationship management improves. Local banking software should be developed properly and must have greater accessibility within and outside the country. Moreover, local entrepreneurs are not undertaking any sort of strategic planning to produce hardware, especially computers and their accessories.
The central bank should adopt the latest technology, but due to a lack of vision it is adopting old technology; the MICR system should be replaced by a cheque truncation system. The shortage of technology-based human resources and the poor telecommunication infrastructure need to be overcome to break the low-equilibrium trap. Bridging the digital divide would provide technology-based human resources who can contribute to raising gross domestic product (GDP), national savings, investment, and the creation of employment, and to moving out of the vicious circle of underdevelopment. Numerous problems have been identified in the on-line banking system in Bangladesh. Some of them are the following: ... (9) bias of the bank management towards foreign software; and (10) legal barriers and the lack of an appropriate policy framework.

A number of customers taking banking services are not capable of bearing the cost of additional equipment like computers, computer accessories, internet access, etc. in their own organizations or at home. Using internet facilities is still very costly, and people have little knowledge of operating computers. A few cybercafés are available, but for banking purposes customers do not feel safe using these facilities. As a result, the total number of customers who are habituated to on-line banking systems is limited. In these circumstances investment in establishing e-banking facilities seems profitless. Although e-banking has bright prospects, it involves some financial risks as well. The major risks of e-banking include operational risks (e.g., security risks, and system design, implementation, and maintenance risks), risks of customer misuse of products and services, legal risks (e.g., without proper legal support, money laundering may be facilitated), strategic risks, reputation risks (e.g., if the bank fails to provide secure and trouble-free e-banking services, this will cause reputation risk), credit risks, market risks, and liquidity risks. Therefore, the identification of relevant risks, and the formulation and implementation of proper risk management policies and strategies, are important for the scheduled banks while performing e-banking.
CONCLUSIONS AND RECOMMENDATIONS

The global financial system is getting stronger day by day, and it is being strengthened by e-business. Around the globe, the consumer market has great potential, and producers must be active; otherwise they may lose their market share. Customer retention is feasible through e-business; otherwise, if switching costs are low and other factors between two companies are similar, a customer will switch from one company to another where technological advancement is relatively higher. Moreover, the rate of call charges for cell phones should be lowered, and hidden costs in cell phone services should be removed. However, e-banking, like electronic fund transfer generally, is electronic data interchange and is not free from risk: not only security risk, but also the cost of transactions may be raised. In this regard, producers will also be rewarded and monetary gain can be attained. E-business, especially with the help of e-banking, can manage the economy of Bangladesh in a better way, as customers' satisfaction can be increased. To implement e-business in Bangladesh successfully, the following recommendations are given:

(1) In Bangladesh, on-line banking systems are still at a take-off stage. The clearing house operation in Bangladesh should be a fully automated system. Banks and business organizations, especially corporate houses, should have adequate research, skilled manpower, and technology-driven strategies in this regard.

(2) The country needs to develop e-business with the help of ICT facilities. ICT application and the development of software are very much dependent on the quality of the workforce and a supportive infrastructure and environment. The upazilla level may be considered the base unit, which may be connected with the districts, and then connectivity with the capital of the country can be established. However, more stress should be placed on wire-free connectivity, for which priority should be given to WiMAX technology.

(3) Initiatives may be taken to develop integrated e-banking software in-house. Preference should be given by the bank authorities to local software over foreign software. A common gateway is required so that interbank transactions become feasible. Banks can charge a normal profit to enlarge the market for on-line banking products. Banks should have their own strategic plans to implement the e-banking system. Creating awareness and consciousness among the clients of the banks is also required.

(4) Public and private participation (PPP) in e-business should be encouraged for economic development. The spread of on-line banking is a very good initiative, but it is not sufficient on its own. The business sector as a whole should be focused on using e-business. It should be accompanied by an e-governance system and should be moved towards other areas of the "e" to "e" system like e-tender, e-trafficking, e-ticketing, e-learning, etc. More stress should be given to wireless transactions and working environments due to rapid technological advancement.

(5) The career paths of hardware and software engineers should be properly designed; otherwise professionals will be de-motivated and will not work with job satisfaction.

(6) E-business should be used both in the agricultural sector and in the industrial sector. Equal importance should be given so that domestic trade and international trade can be effective. Distortions should be driven out of the market, and information should be passed on systematically.
(7) Quality maintenance of local software should be arranged. Initiatives should be taken to set up a hardware industry so that computers and computer accessories can be produced in the country and made easily affordable for lower- and lower-middle-class people. Quality education and training in the field of ICT to develop human resources are essential. Moreover, entrepreneurship should be promoted for developing hardware and computer peripherals.

(8) More high-speed fiber-optic data communication infrastructure should be established for speedy domestic and global data communication. This will help to attain better e-business, including the on-line banking system. A competitive situation should be arranged so that e-business management can be improved through the efficiency and effectiveness of customer services.

(9) E-business can help to improve total quality management. This can also ensure quality assurance in the business sector. As such, business policies and strategies are required and should be properly formulated and implemented. Adequate training and technological support should be developed so that trained manpower and technology-driven organizations can be created with the help of partnerships between government and non-government organizations.

(10) Digital Bangladesh may be realized by 2021 to develop the economy of the country. Successful and coherent team building for developing human-ware, hardware, software, and web-ware is required to advance the e-business process in a systematic way. Moreover, greater emphasis should be placed on the security system and on preventing fraud so that any sort of financial transaction, including on-line banking payments or any other electronic fund transfer, can be properly handled.

(11) BTRC, as a regulatory body, should work with a long-term vision, mission, and goal-oriented strategies. It should work as a facilitator rather than creating hindrances. VOIP should be legalized after examining and finalizing proper rules and regulations in the country.

(12) Regulatory issues relating to the security measures of e-banking can be improved in the following ways: (a) analyzing the potential risks in the electronic payment systems; (b) trading off the efficiency of the financial system against the amount of risk incurred; (c) addressing competitive pressures that may encourage banks to engage in competitive deregulation; (d) effective provision and arrangement of cryptography and its applications; and (e) fostering the willingness of more customers to accept e-business as the psychological patterns of customers change.
Gross Tumor Volume Definition and Comparative Assessment for Esophageal Squamous Cell Carcinoma From 3D 18F-FDG PET/CT by Deep Learning-Based Method
Background The accurate definition of the gross tumor volume (GTV) of esophageal squamous cell carcinoma (ESCC) can promote precise determination of the irradiation field and thereby improve the curative effect of radiotherapy. This retrospective study assesses the applicability of a deep learning-based method for automatically defining the GTV from 3D 18F-FDG PET/CT images of patients diagnosed with ESCC. Methods We perform experiments on a clinical cohort of 164 18F-FDG PET/CT scans. The state-of-the-art esophageal GTV segmentation deep neural network is first employed to delineate the lesion area on the PET/CT images. We then propose a novel equivalent truncated elliptical cone integral method (ETECIM) to estimate the GTV value. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) are used to evaluate segmentation performance. The conformity index (CI), degree of inclusion (DI), and motion vector (MV) are used to assess differences between the predicted and ground truth tumors. Statistical differences in the GTV, DI, and position are also determined. Results We perform 4-fold cross-validation for evaluation, reporting DSC, HD, and MSD of 0.72 ± 0.02, 11.87 ± 4.20 mm, and 2.43 ± 0.60 mm (mean ± standard deviation), respectively. Pearson correlations (R2) reach 0.8434, 0.8004, 0.9239, and 0.7119 for the four folds, and there is no significant difference (t = 1.193, p = 0.235) between the predicted and ground truth GTVs. For DI, a significant difference is found (t = −2.263, p = 0.009). For the position assessment, there is no significant difference (left-right in the x direction: t = 0.102, p = 0.919; anterior-posterior in the y direction: t = 0.221, p = 0.826; cranial-caudal in the z direction: t = 0.569, p = 0.570) between the predicted and ground truth GTVs. The median CI is 0.63, and the obtained MV is small. Conclusions The predicted tumors correspond well with the manual ground truth. The proposed GTV estimation approach, ETECIM, is more precise than the most commonly used voxel volume summation method. The ground truth GTVs can be recovered from the predicted results thanks to their good linear correlation. Deep learning-based methods thus show promise for GTV definition and clinical radiotherapy application.
INTRODUCTION
According to the latest 2020 global cancer statistics, esophageal cancer (EC) ranks seventh in incidence (3.1%) and sixth in mortality (5.5%) (1). EC comprises two main histologic subtypes, squamous cell carcinoma and adenocarcinoma, of which esophageal squamous cell carcinoma (ESCC) is relatively radiosensitive (1). As a result, radiotherapy is a significant component of comprehensive therapy for ESCC patients. Clinical radiation treatment involves three steps: CT localization, irradiation field (IF) delineation, and radiotherapy planning. Among these, an excessive IF is currently the major problem, as it may cause radiation injury of the lungs, pneumonia, oesophagitis, etc. A main reason for an excessive IF is the inaccurate definition of the target volume, which currently relies on manual delineation; this not only keeps radiologists on a treadmill of exhausting work but also lacks consensus, owing to high inter- and intra-observer variability (2,3).
Thus, the precise definition of the target volume is vital for curative treatment. From this point of view, this work leverages an artificial intelligence-based method to explore the accurate definition of the target volume, as an aid to help clinicians determine a more precise IF. Target volume definition involves the accurate delineation and prediction of the gross tumor volume (GTV) on medical images (4). For one thing, once the GTV is established, the clinical target volume is defined by expanding and measuring the adjacent sub-clinical disease margins, taking involved metastatic lymph nodes and organs at risk into consideration (2,5). The clinical target volume plus a further margin then gives the planning target volume (6). Precise knowledge of the GTV thus helps maximize the therapy delivered to the target lesion while minimizing damage to the surrounding normal organs and tissues (7). For another, other metabolic metrics with potential prognostic value can be derived from the GTV, such as the total lesion glycolysis and the total tumor surface ratio (8). Meanwhile, the GTV has been demonstrated to be an important prognostic determinant for ESCC patients (9,10), and the research of Dubben et al. suggested that individual tumor volume should be reported in clinical studies and considered in data analyses (11). Currently, fluorine 18-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT)-guided precise radiotherapy plays an important role for EC patients (12,13), as this multi-modality imaging technique simultaneously provides metabolic and anatomical information that are complementary for determining and correcting the GTV (14-16). On this basis, we retrospectively analyze an annotated clinical 3D 18F-FDG PET/CT image set of 164 patients diagnosed with ESCC, to assess the feasibility of automatically defining the GTV with an artificial intelligence-based method. At present, advances in deep learning-based GTV delineation for EC are showing promise (2,17-20), but there has been limited research into GTV estimation. We therefore put more effort into the estimation part. The old-fashioned estimation method uses a cuboid structure, which first requires determining the six furthest points of the tumor in its six principal directions (4). As most tumors grow like a sphere or spheroid, the cuboid structure includes extra normal tissue that should not be irradiated (4). Later, the spherical shape produced by conformal planning was considered (4). In 2006, Crehange et al. treated the tumor as two opposing truncated cones and presented a volumetric assessment method (10). Although these rough approximations come closer and closer to the target shape, a certain error remains. The most common current method for GTV estimation is to compute the sum of the lesion voxel volumes in the medical images (21,22). But since the tumor marginal area does not fill the pixel grids, the GTV predicted by this method is actually larger than the true value. According to a recent study, an equivalent ellipse can closely fit elliptical or circular aggregate particles (23). This motivates us to use an equivalent ellipse to fit the lesion area on each axial slice. Next, inspired by the volumetric assessment method of Crehange et al., we take the volumetric tumor between two adjacent slices as a truncated elliptical cone and combine this with an integral technique to estimate the GTV value.
In this way, the estimated GTV comes closer to the actual value than the voxel volume summation method, which includes the extra volume capacity in the corners of the cuboid voxels. Before GTV estimation, the lesion must be segmented from the 18F-FDG PET/CT images. To achieve this, we employ the state-of-the-art (SOTA) esophageal GTV segmentation network, the progressive semantically-nested network (PSNN), to delineate the tumor regions (2). To summarize the whole process: we first leverage the SOTA esophageal GTV segmentation network PSNN to perform the delineation; the newly proposed ETECIM is then used to estimate the GTV value; finally, we perform statistical analyses with the SPSS software package to make a comparative assessment, in order to evaluate the applicability of the deep learning-based method for automatically defining the GTV from 3D 18F-FDG PET/CT images of patients diagnosed with ESCC.
Data Acquisition and Ground Truth Generation
This retrospective study was approved by the Ethics Committee of Fudan University Shanghai Cancer Center (No. 1909207-14-1910). The requirement for written informed consent was waived, and the data were analyzed anonymously. We collected 166 ESCC patients enrolled between February 2014 and September 2019 at the Fudan University Shanghai Cancer Center. All 18F-FDG PET/CT scans were performed on a whole-body PET/CT scanner (Siemens Biograph mCT Flow PET/CT). After fasting for at least 6 h, all patients received a glucose level test; blood glucose levels had to be below 10 mmol/L. Whole-body 18F-FDG PET/CT acquisition started 1 h after the intravenous injection of 18F-FDG (7.4 MBq/kg). A spiral CT scan was conducted with the protocol 120 kV, 140 mA, and 5 mm slice thickness. The subsequent PET scan lasted 2-3 min per bed position, with PET images reconstructed iteratively using the CT data for attenuation correction. The final PET/CT images were clearly displayed and available in DICOM format. The DICOM files of the 18F-FDG PET/CT data were imported into ITK-SNAP software (Version 3.6, United States), and the ground truth GTVs were delineated by 2 experienced nuclear medicine physicians on the CT axial slices with reference to the corresponding PET images. A chief physician with over 15 years of clinical experience then reviewed and determined the final ground truth mask. The delineation follows the standards of an esophageal wall thickness >5 mm or an esophageal wall diameter (without gas) >10 mm. The inclusion criteria were: (1) pathologically confirmed esophageal squamous cell cancer; (2) complete and available 18F-FDG PET/CT scan data before radiotherapy; (3) complete and available manual delineation for each 18F-FDG PET/CT data set. Two patients were excluded for lack of an intact ground truth GTV, so a total of 164 patients were finally included in the study population. To ensure the rationality of the experiments, this study performs 4-fold cross-validation for evaluation.
Data Pre-Processing
The reconstructed CT scans have two spatial resolutions, 0.98 × 0.98 × 5 mm3 and 1.37 × 1.37 × 5 mm3, and the reconstructed PET scans have 4.06 × 4.06 × 5 mm3 and 4.07 × 4.07 × 5 mm3. All CT slices have a matrix size of 512 × 512, whereas the PET slices come in two sizes, 200 × 200 and 168 × 168.
Thus, all PET slices were up-sampled in the axial plane to a size of 512 × 512 via the bicubic interpolation algorithm (24). We chose bicubic interpolation because it conserves detailed information, which is vital in the segmentation step. The diversity of spatial resolutions was left unchanged to enhance the robustness of the segmentation network. Next, to improve the contrast between the lesion area and the surrounding soft tissue in the CT images, pixel values outside the range −150 to 150 were clipped to −150 and 150. The PET and CT images were then all normalized to the interval [0, 1]. Last, although the PET/CT images had been registered by the hardware of the PET/CT scanner (Siemens Biograph mCT Flow PET/CT), a slight deviation remains, caused by involuntary respiratory movement of the patient during image acquisition. As the focus of this work is not registration, we simply use a multi-modal intensity registration algorithm to correct the deviation (25).
Segmentation Model and Training
After pre-processing, the dual-modality images (PET and CT) were used for automatic segmentation of the esophageal GTV based on the deep network PSNN (2). Jin et al. reversed the direction of the deeply-supervised pathways in the progressive holistically-nested network (26) and combined this with the structure of U-Net (27) to design the novel PSNN architecture (2). They demonstrated that their parameter-less PSNN progressively aggregates higher-level semantic features down to the lower-level space in a deeply-supervised way, achieving SOTA segmentation performance for the esophageal GTV. Hence, this work followed the setup described in (2) to build the PSNN model for the GTV auto-segmentation task. For training, data cropping was first conducted. Because esophageal carcinoma occupies only a small fraction of a PET/CT image, each PET/CT volume was cropped to a region of interest to alleviate both the class imbalance issue and storage limits. We then randomly extracted 16 training patches of size 64 × 64 × 64 from each region of interest and applied one of the following augmentations: rotate 90°, flip left-right, flip up-down, flip left-right and then rotate 90°, or leave unchanged. The number of training volumes increased 16-fold after data augmentation. Training was performed on a Windows server equipped with Nvidia GeForce GT 710 graphical processing units. The Adam optimizer with an initial learning rate of 10^-2 (reduced by a factor of 0.95 every 5 epochs) was applied for gradient descent optimization.
GTV Estimation Based on ETECIM
The commonly used method for GTV prediction is to compute the sum of the lesion voxel volumes (21,22). But since the tumor marginal area does not fill the voxel grids, the GTV predicted by this method is larger than the actual value. As shown in Figure 1A, a cross-sectional view of this voxel volume summation method, the lesion mask is the central white part, whereas the area predicted by summing pixels additionally covers the hatched section. The GTV estimated by this method therefore includes the extra volume capacities in the corners of the cuboid voxels. According to a recent study, an equivalent ellipse can closely fit elliptical or circular aggregate particles (23).
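Before developing the geometric estimator, the pre-processing and patch-augmentation steps described above can be condensed into a short sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the CT window and target sizes come from the text, scipy's order-3 spline zoom stands in for bicubic interpolation, and all function names are ours.

import numpy as np
from scipy.ndimage import zoom  # order-3 spline as a stand-in for bicubic resampling

def preprocess_ct(ct: np.ndarray) -> np.ndarray:
    """Clip CT values to [-150, 150] and rescale to [0, 1]."""
    return (np.clip(ct, -150.0, 150.0) + 150.0) / 300.0

def preprocess_pet(pet_slice: np.ndarray, target: int = 512) -> np.ndarray:
    """Up-sample an axial PET slice (200x200 or 168x168) to target x target
    and rescale to [0, 1]."""
    f = target / pet_slice.shape[0]
    up = zoom(pet_slice, (f, f), order=3)
    return (up - up.min()) / (up.max() - up.min() + 1e-8)

def augment_patch(patch: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one of the five augmentations named in the text to a
    64x64x64 patch; axes (1, 2) span the axial plane."""
    choice = rng.integers(5)
    if choice == 0:
        return np.rot90(patch, k=1, axes=(1, 2))              # rotate 90 degrees
    if choice == 1:
        return patch[:, :, ::-1]                              # flip left-right
    if choice == 2:
        return patch[:, ::-1, :]                              # flip up-down
    if choice == 3:
        return np.rot90(patch[:, :, ::-1], k=1, axes=(1, 2))  # flip, then rotate
    return patch                                              # leave unchanged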
The equivalent-ellipse observation above motivates us to fit the lesion area with an equivalent ellipse and to adopt a geometric approach to GTV estimation that avoids the shortcoming of the voxel volume summation method. Specifically, inspired by the method of Crehange et al., which roughly considered the tumor as two opposing truncated cones (Figure 1B) (10), we treat the volumetric tumor between two adjacent slices as a truncated elliptical cone and apply the integral technique to estimate the GTV value. The proposed method is described in detail as follows. Suppose that the foreground of the binarized ground truth or predicted mask is a system of N mass points. Since every point has the same gray value, we assign each unit mass, with coordinates (x_1, y_1), (x_2, y_2), ..., (x_N, y_N) taken relative to the center of the foreground. Consider a line L passing through the origin (0, 0). As the foreground (an arbitrarily shaped lesion) can be treated as a planar rigid body, the moment of inertia of the foreground rotating about line L is

I = \sum_{i=1}^{N} d_i^2,   (1)

where d_i is the perpendicular distance from point (x_i, y_i) to line L. If the two direction cosines of line L are a and b, then d_i = |b x_i − a y_i| and formula (1) can be rewritten as

I = a^2 I_x − 2ab I_{xy} + b^2 I_y,   (2)

where I_x = \sum_{i=1}^{N} y_i^2 and I_y = \sum_{i=1}^{N} x_i^2 denote the moments of inertia of the foreground about the X-axis and Y-axis, and I_{xy} = \sum_{i=1}^{N} x_i y_i denotes the product of inertia. Formula (2) can be interpreted in a simple geometric way. A second-order curve C centered at the origin of coordinates can be expressed as

A x^2 − 2H xy + B y^2 = 1,   (3)

where A, B, and H are constants. Using r to represent the vector from the origin to the curve, with direction cosines a and b, we have x = ra and y = rb, so formula (3) becomes

r^2 (A a^2 − 2H ab + B b^2) = 1.   (4)

Comparing with formula (2), if we set A = I_x, B = I_y, and H = I_{xy}, formula (4) is equivalent to

I r^2 = 1,  i.e.  r = 1/\sqrt{I}.   (5)

As the moment of inertia I is always greater than zero, r must be finite; that is, the second-order curve C is closed. Therefore C must be an ellipse, which is called the inertia ellipse. Hence, from the moments of inertia of the foreground, a corresponding inertia ellipse is obtained that simulates the distribution of pixels in the foreground. Because the foreground and its inertia ellipse have approximately the same area, the inertia ellipse is also called the equivalent ellipse of the foreground (28). The orientations of the two principal axes of the equivalent ellipse are found by solving the eigenvalue problem of the second-order curve C. Let k and l denote the slopes of the two principal axes; they are the two roots of

I_{xy} m^2 + (I_y − I_x) m − I_{xy} = 0,   (6)

i.e. m = [(I_x − I_y) ± \sqrt{(I_x − I_y)^2 + 4 I_{xy}^2}] / (2 I_{xy}). Letting φ_1 and φ_2 respectively denote the acute angles between the long and short principal axes and the positive X-axis, we get φ_1 = arctan(−k) and φ_2 = arctan(−l). Accordingly, using the approximate area M of the equivalent ellipse (the number of pixels in the foreground multiplied by the unit pixel area), the half-lengths a and b of the two principal axes can be recovered from the area constraint together with the axis ratio implied by the inertia ellipse,

π a b = M,  a/b = (λ_max/λ_min)^{1/2},   (7)

where λ_min and λ_max are the two eigenvalues of the inertia matrix. As depicted in Figure 2, the esophageal carcinoma of a patient can be approximately assessed by the corresponding equivalent ellipses. Figure 2 shows that the equivalent ellipses accurately simulate the distribution of tumor pixels. Moreover, for adjacent slices, the acute angles between the long principal axes and the positive X-axis (the intersection angles between the green line segments and the blue horizontal lines) change little.
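To make the construction concrete, the sketch below gives one consistent implementation of formulas (1)-(7): coordinates are taken relative to the lesion centroid, the orientation follows from the inertia terms, and the semi-axes are recovered from the area constraint and the eigenvalue ratio. The names and the eigenvalue-based formulation are ours, offered as an illustration rather than the paper's code.

import numpy as np

def equivalent_ellipse(mask: np.ndarray, pixel_area: float = 1.0):
    """Equivalent ellipse of a binary lesion mask.
    Returns (a, b, phi): long and short semi-axes and the angle of the
    long principal axis to the positive x-axis (image-axis convention)."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()                      # centroid-relative coordinates
    y = ys - ys.mean()
    Ix, Iy, Ixy = np.sum(y**2), np.sum(x**2), np.sum(x * y)
    # Eigenvalues of the inertia matrix [[Ix, -Ixy], [-Ixy, Iy]]
    mean, diff = 0.5 * (Ix + Iy), 0.5 * (Ix - Iy)
    rad = np.hypot(diff, Ixy)
    lam_min, lam_max = mean - rad + 1e-12, mean + rad + 1e-12
    # Long-axis orientation (equivalent to the slopes k, l of formula (6))
    phi = 0.5 * np.arctan2(2.0 * Ixy, Iy - Ix)
    # Formula (7): area pi*a*b = M, axis ratio a/b = sqrt(lam_max/lam_min)
    M = mask.sum() * pixel_area
    ratio = np.sqrt(lam_max / lam_min)
    b = np.sqrt(M / (np.pi * ratio))
    return ratio * b, b, phi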
Therefore, we take the tumor volume between two adjacent slices as the volume of an equivalent truncated elliptical cone and sum all the equivalent volumes of adjacent slices to obtain the final GTV estimate. For brevity, we call this GTV prediction method the equivalent truncated elliptical cone integral method (ETECIM), defined as

V = \sum_{i=m}^{n−1} (π h / 6) [ 2 (a_i b_i + a_{i+1} b_{i+1}) + a_i b_{i+1} + a_{i+1} b_i ],   (8)

where m and n respectively denote the sequence numbers of the cranial and caudal slices of the tumor, h is the axial resolution, and a_i and b_i are the half-lengths of the long and short principal axes of the equivalent ellipse in the i-th slice. The term (π h / 6)[2(a_i b_i + a_{i+1} b_{i+1}) + a_i b_{i+1} + a_{i+1} b_i] is the volume of the equivalent truncated elliptical cone between the i-th and (i+1)-th slices. To sum up, Figure 3 provides an overview of the whole GTV definition process for an ESCC patient. Our measurement approach can also be viewed against the Response Evaluation Criteria in Solid Tumors (RECIST) guideline (29). The RECIST guideline defines that, at baseline, measurable tumor lesions must be accurately measured in at least one dimension (longest diameter), with a minimum size of 10 mm for CT scans with slice thickness no greater than 5 mm. For a target lesion less than 10 mm (too small to measure), a default measurement of 5 mm should be recorded if the lesion is still present. The RECIST evaluation also states that using software tools to calculate the maximal diameter of the perimeter of a tumor lesion may even reduce variability. From this perspective, this work proposes ETECIM to refine the measurement of esophageal tumors. Further, the estimation of the longest and shortest diameters leads to the volumetric assessment, which has been demonstrated to be an important prognostic determinant for ESCC patients (9,10). The proposed software algorithm ETECIM therefore refines the measurement of esophageal tumors relative to the RECIST guideline.
Evaluation Parameters
Segmentation performance is evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD). DSC measures the spatial overlap between the predicted lesion and the ground truth (30). HD and MSD respectively measure the maximum distance between, and the agreement of, the predicted and ground truth contours (31). From the predicted and ground truth tumors, the true positives (TP), false positives (FP), false negatives (FN), predicted contour (P), and ground truth contour (G) can be calculated. The DSC, HD, and MSD are then defined as

DSC = 2 TP / (2 TP + FP + FN),

HD = max{ max_{p∈P} min_{g∈G} d(p, g), max_{g∈G} min_{p∈P} d(p, g) },

MSD = ( \sum_{p∈P} min_{g∈G} d(p, g) + \sum_{g∈G} min_{p∈P} d(p, g) ) / (|P| + |G|),

where d(p, g) denotes the Euclidean distance between surface mesh points p and g, and |P| and |G| denote the total number of voxels in contours P and G, respectively. DSC takes values in [0, 1]; the closer to 1, the larger the spatial overlap between the predicted lesion and the ground truth. The HD and MSD values are greater than or equal to 0; the closer to 0, the better the segmentation performance. For the comparison of predicted and ground truth GTVs, the conformity index (CI), degree of inclusion (DI), and motion vector (MV) are used. CI and DI assess the spatial relationship, and MV measures the positional change (12,13,32). For volumes A and B, CI and DI are defined as

CI(A, B) = |A ∩ B| / |A ∪ B|,  DI(A in B) = |A ∩ B| / |A|.

CI takes values from 0 to 1, with 1 meaning that A and B are in complete agreement. For DI, if volume B is the reference standard volume and treatment planning is based on volume A, then a fraction [1 − DI(A in B)] of volume A will be unnecessarily irradiated and a fraction [1 − DI(B in A)] of volume B will be the missed irradiation part (13).
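Formula (8) is exactly what one obtains by linearly interpolating the semi-axes between slices and integrating the elliptical cross-section π a(z) b(z): with a(t) = a_i + (a_{i+1} − a_i) t and b(t) = b_i + (b_{i+1} − b_i) t for t ∈ [0, 1],

π h \int_0^1 a(t) b(t) dt = (π h / 6) [ 2 (a_i b_i + a_{i+1} b_{i+1}) + a_i b_{i+1} + a_{i+1} b_i ].

Building on the equivalent_ellipse sketch above, ETECIM then reduces to a few lines; the voxel-summation baseline discussed in the text is included for comparison. Again, this is an illustrative sketch under our own naming, not the authors' implementation.

import numpy as np

def etecim_volume(masks, h: float, pixel_area: float) -> float:
    """ETECIM estimate (formula 8) from a cranial-to-caudal stack of
    binary per-slice lesion masks; h is the axial spacing (5 mm here)."""
    ab = [equivalent_ellipse(m, pixel_area)[:2] for m in masks if m.any()]
    vol = 0.0
    for (a1, b1), (a2, b2) in zip(ab[:-1], ab[1:]):
        # Truncated elliptical cone between two adjacent slices
        vol += (np.pi * h / 6.0) * (2.0 * (a1 * b1 + a2 * b2) + a1 * b2 + a2 * b1)
    return vol

def voxel_volume(masks, h: float, pixel_area: float) -> float:
    """Voxel-summation baseline, which over-counts the tumor margin."""
    return sum(int(m.sum()) for m in masks) * pixel_area * h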
For the calculation of MV, the centers of mass (COM) of volumes A and B are first measured. The displacement of the COM between volumes A and B is then obtained in the x (left-right, LR), y (anterior-posterior, AP), and z (cranial-caudal, CC) directions. Finally, MV is calculated as

MV = \sqrt{ LR^2 + AP^2 + CC^2 }.

Statistical Tests
Statistical analyses are performed using IBM SPSS Statistics 20.0. Pearson's correlation is used to assess the degree of association between the predicted and ground truth GTVs. The paired-sample Student's t-test is employed for the comparison of GTVs and DIs. A one-sample t-test is conducted for LR, AP, and CC. Descriptive statistics are presented as mean ± standard deviation (M ± SD). P-values lower than 0.05 are considered statistically significant.
Visual Comparison of the Predicted and Ground Truth Contours
Using the SOTA esophageal GTV segmentation deep neural model PSNN, we report 4-fold cross-validation results for DSC, HD, and MSD of 0.72 ± 0.02, 11.87 ± 4.20 mm, and 2.43 ± 0.60 mm (M ± SD), respectively. Segmentation visual results for two patients are shown in Figure 4. Overall, the predicted red contours agree well with the blue ground truth contours. Although there are slight biases between the predicted and ground truth lesions, some predicted red contours enclose the hot areas in the PET images better than the blue ground truth contours do.
Differences in GTV
Pearson's correlation is performed to assess the degree of association between the ground truth GTVs and the GTVs predicted by ETECIM; for comparison, it is also performed between the ground truth GTVs and the GTVs predicted by the voxel summation method. Results are shown in Figure 5. The coefficients of determination R2 obtained with ETECIM are 0.8434, 0.8004, 0.9239, and 0.7119 for the four folds, whereas those obtained with the voxel summation method are 0.8125, 0.7567, 0.9159, and 0.7123. This comparison indicates that the proposed ETECIM estimates GTV values more accurately than the commonly used voxel summation method. Further, the paired-sample Student's t-test is conducted to assess the difference between the GTVs predicted by ETECIM and the ground truth GTVs. For the first fold, no significant difference is found (t = 0.036, p = 0.971); for the second fold, no significant difference is found (t = 0.347, p = 0.731); for the third fold, there is a significant difference (t = 2.388, p = 0.022); for the fourth fold, no significant difference is found (t = 0.326, p = 0.746). Although a significant difference appears in the third fold, when the ETECIM-predicted GTVs and the ground truth GTVs of all folds are pooled for the paired-sample Student's t-test, no significant difference is found (t = 1.193, p = 0.235). These results indicate that the GTVs predicted by ETECIM are reliable. Besides, Figures 5A, C, E, G show linear correlations between the ground truth and predicted GTVs; hence, from the corresponding fitted functions, the ground truth GTV can be solved for when the predicted GTV value is given.
CI and Differences in DI
Using 4-fold cross-validation for evaluation, we report the M ± SD of CI as 0.60 ± 0.16, the median CI as 0.63, and the lower and upper quartiles of CI as 0.52 and 0.70, respectively.
DIs between the predicted esophageal tumors and the ground truth are shown in Table 1. The M ± SD of DI (PreT in GroT) and DI (GroT in PreT) are 0.72 ± 0.18 and 0.78 ± 0.20, respectively, and a significant difference is found between them (t = −2.636, p = 0.009).
Differences in Position
A one-sample t-test is conducted on LR, AP, and CC, respectively, with a test value of 0; no significant differences are found in any direction (LR: t = 0.102, p = 0.919; AP: t = 0.221, p = 0.826; CC: t = 0.569, p = 0.570).
DISCUSSION
18F-FDG PET/CT-guided precise diagnosis, treatment, and prognosis rely on the accurate definition of esophageal carcinoma. The current manual definition is time-consuming, operator-dependent, and variable, indirectly leading to the problem of an oversized IF. Thus, how to define the lesion area precisely and intelligently from the acquired medical images has become an urgent issue. Some studies have explored fully automatic delineation of esophageal carcinoma using deep learning-based methods; however, the estimation of GTV values and the relevant evaluation have been missing. In the present work, we take the automatic segmentation task one step further: we additionally estimate the GTV of ESCC and assess whether the intelligent definition method is potentially applicable to help clinicians determine a precise IF. We first employed the SOTA esophageal GTV segmentation deep model PSNN to conduct the automatic segmentation task, obtaining DSC, HD, and MSD of 0.72 ± 0.02, 11.87 ± 4.20 mm, and 2.43 ± 0.60 mm, respectively. In the visual results (Figure 4), despite slight biases between the predicted and ground truth lesions, good agreement is found overall, and some predicted red contours enclose the hot areas in the PET images more accurately. Based on the segmentation results from PSNN, we then proposed ETECIM to estimate the GTV values. To provide reliable references for potential clinical application, statistical analyses were conducted to evaluate the differences between the predicted results and the ground truth. Pearson's correlation yields coefficients of determination of 0.8434, 0.8004, 0.9239, and 0.7119 for the four folds between the ground truth GTVs and the GTVs predicted by the proposed ETECIM (Figures 5A, C, E, G). For comparison, Figures 5B, D, F, H illustrate the correlation between the ground truth GTVs and the GTVs predicted by the voxel summation method. The results demonstrate that the proposed ETECIM is more accurate and closer to the ground truth GTV than the voxel summation method. When the paired-sample Student's t-test was conducted, no significant difference was found (t = 1.193, p = 0.235) between the GTVs predicted by ETECIM and the ground truth GTVs. Besides, the good linear correlation allows the true GTV value to be derived. For CI and DI, which jointly reflect the geometrical differences between the predicted tumor and the ground truth, we report a median CI of 0.63, with the M ± SD of DI (PreT in GroT) and DI (GroT in PreT) being 0.72 ± 0.18 and 0.78 ± 0.20, respectively. According to the study of Shi et al. (13), a median CI of approximately 0.7 indicates that the predicted and ground truth tumors correspond well. For DI (PreT in GroT) and DI (GroT in PreT), a significant difference is found (t = −2.636, p = 0.009). DI (GroT in PreT) is larger than DI (PreT in GroT), so 1 − DI (GroT in PreT) is significantly less than 1 − DI (PreT in GroT) (t = 2.636, p = 0.009). This indicates that if radiotherapy is based on the predicted tumor, the possibility of missing part of the lesion is low, while a little unnecessary irradiation is delivered to the surrounding tissue.
But in practice, the GTV is contained within the clinical target volume, which describes the extent of microscopic, un-imageable tumor spread (4). The clinical target volume is obtained by expanding and measuring the adjacent sub-clinical disease margins around the GTV (as defined in China: GTV + 3 cm margins superiorly and inferiorly along the esophageal long axis, and GTV + 0.5 cm margins in cross-section to encompass potential submucosal invasion) (9). Therefore, the little unnecessary irradiation to the surrounding tissue is acceptable. As for how much irradiation is suitable and how large the margins added to the predicted GTV should be, detailed clinical treatment data are needed to study these questions. For differences in position, the one-sample t-test shows no significant differences in the LR (t = 0.102, p = 0.919), AP (t = 0.221, p = 0.826), and CC (t = 0.569, p = 0.570) directions, and the obtained MV is small. Hence, these results demonstrate that the segmented masks correspond well with the ground truth.
CONCLUSIONS
In this work, we have assessed the applicability of an artificial intelligence-based method for fully automatic GTV definition of ESCC on 3D 18F-FDG PET/CT. The visual segmentation results indicate good agreement between the predicted and ground truth tumors. The quantitative results demonstrate that the proposed ETECIM estimates GTV values more accurately than the most commonly used voxel volume summation method. Statistical analyses demonstrate that radiotherapy planning based on the predicted tumor is potentially feasible, and radiologists can adopt the artificial intelligence-based method to define the GTV of ESCC patients as an efficient auxiliary means of refining the manual definition and further determining a more precise IF. In the future, more studies based on specific clinical treatment data are needed to validate this application and push it forward.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
This retrospective study was approved by the medical ethics committee of our institution. Informed consent was obtained from all individual participants included in the study.
AUTHOR CONTRIBUTIONS
YY: conception, design, methodology, data assembly, software, writing (original draft). NL: conceptualization, design, data collection and delineation, validation, writing (review and editing). HS: writing (review and editing). DB: writing (review and editing). XL: supervision, conceptualization, writing (review and editing). SS: supervision, data collection and delineation. DT: supervision, writing (review and editing). All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
A collagen-based theranostic wound dressing with visual, long-lasting infection detection capability Continuous wound monitoring is one strategy to minimise infection severity and inform prompt variations in therapeutic care following infection diagnosis. However, integration of this functionality into therapeutic wound dressings is still challenging. We hypothesised that a theranostic dressing could be realised by integrating a collagen-based wound contact layer with previously demonstrated wound healing capability and a halochromic dye, i.e. bromothymol blue (BTB), undergoing a colour change following infection-associated pH changes (from pH 5-6 to pH >7). Two different BTB integration strategies, i.e. electrospinning and drop-casting, were pursued to introduce long-lasting visual infection detection capability through retention of BTB within the dressing. Both systems had an average BTB loading efficiency of 99 wt.% and displayed a colour change within one minute of contact with simulated wound fluid. Drop-cast samples retained up to 85 wt.% of BTB after 96 hours in a near-infected wound environment, in contrast to the fibre-bearing prototypes, which released over 80 wt.% of BTB over the same time period. An increase in collagen denaturation temperature (DSC) and red shifts (ATR-FTIR) suggest the formation of secondary interactions between the collagen-based hydrogel and the BTB, which are considered to account for the long-lasting dye confinement and durable dressing colour change. Given the high L929 fibroblast viability in drop-cast sample extracts (92%, 7 days), the presented multiscale design is simple, cell- and regulatory-friendly, and compliant with industrial scale-up. This design therefore offers a new platform for the development of theranostic dressings enabling accelerated wound healing and prompt infection diagnosis. Having previously demonstrated the wound healing capability of a collagen-based hydrogel in vivo [63,64,65], we hypothesised that a passive, visual early-warning infection diagnostic system could be integrated through the application of a halochromic dye, i.e. bromothymol blue (BTB). Two different integration strategies, i.e. drop-casting and electrospinning, were pursued to ensure long-lasting dye retention in the dressing matrix, compatibility with the wound environment, and a regulatory-friendly medical device classification. Experimental work was therefore carried out to assess dye loading, dye retention, and material colour change in simulated wound fluids. The microstructure of the resulting prototypes was inspected by electron microscopy and complemented by analytical techniques and rheology to investigate the effect of BTB encapsulation on material properties. Ultimately, the impact of the dressing on cellular activity was analysed via culture of L929 murine fibroblasts with sample extracts.
Materials
Rat tails were provided post-mortem by the University of Leeds (UK) and employed to extract type I collagen via acidic treatment. Dulbecco's modified eagle medium (low glucose, DMEM), trypsin, foetal bovine serum (FBS), and penicillin-streptomycin were purchased from Sigma. Phosphate buffered saline (PBS) was purchased from Lonza (Slough, UK). The calcein-AM/ethidium homodimer Live/Dead assay and alamarBlue™ Cell Viability Reagent were purchased from Thermo Fisher. Calcein-AM and ethidium homodimer were diluted to 2 and 4 µM, respectively, before use. All other chemicals were purchased from Sigma-Aldrich unless specified.
Preparation of drop-cast collagen-based materials
The collagen-based contact layer of the dressing prototype was realised as previously reported. [64] Briefly, type I collagen extracted in-house from rat tails (CRT) was dissolved in a 10 mM hydrochloric acid solution (0.25 wt.%, 10 mM HCl) under magnetic stirring at room temperature, and the pH was neutralised to 7.4. Following the addition of PS-20, 4VBC, and TEA, the reaction mixture was stirred for 24 hours at room temperature prior to precipitation in a 10-volume excess of pure ethanol. The functionalised collagen product (4VBC) was recovered by centrifugation (10,000 rpm, 30 min, 4 °C) and air-dried. A solution of the functionalised collagen supplemented with the photoinitiator I2959 (0.8 wt.% 4VBC, 1 wt.% I2959) was then prepared, cast onto a 12-well plate (1.2 g of solution per well), and UV irradiated at 365 nm (Chromato-Vue C-71, Analytik Jena, Upland, CA, USA) for 15 min on both the top and bottom sides. The UV-cured collagen hydrogels were thoroughly washed with deionised water and dehydrated in an increasing series of ethanol-deionised water solutions (0, 10, 20, 40, 60, 80, 3 × 100 vol.%). The resulting film was air-dried prior to further use. Cast films prior to and following UV curing were coded as TF and TF*, respectively. For the preparation of freeze-dried samples (FDs), the above-mentioned I2959-supplemented solution of functionalised collagen (0.8 wt.% 4VBC, 1 wt.% I2959) was poured into a 12-well plate (1.2 g of solution per well), covered in aluminium foil, and freeze-dried using an Alpha 2-4 LDplus (Martin Christ Gefriertrocknungsanlagen GmbH, Osterode am Harz, Germany). UV-cured freeze-dried samples were coded as FD*. After thorough vortexing, a drop of up to 100 µl of BTB solution (0.2 wt.% BTB in deionised water) was applied to the dry UV-cured samples, which were then exposed to air for up to 12 hours. The resulting BTB drop-cast samples of TF* and FD* were coded as D-TF* and D-FD*, respectively.
Preparation of two-layer electrospun constructs
The aforementioned samples of FD* were used as fibre collectors during electrospinning, to generate a two-layer composite structure of freeze-dried collagen and dye-encapsulated fibres. Electrospinning solutions were prepared in either HFIP (6 wt.% PCL) or a mixture of EtOH and DMF (2:1 ratio, 15 wt.% PMMA-co-MAA). [32] BTB was added (0.5 wt.% of the polymer weight) while the solution was stirred magnetically. BTB-loaded solutions of PCL were electrospun with an applied voltage of 16 kV, a flow rate of 1.8 ml·h−1, and a working distance of 10 cm. BTB-loaded solutions of PMMA-co-MAA were electrospun with an applied voltage of 11 kV, a flow rate of 0.5 ml·h−1, and a working distance of 18 cm. Constructs based on BTB-encapsulated fibres of either PMMA-co-MAA or PCL were coded as C-PMMA-co-MAA and C-PCL, respectively; the BTB-encapsulated fibres themselves were coded as D-PMMA-co-MAA and D-PCL, and the BTB-free fibre controls as F-PMMA-co-MAA and F-PCL.
Circular dichroism
Circular dichroism (CD) spectra were acquired using a Chirascan CD spectrometer (Applied Photophysics Ltd., Leatherhead, UK). The I2959-supplemented solution of functionalised collagen (0.8 wt.% 4VBC, 1 wt.% I2959) was loaded into a Type 106 100 µm demountable cell (Hellma, Müllheim, Germany), and BTB (30 µl, 0.2 wt.% in deionised water) was then added. The sample was cured under UV for 30 minutes and loaded into the Chirascan CD spectrometer. CD spectra were obtained with a 3 nm bandwidth and a 20 nm·min−1 scanning speed. A BTB-free sample was prepared and analysed as a control.
Differential scanning calorimetry
Differential scanning calorimetry (DSC) was conducted on a DSC Q100 (TA Instruments, Newcastle, DE, USA). Temperature scans were conducted in the range of −10 to 90 °C with a 10 °C·min−1 heating rate on both drop-cast samples and BTB-free controls. The DSC cell was calibrated using indium with a 20 °C·min−1 heating rate under a 50 cm3·min−1 nitrogen atmosphere. 5-10 mg of sample were used in each measurement.
Attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy
ATR-FTIR spectra were recorded on dry samples using a Spectrum One FT-IR spectrometer (PerkinElmer, Waltham, MA, USA) with a Golden Gate ATR attachment (Specac Ltd., London, UK). Scans were conducted from 4000 to 600 cm−1, with 100 repetitions averaged for each spectrum.
Rheology
Rheological data from both BTB-free and drop-cast samples of TFs were recorded following rehydration in deionised water, using an MCR 301 rheometer (Anton Paar, Graz, Austria). An amplitude sweep was initially conducted to identify the linear-elastic region using an angular frequency of 10 rad·s−1; a frequency sweep was then conducted on a fresh sample at room temperature with a constant amplitude of 1% and angular frequencies from 100 to 0.1 rad·s−1.
Quantification of BTB loading
To quantify the BTB loading content of drop-cast samples, a gravimetric method was employed (n=9). The mass of each individual sample (mi) was recorded using a precision balance before the BTB solution (100 µl, 0.2 wt.% BTB) was cast using a micropipette. After air-drying the samples for 48 hours, the final mass (mf) was recorded using a precision balance and the loading efficiency (LE) was calculated using Equation 1:

LE = (mf − mi) / mBTB × 100%,   (1)

where mBTB is the mass of BTB contained in the volume of the aqueous solution applied to the samples. To quantify the BTB loading in electrospun fibres (n=9), UV-Vis spectroscopy was employed using a UV-Vis spectrophotometer (Model 6305, Jenway, Dunmow, UK). A calibration curve was built for each fibre group by recording the absorbance at 432 nm; either ethanol or acetone was selected as the solvent of choice for BTB-containing fibres of PMMA-co-MAA and PCL, respectively. The calibration curves and equations are shown in Figure S1.
Colorimetry
The colour of the dressings was recorded using an SF600 Plus-CT spectrophotometer (Datacolor, Lucerne, Switzerland). After calibration against standard black/white tiles, samples of TF*, FD*, C-PCL, and C-PMMA-co-MAA were loaded into the instrument and analysed. Lightness, chroma, and hue were recorded at 7 different locations on each sample using a 3 mm window to obtain an average reading.
Scanning electron microscopy
Samples were inspected via scanning electron microscopy using a Hitachi S-3400N (Hitachi, Tokyo, Japan). All samples were gold-sputtered using an Agar Auto sputter coater (Agar Scientific, Stansted, UK) prior to examination. The SEM was fitted with a tungsten electron source, and the secondary electron detector was used. The instrument was operated with an accelerating voltage of 3 kV in a high vacuum with a nominal working distance of 10 mm. The cross-sectional morphology of BTB-encapsulated constructs was investigated after freeze-fracturing in liquid nitrogen and lyophilisation. Electrospun fibre and hydrogel pore diameters were measured manually using ImageJ software, with 50 repeats.
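For illustration, both BTB quantification routes reduce to one-line computations. The sketch below assumes a linear Beer-Lambert calibration of the form A = s·c + q (slope s and intercept q read off a curve such as that in Figure S1); the function names are ours, not from the study.

def loading_efficiency(m_initial: float, m_final: float, m_btb: float) -> float:
    """Equation 1: gravimetric loading efficiency in wt.%."""
    return 100.0 * (m_final - m_initial) / m_btb

def btb_concentration(a_432: float, slope: float, intercept: float) -> float:
    """BTB concentration from absorbance at 432 nm via the calibration line."""
    return (a_432 - intercept) / slope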
Extract cytotoxicity study
To evaluate the cytotoxicity of the BTB-encapsulated samples, 48-hour extracts from either thin films or freeze-dried samples were tested with L929 murine fibroblasts, using a resazurin assay and a live/dead assay. The L929 fibroblasts were seeded in a 24-well plate at a density of 1×10^4 cells per ml, with 1 ml in each well. Test wells contained extracts from BTB-loaded samples. Extracts of samples of FD* and TF* were obtained following incubation in McIlvaine solutions of either pH 5 or pH 8 (48 hours, 37 °C). BTB was then added to 1 ml of the resulting extract to produce solutions with a BTB concentration of 3.2 mM. 40 µl of this extract solution was added to each test well, which therefore contained an excess of BTB with respect to the values seen in the dye release measurements. For the resazurin assay, after incubation for 48 hours, the media was aspirated off, wells were washed with PBS, and fresh media was added with 10 vol.% alamarBlue solution. After brief shaking, the well plates were incubated for a further 6 hours. Cell viability was determined by incubating 150 µl of each well solution in a 96-well plate for fluorescence analysis, with an excitation wavelength of 560 nm and an emission wavelength of 590 nm, using Equation 2:

cell viability (%) = (fluorescence of test well / fluorescence of control well) × 100.   (2)

For the live/dead stain, after incubation for 48 hours, the medium was aspirated off, wells were washed twice with PBS, and 100 µl of premixed calcein-AM and ethidium homodimer stain were added. Well plates were then incubated for a further 45 minutes before being washed twice in PBS and imaged under a confocal microscope (TCS SP8, Leica, Wetzlar, Germany).
Statistical analysis
Data normality tests were carried out using OriginPro 8.51 software (OriginLab Corporation, Northampton, MA, USA). Statistical differences were determined by one-way ANOVA and the post hoc Tukey test. A p-value lower than 0.05 was considered significantly different. Data are presented as mean ± SD.
Results and discussion
In the following, the design, manufacture, and characterisation of a theranostic wound dressing are presented, aiming to safely integrate an infection-sensing colour change functionality with the previously demonstrated wound healing capability of a photoinduced collagen film (Figure S3). [63]
3.1 Design and microstructure of theranostic dressings
Figure 1A shows how the structure and colour of the halochromic dye bromothymol blue (BTB) change with pH. In an acidic environment (pH <7), the dye is yellow and can reversibly shift to blue if exposed to an alkaline medium (pH >7). This colour shift is associated with a variation in molecular configuration: below pH 7, BTB presents a monovalent anion through the sulfonate group; above pH 7, proton dissociation from the phenolic group results in a bivalent anion and increased negative electrostatic charge. [66] Leveraging this reversible molecular rearrangement, two manufacturing routes, i.e. drop-casting (Figure 1B) and electrospinning (Figure 1C), were pursued to accomplish a theranostic dressing enabling an infection-sensing colour change while ensuring retention of the dye in the wound dressing structure. The drop-casting method aims to produce a localised concentration of BTB on the dry collagen film, whereby a small volume of a BTB-loaded aqueous solution is delivered onto the sample, followed by solvent evaporation for at least 30 minutes.
Here, the negatively charged configurations of BTB are expected to form electrostatic complexes with the remaining positively charged primary amino terminations of the collagen matrix [64], aiming to accomplish dye confinement in the dressing structure. Electrospinning, on the other hand, is employed to generate a two-layer composite in which the freeze-dried collagen network serves as the fibre collector, with the aim of creating a homogeneous layer of BTB-encapsulated fibres. With this strategy, the underlying collagen layer is intended to act both as the wound dressing contact layer, supporting wound healing in situ, and as a structural barrier, ensuring minimal diffusion of the dye away from the dressing towards the wound. Overall, the two manufacturing routes are therefore expected to provide the resulting dressing with an integrated colour change capability on contact with infected wound exudate (i.e., at pH >7), whereby either the molecular scale (electrostatic interactions) or the microscale (fibrous layer) is leveraged to ensure long-lasting dye retention. Following the manufacture of the theranostic prototypes, the microstructure of both the dressing film and the electrospun construct was investigated (Figure 2). The cross-sectional morphology of a previously freeze-fractured sample was first visualised using SEM to determine whether the addition of BTB influenced the matrix morphology. A series of irregular pores typical of a crosslinked collagen network [67] was observed (Figure 2A-B), with pore diameters of 33±12 µm and 32±10 µm (n=50) for the BTB-free controls (TF*) and the drop-cast films (D-TF*), respectively. There was no statistical difference in pore diameter between the two samples; it was therefore concluded that the addition of BTB does not affect the microstructure of the film dressing. Together with the drop-cast samples, the BTB-encapsulated electrospun fibres were analysed to check the impact of BTB on fibre morphology and size. BTB-loaded solutions of PMMA-co-MAA and PCL generated uniform electrospun fibres with minimal bead formation, and with fibre diameters comparable to those seen in previously electrospun PCL meshes. [68,69] The average diameter of D-PCL fibres was 507±166 nm, comparable to that measured for D-PMMA-co-MAA fibres (Ø = 596±129 nm, n=50), in line with the minimal effect of soluble factor loading on fibre diameter. [70,71,72] Beyond the individual meshes, electrospinning of the above-mentioned fibres onto the collagen film was successfully demonstrated by SEM: the two-layer dressing structure, consisting of the freeze-dried collagen wound contact layer below a dense mat of nonwoven fibres, is clearly visible in both C-PCL and C-PMMA-co-MAA constructs (Figures 2D-E).
Loading efficiency, infection-sensing colour change and dye retention capability
Following the microstructure investigations, the BTB loading efficiency of the samples generated via both manufacturing strategies was studied via either UV-Vis spectroscopy or gravimetric analysis. Both sample types revealed an average loading efficiency of 99 wt.% (Table 1, which reports the loading efficiency LE /wt.% and the LCh colour coordinates for each sample ID), suggesting that both drop-casting and electrospinning are viable methods for integrating BTB into the theranostic wound dressing. Consistent with the high loading efficiency, all samples displayed a colour change following incubation at pH 5 and pH 8 (Figure 3A-B), which was visible by eye even after 1 minute of response time (Figure S4); quantitatively, this shift is reflected in the LCh colour values shown in Table 1.
This colour change proved to be reversible over multiple cycles, so that the yellow samples characteristic of an acidic environment switched to blue when transferred to an alkaline solution (Figure 3C). Having confirmed the dye loading efficiency and dressing colour change, it was critical to confirm that the dye was retained in the structure over a 96-hour incubation in simulated wound fluids, 96 hours being a clinically relevant dressing application time [73]. Retention of the dye in the dressing is key to ensuring a long-lasting infection-sensing capability and minimal risk of dressing-induced alteration of the wound environment. The drop-cast samples revealed significantly lower release values than the two-layer constructs (Figure 4A-B), indicating that BTB was retained by the collagen matrix, an observation that agrees with the previously observed localised distribution of the dye in the former samples (Figure 3). As presented in Figure 4C, samples of D-TF* showed a 96-hour averaged release of 3 wt.% and 13 wt.% in acidic and alkaline environments, respectively, with similar trends observed for the freeze-dried samples D-FD* (pH 5: R* = 5±2 wt.%; pH 8: R* = 15±1 wt.%). Given that these samples were drop-cast with different dye quantities (i.e., 200 µg for testing at pH 5 or 60 µg for testing at pH 8), a release of up to 10 µg of dye was observed regardless of the pH tested. The discrepancy in BTB loading between samples tested at pH 5 and pH 8 stems from the different calibration curves and UV-Vis spectroscopy detection limits (Figure S2). The significant retention of the dye by the dressing is attributed to the development of electrostatic interactions between the BTB and the primary amines in the collagen-based hydrogel. The protonated amine groups might also react with the sulfonate group on the BTB, a well-known leaving group [74,75], to form covalent bonds with the release of H2SO3 [76], though this is not supported by the release profiles presented in Figure 4. As shown in Figure 1A, BTB bears either one or two negatively charged groups, which can interact with any remaining protonated amines (pKa ~9) in the collagen hydrogel ([Lys] ~1.6×10−4 mol·g−1). [64] Evidence that the dye retention mechanism operates through these amine-dye interactions is seen in Figure 3A-B: the dye spreads throughout the hydrogel at pH 8, where fewer protonated amines are expected in the collagen material, whereas the same dressing exhibits a localised dye distribution at pH 5, in light of the increased content of protonated amines. The significant retention of the dye by the collagen film is further supported by the reversible colour shift observed following alternating sample incubation in fresh acidic and alkaline solutions over a 120-hour period (Figure 3C). These observations highlight the durability of the infection-sensing capability and minimise the risk that a solubility limit was reached by the dye in the aqueous solution.
Dye release testing in acidic conditions
Under acidic simulated healthy wound conditions (pH 5), samples of C-PCL released over 50 wt.% of the loaded BTB within the first hour (Figure 4A), compared to just 4.5 wt.% and 2.5 wt.% of dye released from C-PMMA-co-MAA and drop-cast samples, respectively.
After the 96-hour incubation period, 76 wt.% of BTB had been released from samples of C-PCL, compared to just 3 wt.% and 5 wt.% from the drop-cast TF*- and FD*-based samples, respectively, an observation that supports the previously described electrostatic complexation mechanism between BTB and the collagen hydrogel. Accordingly, these results argue against the possibility that the observed dye retention in the drop-cast collagen samples is due to a solubility limit being reached by the released dye in the McIlvaine solutions. PCL, by contrast, has a glass transition temperature well below room temperature [80], meaning that its polymer chains have increased mobility, so that diffusional release of additives is promoted. This is also supported by the close fit of the C-PCL release data to the Korsmeyer-Peppas model (R2 = 0.97, Figure 4C), whereby a release exponent lower than 0.5 was determined in both experimental conditions, in line with a diffusion-driven release mechanism. Unlike PCL, the low release of BTB observed in the acidic medium with samples of C-PMMA-co-MAA is mainly attributed to the incompatibility of PMMA-co-MAA with water at pH <7 [81], as well as its high glass transition temperature (Tg ~170 °C). [82] The polymer chains of PMMA-co-MAA are therefore expected to be more rigid and immobile than those of PCL in the tested experimental conditions, leading to a slower rate of diffusion from the fibres [82], as indicated by the poor fit (R2 ≥ 0.45) of the release data to the Korsmeyer-Peppas model (Figure 4C). No statistical difference in dye release after 96 hours (pH 5) was observed between drop-cast samples left to dry for 30 minutes and those left for 12 hours (p > 0.05) (Table S1 and Figure S5); further analysis was therefore conducted solely on samples produced using the less time-intensive method. There was also no statistical difference in dye release after 96 hours between the thin film and freeze-dried variants (p > 0.05), further indicating that the reduction in dye release is a function of molecular interactions rather than of sample geometry. Samples of D-TF*, D-TF*12, D-FD*, and D-FD*12 all had mean dye release values after 96 hours that were statistically indistinguishable (p > 0.05) from those of D-PMMA-co-MAA (Figure S5), which is known to provide a low rate of diffusion owing to its high glass transition temperature and rigid chains. It was also of interest to determine the extent of dye release when samples were completely submerged in the simulated wound fluid (pH 5), compared to the previously discussed release experiments, in which samples were in contact with, but not submerged by, the medium, as a more clinically descriptive scenario of the dressing environment. As expected, significant differences were observed between samples D-FD* and D-TF*, on the one hand, and their submerged counterparts, on the other (Tables S2 and S3). In terms of the fibrous constructs, the C-PCL samples afforded a statistically significant reduction in dye release after 96 hours when compared to the D-PCL fibres (p < 0.05, Figure S5, Table S1), indicating that the collagen wound contact layer retards the release of BTB from the top fibrous layer. On the other hand, there was no significant difference in dye release between C-PMMA-co-MAA samples and D-PMMA-co-MAA fibres after 96 hours (p > 0.05), which agrees with the incompatibility of this polymer with acidic aqueous environments.
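The Korsmeyer-Peppas fits referred to in this section can be reproduced with a short script. The sketch below is a minimal illustration, not the analysis actually run: the release data are invented placeholders loosely shaped like the C-PCL profile, and the model M_t/M_∞ = k·t^n is fitted with scipy.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder cumulative-release data (time in hours, fraction released)
t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])
frac = np.array([0.52, 0.60, 0.68, 0.72, 0.76])

def korsmeyer_peppas(t, k, n):
    return k * t**n

(k, n), _ = curve_fit(korsmeyer_peppas, t, frac, p0=(0.5, 0.3))
pred = korsmeyer_peppas(t, k, n)
r2 = 1.0 - np.sum((frac - pred) ** 2) / np.sum((frac - frac.mean()) ** 2)
print(f"k = {k:.3f}, n = {n:.3f}, R2 = {r2:.3f}")  # n < 0.5 points to diffusion-driven release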
Dye release testing in alkaline conditions
Under alkaline simulated infected wound conditions, sample C-PMMA-co-MAA showed substantial dye release (84 wt.% after 1 hour), in line with the well-known solubility switch of PMMA-co-MAA above pH 7 [83] and the consequent complete solubilisation of the corresponding fibres. As was the case under acidic simulated healthy wound conditions (pH 5), samples of C-PCL released over 50 wt.% of the loaded BTB after just 1 hour. Here, the movement of the rubbery PCL chains enables the dye to diffuse out of the fibres and into the McIlvaine solution, as supported by the close fit of the release data to the Korsmeyer-Peppas model and the low release exponent (R2 = 1, n = 0.07; Figure 4C). In comparison, dye release is significantly lower in the drop-cast samples, with only up to 9 wt.% released after 1 hour (Figure 4C). After the 96-hour incubation period, 76 wt.% and 91 wt.% of BTB had been released from C-PCL and C-PMMA-co-MAA, respectively, in contrast to the 13 wt.% and 15 wt.% release observed with the drop-cast TF*- and FD*-based samples. Consequently, the drop-cast samples afford over an 80% reduction in dye release compared to both C-PCL and C-PMMA-co-MAA. To further understand the release kinetics, the release data were fitted with the zero-order and Korsmeyer-Peppas models (Figure 4C). According to zero-order kinetics, a drug is delivered from the carrier at a constant rate, while non-linear release is described by the Korsmeyer-Peppas model. The zero-order model correlated well (R2 > 0.7) with the release from samples of TF* at pH 5 and from samples of C-PCL at both pH 5 and pH 8. Comparatively, the Korsmeyer-Peppas model correlated well with seven of the eight release data sets collected, with an average R2 value of 0.84 (0.89 when the anomalous result is discounted; Figure 4C). The values of the release exponent n are also lower than one in all cases, indicating a concentration gradient that decreases over time and thereby suggesting that the release follows first-order (or pseudo-first-order) kinetics, in line with a diffusion-driven release.
Chemical composition and thermomechanical properties
ATR-FTIR spectroscopy was subsequently employed to elucidate the chemical composition of the drop-cast and electrospun samples and the development of any secondary interactions between the halochromic dye and the carrier polymers (Figure 5). The ATR-FTIR spectra of TF* and D-TF* confirmed the main spectroscopic bands associated with type I collagen (Figure 5A), i.e. amide I (1629-1655 cm−1), amide II (1541-1598 cm−1), and primary amines (3290-3320 cm−1). Comparable ATR-FTIR spectra were also observed for samples of F-PCL, D-PCL, and C-PCL (Figure 5B), in which the ester bond was detected at 1719 cm−1. At the same time, a slight red shift was detected in D-TF* samples compared to TF* samples: in the amide I peak, from 1655 to 1629 cm−1; in the amide II peak, from 1598 to 1541 cm−1; and in the primary amine peak, from 3320 to 3290 cm−1. This observation supports the formation of secondary interactions, mainly hydrogen bonding and electrostatic interactions, between the collagen network and the BTB, in agreement with the retention of the dye in the network over the 96-hour period investigated (Figure 3, Figure 4C, and Figure S5). This contrasts with the spectra in Figure 5B, where there is no red shift after the encapsulation of BTB in the PCL fibres.
This indicates the absence of secondary interactions, in agreement with the significant dye release observed from the electrospun constructs (Figure 4). The significantly increased dye retention in D-TF* samples in both acidic and alkaline simulated wound environments motivated further characterisation of this material, aiming to establish the structure-property relations responsible for the above finding and to assess its feasibility as a safe wound theranostic (Figure 6). Circular dichroism (CD) spectra were recorded for both TF* and D-TF* samples to elucidate the short-range protein organisation in the crosslinked network and discern any interactions between the dye and the protein (Figure 6A). Typically, CD spectra of type I collagen solutions display a positive peak at about 221 nm, attributed to the presence of right-handed triple helices, and a negative peak at about 197 nm, attributed to the left-handed polyproline-II chains. [84] Both peaks were detected in the CD spectra of both TF* and D-TF* (Figure 6A), supporting the validity of CD in assessing the protein organisation in insoluble crosslinked networks [85,86], similar to the case of diluted collagen solutions. [64,87] Furthermore, these results suggest that the addition of BTB does not affect the 4VBC-functionalised triple helix organisation in the UV-cured collagen network. To quantify the triple helix organisation, the magnitude ratio between positive and negative CD peak intensities (RPN) of the TF* sample was measured to be ~12. Following dye drop-casting, the RPN was measured to be ~9, indicating a 25% loss in collagen triple helix organisation following UV curing in the presence of BTB.
Differential scanning calorimetry (DSC) was also conducted on TF* and D-TF* samples for further confirmation of dye-protein interactions. Figure 6B shows a thermogram of the two samples from -10°C to 90°C, whereby the lack of a large endothermic peak around 0°C suggests that the hydrogel samples were prepared correctly, i.e. as a water-swollen network with minimal excess free water. Since the bound water forms hydrogen bonds with the collagen network rather than with other water molecules, it is unable to freeze when the temperature is lowered below 0°C. [89] A denaturation peak was seen in the TF* sample at around 60°C, whereas the D-TF* sample presented a denaturation peak at 70°C, suggesting that the addition of the dye contributes to an increased thermal stability of the protein. The long trailing endothermic peak from 75°C seen in the D-TF* samples is likely due to the onset of dissociation of the water molecules. Here, the presence of multiple hydrogen bonds in the hydrogel network delays the evaporation of water and is likely responsible for the broad endothermic transition over 100°C. [90]
Beyond the thermal behaviour, the mechanical properties of TF* and D-TF* hydrogels were determined by rheology (Figure 7). The oscillatory shear data show that the storage modulus (G') is dominant in both samples across the range of strains (Figure 7A) and frequencies (Figure 7B) examined, with values of G' and loss modulus (G'') of up to ~8.5 kPa and ~1 kPa, respectively, in samples of D-TF*. This more than eight-fold difference between G' and G'' indicates that both hydrogels behave as elastomeric materials regardless of the presence of BTB.
This, together with the absence of a crossover frequency, agrees with the fact that both drop-cast and BTB-free hydrogels are permanently chemically crosslinked following UV curing. [81,87] The similarity of the frequency sweep profiles of TF* and D-TF* samples reveals that the addition of BTB does not impact the rheological properties of the hydrogel. While the introduced BTB molecules have been shown to mediate dye-matrix secondary interactions (Figure 5A) leading to increased thermal stability (Figure 6B), these appear to be too weak or too localised to impart a detectable change in macroscale mechanical properties.
Extract cytotoxicity study
A resazurin assay was subsequently conducted to evaluate the cytotoxicity of the drop-cast collagen samples. For completeness, the cytotoxicity of both D-TF* and D-FD* towards murine L929 fibroblasts was examined over a 7-day cell culture period to determine whether the processing route had any impact on cell viability (Figure 8). Extracts of the samples were supplemented with an addition of 80 µg of BTB, corresponding to an approximately eight-fold increase in dye content compared to the quantity of BTB released from the samples after 96 h (Figure 4). The average viability of the cells exposed to the extract of D-TF* from the pH 5 McIlvaine solution was 97±0% after 1 day, 101±7% after 3 days, and 90±1% after 7 days. Further evidence of cytocompatibility is seen in Figure 9, which shows the Live/Dead micrographs on day 7 of cell culture. L929 fibroblasts were found to proliferate and displayed a healthy phenotype when exposed to extracts of samples from both acidic and alkaline McIlvaine solutions, further indicating the tolerability of the BTB and any chemical residues extracted from the drop-cast collagen material.
The drop-cast samples retained the encapsulated dye while still displaying the expected colour shift following alternating sample incubation in either acidic or alkaline conditions. This surprising dye retention is attributed to electrostatic interactions between the protonated amines of the collagen network and the negatively charged dye molecule, which were not observed in the fast-releasing electrospun construct. The addition of BTB did not affect the morphology and mechanical properties of the collagen hydrogels, while extracts of the drop-cast samples afforded an average cell viability of 92% following a 7-day culture of L929 mouse fibroblasts. This simple approach could therefore help to support wound healing, reduce hospitalisation time, and enable informed and personalised variations in clinical care. Having demonstrated the infection detection functionality of this collagen-based dressing prototype, the next steps of this research will focus on systematic colour measurements over small pH changes, aiming to develop a reliable calibration curve to quantify and predict infection levels.
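To make the release-kinetics fitting used above concrete, the following is a minimal Python sketch of a Korsmeyer-Peppas fit with scipy; the time points, release fractions and initial guesses are illustrative placeholders, not the study's raw data.

import numpy as np
from scipy.optimize import curve_fit

# Korsmeyer-Peppas model: fractional release M_t/M_inf = k * t**n,
# conventionally fitted to the early portion (<~60%) of cumulative release.
def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

# Illustrative data (hours, fractional BTB release) -- not the study's values.
t = np.array([1.0, 4.0, 8.0, 24.0, 48.0, 96.0])
release = np.array([0.52, 0.58, 0.62, 0.68, 0.72, 0.76])

(k, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(0.5, 0.1))

# Coefficient of determination of the fit
residuals = release - korsmeyer_peppas(t, k, n)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((release - release.mean())**2)
print(f"k = {k:.3f}, n = {n:.3f}, R^2 = {r_squared:.3f}")

A fitted release exponent n well below ~0.45 is commonly read as quasi-Fickian, diffusion-dominated release, in line with the low n values reported above.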
2023-02-28T06:42:16.822Z
2023-02-26T00:00:00.000
{ "year": 2023, "sha1": "c02674374ed379aee5ca76c4e0eda5912587eec1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c02674374ed379aee5ca76c4e0eda5912587eec1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
225614765
pes2o/s2orc
v3-fos-license
Building a multilingual ontology for education domain using MOnto method
ABSTRACT
INTRODUCTION
Ontology enables the natural language processing of data in an efficient way. It retrieves information based on knowledge and conceptualizes information in a formal way. Enormous amounts of information are available over the internet in a specific language, so it is essential to provide the information in different natural languages to benefit multi-language users. Ontologies play a vital role in providing knowledge-based information systems. An ontology is a "formal, explicit specification of a shared conceptualization" [1]. It is a collection of concepts, properties, relations, instances, axioms and rules, which can be represented as (Ontology) O = {C, P, R, I, A}. 'C' represents the classes or concepts of the domain. 'P' signifies the properties of the concepts. 'R' denotes the binary relations between the concepts (1-1, 1-M, M-M). 'I' denotes the instances of the concepts. 'A' represents axioms and rules which are used as a basis for reasoning [2]. In an ontology, a set of terms describing a domain is arranged hierarchically and can be used as a skeletal foundation for a knowledge base [1]. This nature of ontology enables the developer to implement semantics-based personalized learning applications.
The ontology developed for the educational domain contains the knowledge for developing an intelligent learning system. Monolingual ontology applications for learning systems can be developed by adopting the methodology of [3]. Ontologies are used to represent knowledge reflecting the relevant information about concepts and relations. Many methodologies have been proposed to build ontology applications, each with its own pitfalls. Modeling, evaluating and maintaining ontologies are complex tasks in most applications, such as healthcare, business, commerce and many others. Many domains need to serve users of different languages. For example, users of government services, learning sites, education domains and healthcare domains demand access to information in their local language. In such scenarios, ontologies play a vital role in providing knowledge-based information. Numerous methods and tools have been proposed for building monolingual ontologies. Very few methods, such as collaborative platforms, have been proposed to build multilingual ontologies, and they are limited to some languages. This paper proposes a new methodology to build multilingual ontologies. The rapid growth of internet users creates demand for information in their natural languages, which leads to the development of multilingual applications. The aim of this paper is to give an idea of how to develop multilingual ontologies for the education domain using the proposed MOnto methodology. New algorithms are proposed for merging and mapping ontologies developed in different natural languages. The paper is organized as follows: an overview of ontology-based learning systems is given in section 2; section 3 proposes a new methodology to build multilingual ontologies; conclusions are presented in section 4.
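As an illustration of the O = {C, P, R, I, A} structure described above, the following is a minimal Python sketch using the rdflib library; the namespace, class names and the language-tagged labels (including the Tamil translation) are illustrative assumptions, not components of the MOnto method itself.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EDU = Namespace("http://example.org/edu#")  # illustrative namespace
g = Graph()
g.bind("edu", EDU)

# C: concepts (classes) of the education domain
g.add((EDU.Course, RDF.type, OWL.Class))
g.add((EDU.Topic, RDF.type, OWL.Class))

# P/R: a binary relation between concepts (1-M: one course covers many topics)
g.add((EDU.covers, RDF.type, OWL.ObjectProperty))
g.add((EDU.covers, RDFS.domain, EDU.Course))
g.add((EDU.covers, RDFS.range, EDU.Topic))

# I: an instance, labelled in two natural languages (English and Tamil)
g.add((EDU.Programming101, RDF.type, EDU.Course))
g.add((EDU.Programming101, RDFS.label, Literal("Introduction to Programming", lang="en")))
g.add((EDU.Programming101, RDFS.label, Literal("நிரலாக்க அறிமுகம்", lang="ta")))  # illustrative translation

print(g.serialize(format="turtle"))

Axioms and rules (A) would typically be added on top of such a graph with OWL restrictions or a rule language such as SWRL.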
STATE-OF-THE-ART OF ONTOLOGY-BASED LEARNING
Learning ontologies are used in software agents, language-independent applications and problem-solving methods. Ontology applications can be developed using ontology development languages (OWL, RDF, Turtle, triples and so on) and ontology development tools (Protégé, OntoEdit, Chimaera and so on). Learning ontology applications can be implemented with two different strategies: i) an ontology of learning resources and ii) an ontology of teaching strategy [4]. The ontology of learning resources is used for teaching knowledge modeling in an e-learning system. The ontology of teaching strategies captures a series of macro teaching designs and micro teaching activities. An ontology for learning may include personalized learning paths [5], which are used to improve the effectiveness of the learning system. Personalization of the e-learning process for the chosen target group is achieved by setting up a learning path for each user according to their profile. Some models have been proposed to develop web-based e-learning systems [6]. These models have been developed based on semantic web technologies and e-learning standards. They provide two kinds of content to learners, namely i) learning content and ii) assessment content, offering a learning service and an assessment service, respectively. These models use a knowledge-based information retrieval approach to retrieve learning resources. The learning resources are described by means of metadata to implement the knowledge base. Some ontology-based learning systems have been developed to store and retrieve semantic metadata to provide better results to the learner along with personalized learning [7]. A systematic approach has been proposed for the development of semantic web services for the e-learning domain. The following steps [8] are used to develop an ontology for e-learning: i) determining the scope of the domain, ii) reusing existing ontologies, iii) enumerating important terms in the ontology, iv) defining the classes and their hierarchy, v) defining the class properties, vi) defining the facets, vii) creating instances and viii) checking for anomalies. Ontologies can be evaluated using the software risk identification ontology (SRIONTO) to identify problems and risks [9]. The required concepts, the semantic descriptions of the concepts and the interrelationships among the concepts, along with all other ontological components, have been collected from various literature sources. E-learning resources can be collected using frameworks [10] that gather e-learning multimedia resources from the internet and automatically link them with topics.
Ontology-based approaches can be used to develop personalized e-learning [11], creating adaptive content based on a learner's abilities, learning style, level of knowledge and preferences. In this approach, ontologies are used to represent the content model, the learner model and the domain model. The content model describes the structure of courses and their components. The learner model describes the learner characteristics required to deliver tailored content. The domain model consists of classes and properties that define domain topics and the semantic relationships between them. The system assesses the learner's performance by conducting tests whose results are evaluated; it recognizes changes in the learner's level of knowledge as they progress, and the learner model is updated accordingly. However, most learning applications are developed either in English or in the developer's language, which becomes a hurdle for users of other languages. Nowadays, internet users prefer to share their knowledge in their natural languages, which drives the emergence of technologies supporting different natural languages. In the current scenario, enormous amounts of learning material are available over the web, allowing users to benefit from anywhere in the world. Though users can access large amounts of information, they still long for information in their own languages. This motivates us to develop multilingual ontology applications to benefit users of different natural languages. To this end, the MOnto methodology is proposed to build multilingual ontologies.
MONTO METHODOLOGY TO DEVELOP MULTILINGUAL ONTOLOGIES
A methodology is a "comprehensive, integrated series of techniques or methods creating a general systems theory of how a class of thought-intensive work ought to be performed" [12]. A methodology consists of methods and techniques, where a method is a process of performing some task and a technique is a procedure used to achieve a given objective. This research work proposes the MOnto methodology to build multilingual ontology applications. The methodology consists of five phases, as given in Figure 1 (MOnto methodology for building multilingual ontology): input, building MO, ontology mediation, retrieval, and visualization of the ontology.
Phase 1: Input
This phase initializes the content to be considered for building ontologies. A set of methods and techniques is used for building an ontology from distributed and heterogeneous knowledge and information sources. Information can be retrieved from different sources, such as open corpora, closed corpora and existing ontologies. All the sources fall under three categories: unstructured, semi-structured and structured sources. Unstructured sources involve natural language processing (NLP) techniques, morphological and syntactic analysis, etc. Semi-structured sources elicit an ontology from sources that have some predefined structure, such as an extensible markup language (XML) schema. Structured sources provide concepts and relations from knowledge contained in structured data, such as databases. A closed corpus is text from text books, study materials, etc.
An open corpus refers to information available on the web. A corpus is used to represent the ontology source, with a set of techniques applied to extract knowledge from the text. In this phase, the scope and domain for building the MO are identified. Before building a new ontology for the specified domain, it is important to check whether an ontology is already available for that domain; in that case, the existing ontology should be considered for reuse and re-engineering when building the MO. The sources for building the MO are collected as given in Table 1 (document matrix for collecting resources in different natural languages). The developer has to identify the domain for the MO and collect information from various sources in different natural languages. The collected resources are analyzed and classified in this initial phase.
Phase 2: Building MO
Collected terms are analyzed and irrelevant terms are filtered. The terms are classified hierarchically, the relations between the terms are established, and vocabularies of the terms are formulated. Using this laddering structure, ontologies are developed in different natural languages: N ontologies OL1, OL2, ..., OLn (where OL stands for ontology language) are developed for the natural languages using the hierarchically structured terms, as shown in Figure 2.
Phase 3: Ontology mediation methods
Ontology mediation enables the reuse of data across applications on the semantic web and the sharing of data between heterogeneous knowledge bases. The major kinds of ontology mediation are mapping and merging. Ontology mapping identifies the correspondences between terms, while ontology merging creates a new ontology as the union of two or more existing ontologies. In this phase, the ontologies developed in different natural languages are merged into a single ontology, and the correspondences between the terms of the different natural languages are established. For example, let OL1, OL2, ..., OLn be the ontologies developed in different natural languages for the selected domain. The ontologies are merged into a single ontology,
ML = {OL1 ∪ OL2 ∪ ... ∪ OLn},
and correspondences between terms in different natural languages are created, Lnt_i ↔ Lnt_k, where i and k range over the terms in the different languages. Ontologies developed in different natural languages are thus merged into a single ontology to form the multilingual ontology application. Formally, this can be represented as MO = {L, X}, where MO is the multilingual ontology, L a language, and X a set of elements. X is a collection of elements or terms which integrates the sources of the same domain in different natural languages. Many tools, such as OntoClean, FCA-Merge and OBSERVER, are available to merge ontologies. The merged ontology is composed of a set of terms in different natural languages. Ontology merging can be done using the SMART algorithm [13]; however, this algorithm deals with merging and aligning monolingual ontologies of a domain. To overcome this, algorithms for ontology mediation are proposed for merging and mapping ontologies [14]-[21]. This research adapted those algorithms for merging and mapping multilingual ontologies.
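The following is a minimal Python sketch (illustrative only, not the paper's proposed MOnto algorithms) of the merging idea above: taking the union of ontologies built in different natural languages and grouping terms through cross-language correspondences Lnt_i ↔ Lnt_k. The Tamil labels are illustrative translations.

# Two monolingual term hierarchies for the same education domain.
monolingual_ontologies = {
    "en": {"course": {"parent": None}, "topic": {"parent": "course"}},
    "ta": {"பாடநெறி": {"parent": None}, "தலைப்பு": {"parent": "பாடநெறி"}},
}

# Cross-language correspondences, e.g. produced by a mapping/translation step.
correspondences = {("en", "course"): ("ta", "பாடநெறி"),
                   ("en", "topic"): ("ta", "தலைப்பு")}

def merge(ontologies, mappings):
    """Union of all terms; each merged concept keeps its per-language labels."""
    merged = {}
    for lang, terms in ontologies.items():
        for term in terms:
            # Use the source side of a correspondence as the canonical concept id.
            key = next((src for src, dst in mappings.items()
                        if (lang, term) in (src, dst)), (lang, term))
            merged.setdefault(key, set()).add((lang, term))
    return merged

for concept, labels in merge(monolingual_ontologies, correspondences).items():
    print(concept, "->", sorted(labels))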
Phase 4: Multilingual information retrieval using SPARQL
Information retrieval is the process of retrieving or extracting information from a repository based on the user's needs and query. Retrieving information in various languages is known as multilingual information retrieval. For ontologies, the SPARQL query language is used to extract knowledge from the ontology repository. RDF language tags are used in SPARQL queries to filter results by language. This phase enables users to extract knowledge in their own languages using SPARQL, which provides the functionality to retrieve information in different natural languages. A sample SPARQL query is given as follows:

PREFIX scs: <http://www.shctptcs.org#>
SELECT ?subject ?object
WHERE {
  ?subject scs:verse ?object .
  FILTER (lang(?object) = "ta")
}

The given SPARQL query uses FILTER to restrict the results to information in a specified language.
Phase 5: Visualizing multilingual ontology
Visualization is the representation of text or objects in the form of an image or chart. It enables readers to capture knowledge effectively. An ontology is a hierarchically structured model, for which numerous visualization tools (OWLGrEd, NavigOWL, IsAViz, etc.) and plug-ins (OntoGraf, OWLViz, CropCircles and so on) exist. The existing ontology visualization tools all lack support for visualizing non-English languages, and some of them require additional configuration to support different natural languages. In this phase, a new plug-in known as MLGrafViz is proposed to visualize ontologies in different natural languages. For example, the passage given in Figure 3(a) is represented diagrammatically in Figure 3(b); this illustrates that the graphical representation of the text is clearer than the passage itself, which the user may find vague to read.
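Returning to Phase 4, a minimal rdflib sketch (Python) shows how a language-filtered query like the one above could be executed; the triple data and literals are illustrative.

from rdflib import Graph, Literal, Namespace

SCS = Namespace("http://www.shctptcs.org#")  # namespace taken from the sample query
g = Graph()
g.add((SCS.item1, SCS.verse, Literal("hello", lang="en")))
g.add((SCS.item1, SCS.verse, Literal("வணக்கம்", lang="ta")))  # illustrative Tamil literal

query = """
PREFIX scs: <http://www.shctptcs.org#>
SELECT ?subject ?object
WHERE {
  ?subject scs:verse ?object .
  FILTER (lang(?object) = "ta")
}
"""

for subject, obj in g.query(query):
    print(subject, obj)  # only the Tamil-tagged literal is returned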
MLGrafViz is developed using Java and Graphviz algorithms. Initially, it allows the user to create a new ontology or to import an existing ontology into the Protégé workspace. The imported ontology is displayed in a class browser. MLGrafViz enables the user to select the language in which to visualize the ontology. The request is submitted to the Google Translate API, which performs statistical machine translation, and the terms are translated into the desired natural languages. Google Translate is an open translator service used to translate text, speech, images and videos from a source language to a target language; it provides an API which allows the developer to build extensions and software to translate source content. Google Translate uses statistical rather than rule-based analyses. Since an ontology consists of hierarchically structured terms, a statistical machine translator provides better results than a rule-based translator, whereas rule-based machine translation is suited to translating passages grammatically. Finally, the translated terms are displayed in the MLGrafViz panel. MLGrafViz enables the user to visualize the ontology in different natural languages without changing the core ontology structure, as depicted in Figure 4.
CONCLUSION
We have proposed the MOnto methodology to develop multilingual ontology applications for the education domain. New algorithms are proposed to perform merging and mapping of multilingual ontologies. This method allows users to learn a subject in their own natural language, which gives a better understanding of the subject. This research work identifies the need for building multilingual applications, which play a vital role in the educational domain. If learning materials are available in different natural languages, learners will feel comfortable learning, and learning through one's natural language encourages the learner to learn many things. In future, multilingual applications can be implemented for different domains, such as healthcare. It will also be important to provide evaluation metrics and methods to validate multilingual ontologies.
Figure 2. Illustration of building ontologies for two natural languages (Tamil and English): (a), (b).
Figure 3. Graphical representation: (a) steps involved in programming (text); (b) visualization of the steps involved in programming (diagrammatic representation).
Table 1. Document matrix for collecting resources in different natural languages.
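For reference, the translation step that MLGrafViz delegates to the Google Translate API could look roughly like the Python sketch below; it assumes the google-cloud-translate package (v2 client) and valid credentials, and is not the plug-in's actual Java implementation.

# Hypothetical sketch of translating ontology term labels via the
# Google Cloud Translation v2 client (pip install google-cloud-translate);
# requires GOOGLE_APPLICATION_CREDENTIALS to point at a service account key.
from google.cloud import translate_v2 as translate

def translate_labels(labels, target_language="ta"):
    """Translate a list of ontology term labels into the target language."""
    client = translate.Client()
    results = client.translate(labels, target_language=target_language)
    return {r["input"]: r["translatedText"] for r in results}

if __name__ == "__main__":
    print(translate_labels(["course", "topic", "lesson"], target_language="ta"))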
2020-06-18T09:10:34.540Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "e18a14679347bfb8da0249c38a693bfa80a6f65d", "oa_license": null, "oa_url": "https://doi.org/10.11591/csit.v1i2.p47-53", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3f3238955580c84ff57d28564286dbcc5c30f4f0", "s2fieldsofstudy": [ "Education", "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
229495112
pes2o/s2orc
v3-fos-license
Exploring foreign entrepreneurial ecosystems through virtual exchange
This case study reports a Virtual Exchange (VE) between students at Cracow University of Economics (Poland) enrolled in business courses and students from the High Institute of Technological Studies of Béja (Tunisia) enrolled in an entrepreneurship course. The main aim of the project was to enhance students' awareness of similarities and differences between the Polish and the Tunisian entrepreneurial ecosystems. The goals also included improving language skills, Information and Communication Technology (ICT) literacy and teamwork, and increasing students' self-confidence. The chapter describes the schedule of the project, the tasks that students accomplished, and the technological and communication tools that were used. Finally, the study includes conclusions and suggestions for future initiatives.
Context
When it comes to business studies in general and entrepreneurship courses in particular, it is important for students to acquire knowledge about business practices in other countries. That is how they can broaden their horizons and get inspired. Moreover, in the globalized and interconnected modern business world, it is of great importance that students develop their cultural intelligence to improve their employability skills. Cultural intelligence is the capability of an individual to function effectively in situations characterized by cultural diversity (Ang, Van Dyne, Koh, & Ng, 2004). Thomas and Inkson (2017) defined it as "the capacity to interact effectively across cultures" (p. x). This is the context in which the VE project described here was developed, with the main aim of enhancing students' awareness of similarities and differences between the Entrepreneurial Ecosystems (EEs) of the two countries involved: Poland and Tunisia.
Aims and description of the project
The VE between Cracow University of Economics (Poland) and the High Institute of Technological Studies of Béja (Tunisia) took place from April 29th till June 13th, 2019. Twenty-two students took part in this exchange, 11 from Cracow and 11 from Béja. Participation was voluntary for students on both sides, to ensure that we worked with highly motivated students who were eager to cooperate with students from other countries. At the same time, participation would count toward their final grades. The students from Cracow were both undergraduates and postgraduates studying management, international business, tourism, and recreation, while the students from Béja were undergraduates studying computer system networks, for whom the exchange was part of the entrepreneurship course. As teachers, we both took part in the training course provided by Unicollaboration in the context of Erasmus+ VE, which helped us become familiar with the world of VE and join the VE community. We therefore decided to develop a VE project together, with the support of the trainers. While Gosia Marchewka wanted to run a VE to let her students experience work in an international virtual team, as they usually do not have such an opportunity during regular classes, Nadia Cheikhrouhou wanted to offer her students a cost-effective international experience which they lack in their studies. In fact, all the students in Béja are from Tunisia and have never experienced international mobility because of the absence of this opportunity in their institution.
As mentioned above, the main aim of the VE was to enable students to familiarize themselves with the EE in a different country to their own. That objective fits into the entrepreneurship course in Béja, since a part of this course is dedicated to exploring the EE in Tunisia. The VE was important because students cannot develop their critical thinking unless they know how foreign EEs function (in this case the Polish EE). For students of Cracow University of Economics, the case was different, as they were all participating in a variety of courses (with different learning goals); for them it was an extracurricular activity they could participate in for self-development. It is also important to mention that, beyond achieving the above-mentioned learning goals, this VE aimed at improving students' employability skills. In the 2019 World Development report entitled The changing nature of work, the World Bank Group (2019) mentions that "three types of skills are increasingly important in labor markets: advanced cognitive skills such as complex problem-solving, socio-behavioral skills such as teamwork, and skill combinations that are predictive of adaptability such as reasoning and self-efficacy" (p. 3). That is especially important given that, in Tunisia, 28% of higher education graduates were unemployed in the year 2019 (National Institute of Statistics in Tunisia 3), which is a relatively high rate. By offering the students such an opportunity, we expected them to gain the following competencies required on the job market in both Poland and Tunisia:
• enhanced language skills, reflected in students' ability to speak English during conference sessions and to write messages in English to project partners;
• enhanced ICT literacy, reflected in students' ability to post content, images, videos, and different files on a virtual platform called Padlet;
• enhanced teamwork skills, reflected in students' ability to work on a cross-cultural project: sharing tasks, coordinating efforts with project partners, cooperating, helping each other, and respecting deadlines; and
• increased self-confidence, reflected in students' ability to communicate in an intercultural environment and to build an international network.
Pedagogical design and tools
The VE between Béja and Cracow lasted one and a half months: it started at the end of April and ended by mid-June 2019. All tasks and students' deliverables were posted on a private Padlet that was created by the teachers and shared with the participants. Students were expected to accomplish five tasks.
• Task 1: Introducing themselves; posting a photo, and commenting and/or liking other posts. Of course, both instructors were the first to introduce themselves on Padlet and kick off the VE project.
• Task 2: Team building; the list of six teams (five teams of four students and one team of two students) was posted on Padlet by the instructors with the names and emails of each team member. The teams were formed heterogeneously in terms of gender and nationality to allow for maximum diversity. Team members were asked to get in touch via email to set an appointment for a video-conference using a video-conferencing tool of their choice. During the video-conference they were supposed to get to know each other in a deeper way, choose a team name, and prepare a flower (teams of four) or a butterfly (team of two).
In the center of the flower or the butterfly they were asked to put what the team had in common (personality traits, hobbies, interests…); then each student had to put what is unique about her or him and different from the others in the team, either on a petal (Figure 1) in the case of the flower, or on a wing in the case of the butterfly.
• Task 3: EE; each team had to work with the Isenberg model (Isenberg, 2010), which describes the EE of a country based on six interrelated aspects (policy, finance, culture, supports, human capital, and markets), to discuss differences and similarities between the Polish and Tunisian EEs.
• Task 4: Facilitated session; participating in a facilitated dialogue session. These are a form of intercultural conversation between the students in which a trained dialogue facilitator, provided by the Unicollaboration team, helps them overcome communication barriers and engage in a productive conversation. This facilitated session aimed to prepare them for the challenges of the next task, which involved decision making that includes all voices and negotiating choices within an intercultural team.
• Task 5: Presentation of the results; preparing a poster or a video to summarize each team's findings about similarities and differences between the Tunisian and Polish EEs.
Table 1 below summarizes the learning goals of each task, the deliverables, the tools used to accomplish these tasks, and the targeted competencies to be acquired by students. Regarding the General Data Protection Regulation (GDPR) and student privacy, when registering for the project, students were informed about the processing of their personal data to the extent necessary to participate in the project and were asked to give their consent.
Evaluation, assessment, and recognition
Polish students who successfully accomplished the project received a grade one step higher for the workshop part of one of the courses they attended. Students in Béja were graded for their participation in the project, and this grade contributed 20% of their final mark for the entrepreneurship course. This percentage of 20% is attributed in the Tunisian system to projects accomplished by students in the frame of their different courses. The grade earned by the students took into consideration the quality of the poster, assessed based on its appeal (design, layout, neatness), the richness of its content, the integration of graphics related to the topic, and the presence of the different requirements assigned in the task. The commitment of the students throughout the project and their participation in the facilitated session were also taken into consideration. Moreover, students who successfully completed all the tasks earned Erasmus+ VE badges. Tunisian students were happy with their badges, and intended to display them on their LinkedIn profiles and in their CVs. They believe that such recognition adds value to their CVs, and they plan to mention this experience during job interviews to stand out from the crowd. For the students in Poland, the badges are also of great value when applying for international exchange programs at Cracow University of Economics, since they gain extra points in the application process.
Lessons learned and conclusion
After the VE, both teachers had informal conversations with their students to reflect on the experience. According to the Tunisian students, this exchange opened their eyes to cultural differences and how these impact the way people perceive things and the outer world.
They experienced the challenges of intercultural communication and learned about practices typical of the education system in Poland. For example, they appreciated the fact that Polish students work after school, which is rare in the case of Tunisian students. Additionally, they were happy with the new relationships they built. The greatest disadvantage for them was the absence of the Polish students during the facilitated session. Indeed, the Tunisian students did their best to be present even though they were busy preparing the final projects for their different courses; they therefore felt that these efforts were worthless, which caused frustration. In addition, the absence of the Polish students was perceived as a sign of disrespect towards their Tunisian partners. The overall benefits listed by the students in Poland were similar. They believed that the project was a good opportunity to learn what real-life cross-cultural communication may be like. For many of them, delivering tasks on time was extremely important, and some were stressed that they could not meet deadlines. In some cases, they were disappointed with the lack of response from their Tunisian teammates, as they were expecting prompt reactions. Since the notion of time is perceived differently across cultures, a prompt reaction could take longer for the Tunisian students, especially during the month of Ramadan, when they had extra chores such as preparing special meals and going to the mosque after breaking the fast to pray the Tarawih. This conflictual situation put their problem-solving competencies to the test, and though both teachers had to intervene to clarify misunderstandings, it was a learning point for the students: not to jump to conclusions, and to set ground rules within teams from the beginning to avoid conflicts. Based on the feedback from students and a follow-up conversation between both instructors, we decided to implement our second VE, which is running at the time we are writing this case study, despite the very unusual circumstances of the Coronavirus epidemic, which has resulted in the reorganization of work and social isolation. We drew upon the first exchange to improve the experience for the students, implementing the following changes:
• students have more time for the VE; this year our project will last three months, compared to the one and a half months of the first VE;
• tasks are kept as simple as possible; the value is in the process of collaboration, and less is more;
• more tasks were implemented at the introductory stage to help in the development of the teams: this year, Tunisian and Polish students will be presenting their countries, regions, and institutions, as well as business etiquette, food, and important holidays/celebrations in their respective countries;
• different conflictual situations may occur; the difficulties encountered in the previous VE helped us discuss in advance with students how to avoid them and how to face them; and
• introduction of the e-portfolio provided by the Unicollaboration team: although we need to customize it, it is a useful form of assessment of the whole experience, as well as a record of the competencies gained by students throughout the project.
An additional benefit of organizing a VE for our students is the possibility of scientific cooperation. While working on the materials for students, we discovered topics that we are both interested in, and we have expanded our cooperation to an academic level. Right now, we are working on the problems of EEs and their impact on start-ups.
2020-11-19T09:14:53.338Z
2020-11-23T00:00:00.000
{ "year": 2020, "sha1": "cabff69204ae97bf9547cc1fbe793ba9702714f8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.14705/rpnet.2020.45.1117", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "584df3c0f4c351822c00adb726972549cfc70a1d", "s2fieldsofstudy": [ "Business", "Education", "Computer Science" ], "extfieldsofstudy": [ "Business" ] }
12477394
pes2o/s2orc
v3-fos-license
Effects of Different 1-34 Parathyroid Hormone Dosages on Fibroblast Growth Factor-23 Secretion in Human Bone Marrow Cells following Osteogenic Differentiation
The importance of fibroblast growth factor (FGF)-23 as part of a hormonal bone-kidney axis has been well established. Lately, FGF-23 has been suggested as an independent risk factor of death in patients on chronic hemodialysis. Hyperparathyroidism is a common feature of advanced kidney failure or end-stage renal disease. The independent effect of elevated parathyroid hormone (PTH) levels on FGF-23 secretion is still a matter of debate and has not yet been studied in an in vitro model of human bone marrow cells (BMC) during osteogenic differentiation. BMC from three different donors were cultivated for 4 weeks in cell cultures devoid of vitamin D, either without 1-34 PTH or with PTH concentrations of 10 or 100 pmol/L, respectively. After 28 days, protein expression of the cells was determined by immunocytochemical staining, whereas real-time polymerase chain reaction served to analyze gene expression of several osteoblastic (osteocalcin, RANKL, Runx-2 and ostase) and osteoclastic markers (RANK, TRAP-5b). The concentrations of FGF-23, ostase and TRAP-5b were determined by ELISA at weeks 2, 3 and 4. We found a basal expression of FGF-23 with no increase in FGF-23 secretion after stimulation with 10 pmol/L 1-34 PTH. Stimulation with 100 pmol/L PTH resulted in an increase in FGF-23 expression (14.1±3.6 pg/mL with no PTH, 13.7±4.0 pg/mL with 10 pmol/L, P=0.84, and 17.6±3.4 pg/mL with 100 pmol/L, P=0.047). These results suggest a vitamin D- and PTH-independent FGF-23 expression in human BMC after osteogenic stimulation. As only higher PTH levels stimulated FGF-23 expression, a threshold level might be hypothesized.
Introduction
Fibroblast growth factor (FGF)-23 is composed of 251 amino acids and has a molecular weight of 26 kDa. It was first isolated in the antero-lateral thalamic nucleus of the brain. 1 FGF-23 was shown to be a critical pathogenic factor in several rare genetic disorders and in tumor-induced osteomalacia (TIO). 2,3 The main source of synthesis is thought to be bone cells of the osteoblast-osteocyte lineage. 4 One of its major physiological actions is maintaining a normal phosphate and vitamin D balance. The hormone binds to a family of FGF receptors requiring the transmembrane protein klotho as a co-factor to facilitate receptor activation. 5 Animal studies in mice, as well as experiments in rat and bovine parathyroid cell cultures, have disclosed major effects of FGF-23 in target tissues. [6][7][8][9] The net effect of FGF-23 expression is increased urinary phosphate excretion and diminished calcitriol and PTH synthesis. In contrast to the effects of FGF-23 in target tissues, the physiological stimuli of FGF-23 secretion have not been elucidated as clearly so far. 1,25-dihydroxyvitamin D increases FGF-23 expression as part of a hormonal feedback loop. This effect is mediated via vitamin D receptor (VDR) action: mice lacking VDR display markedly decreased FGF-23 levels. 10 The role of phosphate in FGF-23 expression in humans is still subject to controversial debate. 11,12 By contrast, PTH has long been recognized as inducing elevations in 1,25-dihydroxyvitamin D via induction of 25-hydroxyvitamin D-1α-hydroxylase (Cyp27b1) in the kidneys, exerting the opposite effect on 1,25-dihydroxyvitamin D synthesis to that of FGF-23.
In a transgenic mouse model of primary hyperparathyroidism it was postulated that PTH exerts a direct effect on FGF-23 expression in bone cells of mouse calvaria and that osteoblast activation might be important in the regulation of FGF-23. 13 After parathyroidectomy, FGF-23 levels decreased to normal in this animal study, but changes in calcium, phosphate and calcitriol were also noted, potentially confounding the effect of PTH on FGF-23 secretion. However, an effect of PTH on FGF-23 secretion could not be proven definitively in humans suffering from primary hyperparathyroidism, 14 which might point to the existence of species differences. Furthermore, FGF-23 has been proposed to represent an independent risk factor of mortality in end-stage renal disease patients. 15 Many patients on renal replacement therapy or with advanced renal insufficiency develop secondary hyperparathyroidism. Therefore, an independent effect of PTH on FGF-23 secretion would be of relevant clinical interest. So far, the physiological role of chronically elevated PTH levels on FGF-23 secretion in osteoblasts, independent of vitamin D hormones, has not been studied in a human cell model in vitro. Therefore, this study sought to investigate the effect of three different dosages of the 1-34 PTH fragment (0, 10 and 100 pmol/L) on FGF-23 expression during osteogenic differentiation of human bone marrow cells (BMC) in vitro, in a cell culture lacking vitamin D.
Cell culture
The approval of the institutional review board of the Heinrich-Heine-Universität Düsseldorf, Germany, was granted prior to the beginning of the study. Human bone marrow was obtained by Jamshidi vacuum aspiration from the posterior iliac crest of three different donors, each with normal kidney function (40- and 33-year-old male and a 21-year-old female individual). The bone marrow cells were cultivated in Dulbecco's modified Eagle's low-glucose medium (DMEM, PAA Laboratories, Cölbe, Germany) with 20% fetal bovine serum (PAA), 100 U/mL penicillin and 100 mg/mL streptomycin (PAA) in culture flasks and incubated in 5% CO2 at 37°C, as described earlier. 16 The medium was changed every three days. After reaching confluence, the cells were harvested with EDTA/trypsin and transferred into 24-well culture plates (Nunc, Wiesbaden, Germany) at a density of 5000 human bone marrow cells/cm². These cell cultures were cultivated for 28 days either without PTH or with 10 pmol/L or 100 pmol/L 1-34 PTH in cell culture medium (DMEM) with osteogenic supplements (0.1 µmol/L dexamethasone, 50 µmol/L ascorbic acid phosphate, 20 mmol/L glycerolphosphate; Sigma, Taufkirchen, Germany) under standard cell culture conditions. Defined amounts of cell supernatants were harvested twice a week. For this purpose, the medium was incubated with the cells for 3 or 4 days, respectively. These media were pooled according to incubation times of 1, 2, 3 or 4 weeks and used for the ELISA measurements. Since the culture media contained 20% FCS, serum was included in the samples. Cell viability curves were calculated to exclude a confounding effect of PTH on cell proliferation (Figure 1).
Immunocytochemical staining
After 28 days of PTH stimulation, cell monolayers were stained with several antibodies, as described previously. 17
Real-time polymerase chain reaction
mRNA was isolated and a one-step RT-PCR was performed according to the manufacturer's protocols (RNeasy® kit and OneStep RT-PCR kit, Qiagen, Hilden, Germany). GAPDH served as housekeeping mRNA. Table 1 shows the primers used.
ELISA detection of FGF-23, ostase and TRAP-5b
Three sandwich ELISAs were performed according to the manufacturers' instructions [FGF-23: Immutopics Inc., San Clemente, CA, USA; ostase and TRAP-5b: Immunodiagnostic Systems Ltd. (IDS), Boldon, UK] for quantification of FGF-23, ostase (bone-specific alkaline phosphatase) and the active isoform 5b of tartrate-resistant acid phosphatase (TRAP-5b) in cell culture supernatants. The concentrations were calculated using standard curves following the manufacturers' respective assay protocols. The FGF-23 assay by Immutopics was used to measure intact FGF-23 in human serum, plasma and other biological fluids.
Photometric detection of PO4²⁻
Phosphate concentration was measured via its reaction with ammonium molybdate and sulphuric acid, forming an inorganic phosphomolybdate complex which can be detected at 340 nm (DiaSys Diagnostic Systems). The concentration was calculated using standard curves as described by the manufacturer. FGF-23 concentrations in pg/mL were determined once a week at weeks 2, 3 and 4 in the cell culture supernatants of each donor.
Statistics
The Wilcoxon paired rank sum test was used to compare data from weeks 2, 3 and 4. A P-value ≤0.050 was considered significant.
Results
For all three donors a positive staining for CD105 could be detected, without any influence of increasing PTH concentrations. While only very few cells stained positive, the majority of the cells were negative for the hematopoietic marker CD34, independent of the 1-34 PTH concentration added. The transcription factor Runx-2 and the osteoblast products RANKL and osteocalcin were identified in all experimental approaches without any noticeable influence of PTH. In contrast, elevated ostase levels were detected in 2 out of 3 donors; here, bone marrow cells at higher PTH concentrations produced significantly increased ostase concentrations. The average dose-dependent levels of ostase (µg/L) were 5.75±1.76 without PTH, 7.25±0.29 with 10 pmol/L PTH and 13.29±1.82 with 100 pmol/L PTH, respectively. The osteoclastic receptor RANK was present in all cell cultures and its levels were independent of the PTH concentration. However, the amount of the osteoclastic product TRAP-5b (U/L) increased with higher PTH concentrations, ranging from 2.98±0.72 without PTH to 3.36±1.0 with 10 pmol/L PTH and 4.74±0.50 with 100 pmol/L PTH, respectively. As further evidence of at least partial differentiation towards the osteoclastic lineage, RANK was detected semi-quantitatively by immunohistochemistry (RANK data not shown). FGF-23 concentrations (pg/mL) were similar following incubation with 0 and 10 pmol/L 1-34 PTH, ranging from 14.1±3.6 without PTH to 13.7±4.0 with 10 pmol/L PTH (P=0.84). In contrast, stimulation with 100 pmol/L PTH led to an increase in FGF-23 levels to 17.6±3.4 pg/mL (P=0.047 and 0.040) (Figure 3). Individual cell cultures of each donor also demonstrated an increase in FGF-23 levels during incubation with 100 pmol/L PTH at weeks 2, 3 and 4 (Figure 4). Phosphate levels were significantly increased due to the stimulation protocol, yet remained stable at these levels at all time points in all cell cultures: 37.6±2.1 mg/dL (no PTH), 37.6±2.5 mg/dL (10 pmol/L PTH) and 38.5±1.6 mg/dL (100 pmol/L PTH).
Discussion
FGF-23 has been described as a pathogenic factor in various genetic syndromes and plays a major role in phosphate and vitamin D metabolism in patients suffering from chronic kidney disease. 18
We used 1-34 PTH concentrations of 10 and 100 pmol/L, as these levels are frequently encountered in patients with chronic kidney disease or end-stage renal disease on hemodialysis. Our in vitro results suggest a basal FGF-23 production in BMC after osteogenic differentiation, independent of changes in vitamin D and PTH levels. Furthermore, a dose-dependent stimulatory effect of 1-34 PTH on FGF-23 secretion might be suggested. Based on the presence of Runx-2 and osteocalcin, osteoblastic differentiation could be identified in all cultures. We could also detect expression of RANK as evidence of an osteogenic differentiation towards the osteoclastic lineage. There was no correlation between these proteins and the PTH concentrations. Detectable increases in the marker proteins ostase and TRAP-5b imply an increased bone metabolism for both osteoblasts and osteoclasts, depending on the PTH concentration. So far, PTH has been postulated to exert an indirect effect on osteoclast activity via osteoblastic stimulation of the RANKL-RANK pathway. The increased osteoclast activity in our cultures did not seem to be signaled via the RANKL-RANK pathway, since neither marker correlated with the three different PTH concentrations. In our model, only the higher 1-34 PTH concentration led to an increase in FGF-23 levels in all three donor cell cultures over a period of four weeks. Vitamin D and other vitamin D derivatives were not part of the stimulation protocol, suggesting a vitamin D-independent and possibly dose-dependent effect of high PTH dosages on FGF-23 expression. We could also measure a basal FGF-23 secretion in the cell cultures not stimulated with exogenous 1-34 PTH, possibly due to elevated phosphate levels in the culture milieu. This would be in line with a stimulatory effect of phosphate on FGF-23 production, although a study by Miyagawa and co-workers did not show an effect of phosphate on FGF-23 expression in an in vitro model of osteocytes from 10-week-old mice. 19 Therefore, changes in phosphate levels were avoided to minimize any potential confounding of FGF-23 expression. The steady increase of FGF-23 levels in the cell cultures of all three donors might potentially be attributed to a constantly growing differentiation of BMC towards an osteoblastic lineage after osteogenic stimulation. It has long been known that full-length (1-84) PTH is processed after secretion, and the accumulation of different PTH fragments in ESRD patients is well documented. Although discussed controversially, some PTH fragments have been suggested to exert physiological effects in different cell systems. A 7-34 PTH fragment has been described but was not shown to exert physiological effects, at least on bone cells. 20 We chose the 1-34 PTH fragment instead of full-length 1-84 PTH in our cell culture model to measure only the biologically intact molecule and to minimize any potential confounding of FGF-23 secretion by unexpected degradation of 1-84 PTH or actions of PTH fragments. Due to the close relationship between PTH, calcitriol and phosphate, the effects of PTH on FGF-23 secretion have been discussed controversially so far. Studies in rats after parathyroidectomy or induction of renal failure have demonstrated a positive effect of PTH on FGF-23 secretion. 21,22 However, various other investigations in rodents could not demonstrate a stimulatory effect of PTH on FGF-23 secretion. [23][24][25] Human studies have so far been more in line with a stimulatory effect of PTH on FGF-23 production.
An experimental approach using intravenous infusions of 1-34 PTH over several hours in 20 healthy human subjects was in line with a physiological stimulation of FGF-23 by calcitriol, but did not shed more light on the potential independent effect of PTH on FGF-23 secretion. 26 Kobayashi et al. investigated FGF-23 secretion patterns in 50 patients with primary hyperparathyroidism. 27 In this study, FGF-23 levels also declined after parathyroidectomy on the first postoperative day but, in contrast to various other investigations on the regulation of FGF-23 secretion, 28,29 phosphate and calcitriol levels were not associated with changes in FGF-23 levels. Instead, calcium seemed to be positively correlated with increased FGF-23 levels, suggesting different secretion patterns in patients with primary hyperparathyroidism and patients with advanced renal failure. Another clinical approach, using a biointact FGF-23 assay, investigated the effect of parathyroidectomy in 15 patients on dialysis and found a significant fall in FGF-23 postoperatively. 30 However, phosphate levels also fell, which might potentially confound the effect of PTH on FGF-23 secretion in that study. In another study, in postmenopausal women, daily subcutaneous injections of (1-34) PTH resulted in an increase in FGF-23 levels within 3 months of initiating therapy. 31 Nevertheless, in contrast to the aforementioned human studies, a recent investigation by Gutiérrez and co-workers, applying a 6-hour infusion protocol of (1-34) PTH in healthy adult volunteers, not only failed to demonstrate an increase in FGF-23 levels, but also described a significant decline in FGF-23 concentrations despite increasing calcitriol levels. 32
Limitations and strengths
There are several limitations to our study. Firstly, our cell milieu contained supra-physiological, though stable, phosphate levels. Secondly, we investigated BMC from only three healthy donors; further studies using this approach with more subjects and different PTH concentrations are clearly warranted. Thirdly, PHEX and Dmp1, other proposed local regulators of FGF-23, have not been studied in our current model, nor have the possible effects of post-secretory proteolytic processing of FGF-23 by, e.g., furin-like convertase or glycosyl transferase. Nevertheless, we tried to minimize any confounding by measuring only full-length intact FGF-23 levels, as this molecule is thought to be the only biologically active variant. Suggested major strengths of our in vitro model are the use of human cells, avoiding confounding by potential species differences, and the definite lack of vitamin D compounds in the culture milieu, allowing investigation of an independent effect of PTH on FGF-23 expression.
Conclusions
FGF-23 is expressed in human bone marrow cells during osteogenic differentiation over four weeks in vitro, independently of 1-34 PTH and vitamin D. Compared to stimulation with no PTH and with only moderately elevated PTH levels, stimulation with high dosages of 1-34 PTH increases FGF-23 expression, suggesting a dose-dependent PTH effect on FGF-23 production. Further studies are necessary to elucidate the independent effect of PTH on FGF-23 secretion, and different PTH concentrations should be tested.
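As a sketch of the paired comparison described in the Statistics section, the Wilcoxon signed-rank test for FGF-23 levels under 0 versus 100 pmol/L 1-34 PTH could be run in Python as follows; the values are made-up placeholders, not the study's raw data.

from scipy.stats import wilcoxon

# Illustrative paired FGF-23 levels (pg/mL) across donors and time points.
fgf23_no_pth  = [13.2, 14.5, 14.6, 12.9, 15.0, 14.4]
fgf23_100_pth = [16.8, 18.1, 17.4, 16.9, 18.3, 17.9]

stat, p_value = wilcoxon(fgf23_no_pth, fgf23_100_pth)
print(f"W = {stat:.1f}, P = {p_value:.3f}")  # P <= 0.050 would be called significant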
2016-05-13T21:41:05.168Z
2014-04-22T00:00:00.000
{ "year": 2014, "sha1": "f58df91865c62a353f4a5ea6aa4a070f397835c0", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4081/or.2014.5314", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f58df91865c62a353f4a5ea6aa4a070f397835c0", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
256597674
pes2o/s2orc
v3-fos-license
Ways to study changes in psychosocial work factors
Unfavorable psychosocial work factors are associated with poorer worker health, such as depression and cardiovascular disease (1). Most evidence is based on studies linking the exposure level at baseline to worker outcomes at follow-up (1). A next step to better understand the impact of psychosocial work factors on worker health is to investigate changes in exposure levels (2).
There is no consensus on how to operationalize changes in psychosocial work factors. This is not surprising for three reasons. Firstly, psychosocial work factors are most often measured by self-report (eg, 3, 4). This makes interpretation and generalization more difficult compared to other, more objectively measured exposure–response relationships such as, eg, exposure to chemicals. Due to these self-reports, measured differences in psychosocial factors may be biased and thus not always reflect a change in the work environment. Several attempts have been made to counteract this challenge in the field by using indicators from registers (5), cluster unit analyses (6), and job exposure matrices (7). Secondly, there is no consensus on which psychosocial work factors in what combinations are unfavorable for workers. For example, the Danish Psychosocial Work Environment Questionnaire (DPQ) covers as many as 38 psychosocial constructs (4). Knowledge is still limited: which ones are most important for worker health and which ones are of less importance. Because of this knowledge gap, we still do not know what the exposure–response relationship looks like. What is clear, though, is that exposure to high job strain (ie, high job demands in combination with low job control) is detrimental for worker health (1, 8). Thirdly, there is no consensus on cut-off scores for unhealthy psychosocial work factors, which makes it difficult to interpret changes in these factors and, thus, to compare results across studies.
To examine to what extent and in what manner changes in psychosocial work factors are being studied, we conducted a search in Pubmed on 25 January 2023 using the search terms 'change', 'employees or workers' and 'work factors' in the title or abstract, which yielded 7461 hits (table 1). The Scandinavian Journal of Work, Environment and Health (SJWEH) published 82 papers including these search terms. Of these 82 published papers, 11 studies analyzed changes in psychosocial work factors (9–19). In these SJWEH publications, we found two different approaches to investigate change in psychosocial work factors. Most studies focused on membership of a high-risk group (9–14). First, they defined cut-offs based on the distribution of data points, and subsequently divided respondents into four groups based on their scores over time: stable unfavorable, worsening, improving, and stable favorable psychosocial work factors (9–12, 14), in which the 'stable unfavorable' and 'worsening' groups can be considered as high-risk groups. Moreover, many studies combined job demands and resources into one measure for job strain and analyzed changes in this composite score (11, 12), which was subsequently related to worker outcomes.
Table 1. PubMed search results (filtered to the last 10 years: 3698 results).

As no golden route exists to analyze change in psychosocial work factors, in the following we present some options and describe their pros and cons. This editorial is a call not only to carefully think through how to define change in a study, but also to share the arguments for the chosen definition in the methods section and to be transparent about the underlying assumptions. The key issue to consider is the approach to studying change in exposure. There are two different ways to study change: firstly, studying changes in exposure levels (eg, an increase of 1 point) that will lead to an increase of the risk at the individual level; secondly, studying transitions in membership of high-risk groups. Below we explain both options and illustrate them with examples; a small worked sketch of the contrast follows at the end of this passage.

Option 1: Changes in exposure levels

When a linear exposure–response relationship is assumed, every increase in exposure is associated with an increase in risk (15–17, 19). For example, Milner and colleagues analyzed how the psychosocial quality of a job was associated with mental health outcomes using longitudinal fixed effects regression models (16). The disadvantage of this method is that every change is considered equally relevant. This means that every unit of change (eg, 1 point or 1 standard deviation) in score has a similar effect on the outcome, independent of the baseline value. However, since psychosocial factors are often measured as ordinal variables, we know that the differences between the answering categories are not equal. For example, answering options in dimensions of the Copenhagen Psychosocial Questionnaire (3) are on a scale from 1 ‘always’ to 5 ‘never/hardly ever’, and a change in exposure from 1 ‘always’ to 2 ‘often’ versus from 2 ‘often’ to 3 ‘sometimes’ may have a different effect on the outcome (3). Most studies on changes focused on within-person changes; in our search we found only one study applying a population approach to study trends in psychosocial work factors over time (17).

Option 2: Changes in high-risk group membership

The second approach involves membership of a high-risk group by crossing a threshold or cut-off score. A worker can enter a high-risk group when crossing the threshold, even with a minor change in score. Yet another person can remain in the low-risk group despite a larger change in score. When the threshold for high risk is, for example, 3.5, a worker with an increase of 2 points from 1.0 to 3.0 will not become at risk, whereas a worker with a score increase of 0.5 points from 3.1 to 3.6 will become at risk. The increase itself, 2 points versus 0.5 points, is not informative here; what matters is whether the threshold from the low-risk to the high-risk group is crossed. In this case, it is important to consider that not all workers crossing the threshold will have a relevant change in score, as the group of workers very close to the threshold will only need a little change to cross it.
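To make the contrast concrete, the following minimal Python sketch (our own illustration, not taken from any of the cited studies) classifies two-wave exposure scores into the four trajectory groups described above; the 3.5 cut-off and the two workers’ scores mirror the numbers in the example.

```python
import pandas as pd

# Hypothetical cut-off defining "high risk" (scores above it are unfavorable).
CUTOFF = 3.5

def trajectory_group(baseline: float, follow_up: float, cutoff: float = CUTOFF) -> str:
    high_t0 = baseline > cutoff
    high_t1 = follow_up > cutoff
    if high_t0 and high_t1:
        return "stable unfavorable"
    if not high_t0 and high_t1:
        return "worsening"
    if high_t0 and not high_t1:
        return "improving"
    return "stable favorable"

# Toy data: worker B crosses the threshold with a 0.5-point change,
# while worker A's 2-point increase leaves group membership unchanged.
df = pd.DataFrame({"worker": ["A", "B"], "t0": [1.0, 3.1], "t1": [3.0, 3.6]})
df["group"] = [trajectory_group(b, f) for b, f in zip(df["t0"], df["t1"])]
print(df)
```

In this toy data, worker A remains ‘stable favorable’ despite the larger score change, while worker B becomes ‘worsening’; the change-in-exposure-levels approach would instead weight A’s change four times as heavily as B’s.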
When taking the high-risk approach, it is important to define what we consider a high risk. Ideally, the relevant cut-off scores are known. Alternatively, the high-risk group can be defined based on the distribution of scores within the population, based on, eg, the median (9–11, 14, 18), the (upper) tertile (13), or any exposure (versus no exposure) (10). The method using the median as a cut-off is most often used in studies on change published in SJWEH. For example, Saastamoinen and colleagues performed a median split for job demands and control variables separately, which enabled them to construct a variable for job strain based on high/low demands/control (9). Too and colleagues analyzed data from four waves of the Whitehall study based on a tertile split and showed how job control could vary between low, medium, and high levels over time (13). With such a cut-off, one takes into account the distribution of scores within a population, and the researchers define which percentage of the population is at high risk. However, the cut-off score for being at high risk may differ substantially between populations. This makes it difficult to compare exposure–response associations between studies. When demand and control variables are combined into a job strain measure (eg, 9), the comparison becomes even more challenging. Moreover, how sure can we be that a specific percentage of our research population is at risk? By assuming that by definition 50% (median split), 33% (tertile split), or 25% (quartile split) of the sample is at risk, the exposure prevalence is defined by the researcher, and a similar exposure prevalence across studies may hide completely different exposure patterns.

Both the changes-in-exposure-levels approach and the changes-in-high-risk-group-membership approach have their pros and cons (table 2). In both approaches, comparability across studies is an issue because (i) different questionnaires and rating scales are used to measure psychosocial work factors and (ii) different choices are made for defining levels of psychosocial work factors as unfavorable (eg, different cut-off values).

Analyzing changes in psychosocial work factors requires clarity on the approach. When taking the changes-in-exposure-levels approach, an interpretation of the meaning of a certain change (eg, from 1 to 2) is needed. When taking the high-risk approach, transparent choices have to be made to define the high-risk group(s). This is important because these choices can heavily influence the conclusions. Hence, we strongly recommend explaining the choices and the underlying assumptions made, to enhance the interpretation of the results. In that way, the evidence becomes more comparable, which will improve our understanding of exposure–response associations in studies on psychosocial factors at work and worker health.

Table 2. Overview of pros and cons of the change-in-exposure-levels approach and the high-risk-group approach.
Rylene Dielectrophores for Capacitive Energy Storage

The main design principles of the potent rylene-based class of dielectrophores are established in the present article. The proposed class of dielectrophores comprises a polarizable aromatic core, conjugated with aromatic subunits, and bears resistive groups on the periphery. The aromatic subunits may comprise donor and acceptor groups to achieve the desired level of polarizability of the molecule. Appropriate positions for donor and acceptor groups are established by quantum chemistry modeling. The design principles are demonstrated through the molecular design of an efficient rylene-based dielectrophore.

…, so to speak, is still being expected to arrive. The currently produced volume of electricity, about 25,000 terawatt hours per year, with a reasonable requirement of a 24-hour storage cycle, demands about 10 megatons of energy storage material. Assuming that 1 kWh is stored in 1 kg of energy storage material, it could take more than 10 years to produce the required amount of material.

Design principles start with the very basic assumption that there are no chemical transformations and no moving parts in future energy storage devices. Electrons are moved into the device and electrons are moved out of the device: it is a capacitor, as designed in 1745 by the Prussian scientist Ewald Georg von Kleist and, independently, by the Dutch physicist Pieter van Musschenbroek [2]. In setting up the design rules for energy storage, we apply the general principles of efficient capacitors at the molecular level. The dielectric species in a high-energy-storage capacitor should be polarizable and, at the same time, should be resistive, keep the leakage current low, and maintain the polarization energy without breakdown. These properties seem to contradict each other, as the polarization is caused by the motion of charges, but this motion must be stopped at certain points to prevent conduction and charge recombination at the electrodes. Hence, our molecular units should contain at least two parts, the inner one being responsible for the polarization and the peripheral one having the required resistance; in other words, a polarizable core in a resistive envelope. In this paper, we discuss a new class of molecules, dielectrophores, with a polarizable core and a resistive peripheral structure. Recently [3], we presented a rylene-based dielectrophore structure that combines the required capacitor properties. This class of aromatic molecules is known to be highly polarizable [4]. The full spectrum of colors that is easily achieved with rylene-based molecules is a good illustration of the uniquely controllable polarizability of these species [4]. This exclusive feature has been recognized and applied in the rylene-based dye industry for many years; however, it has had no application in energy storage. In addition to the polarizability, rylenes have convenient physical properties and good mechanical flexibility, which provides an opportunity to create a particular form of capacitor for each application [5]. Peripheral aliphatic chains provide solubility to the molecular system and simplify the synthesis and processability. Perylene-based species (Figure 1) are among the planar polycyclic molecular systems which form column-like supramolecular stacks (self-assemblies) by π-π interactions, see Figure 2 [6]. Aliphatic alkyl or fluoroalkyl chains in such ordered structures of perylene-based dielectrophores are sufficient to form a so-called "insulating envelope" for a useful resistivity of >10^16 Ω·cm [7].
For the optimal formation of the described self-assemblies shown in Figure 2, the aromatic core needs to have at least a two-fold core symmetry. Therefore, perylenediimides (PDI) and their derivatives can be convenient starting materials for the modular synthesis of our dielectrophores. Synthesis of cores with higher symmetry (e.g. porphyrins, triphenylenes) represents a significant challenge. Notably, when working with PDI, lateral or longitudinal extensions, as well as substitution in the bay- and ortho-positions (see Figure 3), can easily be performed to optimize the polarizability. Meanwhile, by manipulating the imide function, we can modify the PDI into more extended conjugated aryl-pyrimidine or aryl-imidazole derivatives and, at the same time, add the insulating subunits. This modularity of the rylenes makes them unique candidates compared to other π-stackable cores, such as porphyrins and benzocoronenes.

In this article, we demonstrate fundamental principles of polarizability tuning using a scope of perylene derivatives. Going through the main molecular elements, we determine their impact on both the linear polarizability (α) and the hyperpolarizability. In the approximation that we suggest here, we only consider the first hyperpolarizability (β), so as to be able to screen a significant number of molecules. Hence, our screening mainly focuses on the calculated values of α and β, instead of on their individual components (the ground state dipole moment for α; the energy gap between the two states, the transition dipole moment between the two states, and the difference in dipole moment between the two states for β). Since the dielectrophores that we consider have a generally linear shape, their polarizabilities are dominated by the longitudinal tensor components (along the x-axis). During self-assembly, this molecular direction would be aligned with the electric field produced by the electrodes, and, therefore, we only report the longitudinal tensor components α_xx and β_xxx, with the induced dipole moment of the dielectrophore given by

p_x = α_xx·E_x + (1/2)·β_xxx·E_x²,

where we keep only the first two terms in the power expansion. In addition, the contribution of the resistive groups to the total polarizability is minimal; therefore, we use methyl substituents as resistive groups in our calculations.

Results and Discussion

We begin our study by determining the dependence of α and β upon the electronic effects of the donor and acceptor groups, and upon longitudinal extension of the rylene aromatic core, since increased π-π stacking has been seen for higher rylenes (n > 2; Figure 3). At this step, we do not consider substitutions at peri-positions (Figure 3), since these positions are mainly used for the addition of solubilizing chains. Substituents at peri-positions have a negligible influence on the absorption and emission of the core (at least within a single molecule), because the nodes of the HOMO and LUMO orbitals are located at the imide nitrogens [8]. In our study, we propose further tuning of the polarizability by adding donor/acceptor functional groups. These groups can be considered at various positions within the general structure of the rylene-based dielectrophore (R_1–R_8; Figure 4). In our screening, we take advantage of the amino (as well as alkylamine) and nitro groups, since they are among the strongest donor and acceptor groups.
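As a numerical illustration of this two-term expansion, the short sketch below compares hypothetical candidates by their longitudinal induced dipole; the α_xx and β_xxx values and the field strength are invented placeholders, not results from our screening.

```python
# Illustrative sketch: induced dipole p_x = a_xx*E_x + 0.5*b_xxx*E_x**2,
# the two-term power expansion used in the screening. All numbers below
# (polarizabilities in a.u. and the field) are hypothetical placeholders.
candidates = {
    "core only":                       {"alpha_xx": 1.2e3, "beta_xxx": 5.0e2},
    "donor/acceptor at opposite ends": {"alpha_xx": 1.5e3, "beta_xxx": 4.0e5},
}

def induced_dipole(alpha_xx: float, beta_xxx: float, field: float) -> float:
    """Longitudinal induced dipole from the first two expansion terms."""
    return alpha_xx * field + 0.5 * beta_xxx * field ** 2

for name, c in candidates.items():
    p = induced_dipole(c["alpha_xx"], c["beta_xxx"], field=5e-3)
    print(f"{name}: p_x = {p:.3e} a.u.")
```

The quadratic term shows why a large β_xxx matters at the working fields of a charged capacitor, even when α_xx values stay within the same order of magnitude.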
In addition, the amino group provides a convenient spot for the insertion of resistive groups (long alkyl chains) without interfering directly with the electronic properties of the conjugated core. The suggested molecules were analyzed with the Gaussian09 software [9]; their polarizabilities and hyperpolarizabilities were determined using the B3LYP method with 6-31G+ basis sets. Initial screening of the perylene-based dielectrophores (n = 2, Table 1) shows that the electronic properties and positions of the functional groups have minimal effects on the linear polarizability (entries 1–6). Only a single order-of-magnitude difference in α values was achieved within these first entries. On the other hand, entries 7 to 13 demonstrate that longitudinal extension greatly affects the non-linear polarizability by distorting the electronic distribution. One should keep in mind that this three-order-of-magnitude leap in β values (from entry 7 to entry 13) would not be reached without electron-donating and electron-withdrawing groups at opposite sides of the molecules (R_1, R_2, R_7, R_8; Figure 4). Generally, the first-order hyperpolarizability (β) is expected to be higher when the donor and acceptor groups are at opposite sides of the conjugated system. The discovered polarizability trend falls within the well-established methodologies of push-pull chromophore design, developed for non-linear optical (NLO) properties [10]. The internal charge transfer (ICT) is responsible for the polarization of the chromophore and the generation of the molecular dipole. At the second step of our screening (Table 2; α_xx and β_xxx in a.u.), we examined the effect of common donor and acceptor groups on the linear and non-linear polarizability of the less synthetically challenging PDI-benzimidazole dielectrophores shown in Figure 5. At this step, we compare SYN and ANTI PDI-based dielectrophore structures (Figure 5), since these structures are the regioisomers accessible synthetically. Here, the α values again stayed within the same order of magnitude (Table 2). A notable leap in β values can be observed in entry 5, when we double the number of donor and acceptor groups at opposite sides of the molecule (donor groups as R_2, R_3 and acceptor groups as R_5, R_6; Figure 5). In the next entry (entry 6), we can see that the β value of our rylene-based structure drops significantly when both donor and acceptor groups are placed at each side (donor groups as R_3, R_5 and acceptor groups as R_2, R_6; Figure 5). In this way, we confirm that it is crucial for a molecule with high non-linear polarizability to have all donor groups on one side and all acceptor groups on the opposite side (as in entry 5). Later, we found that having only a single donor and a single acceptor group correspon-

Our examples possess simple donor and acceptor groups, as well as relatively short conjugated systems. Replacing the donor and acceptor groups with more complex subunits, together with an extended longitudinal axis (rylenes with n > 4), could provide even more highly polarizable molecules. However, we must take the synthetic aspect seriously into consideration, since it is an inevitable element of the molecular design. The synthesis of more complex molecules would be quite challenging. In addition, we should not forget about the much higher resistivity requirements of such systems. In our ideal perylene-based dielectrophore, resistive groups should be added symmetrically at both ends of the molecules, close to the terminal donor and acceptor groups.
Adding two long alkyl chains at both ends, via a phenyl linker, allows the relative symmetry of the molecule to be maintained so as to favor columnar arrangements (Figure 2) rather than alternative micellar structures [11]. Considering the calculation results, as well as the synthetic flexibility, price, and resistivity requirements, we worked out the structure of a proposed efficient dielectrophore molecule, as shown in Figure 6. The suggested dielectrophore follows the principles discussed above and contains donor and acceptor groups at the corresponding positions of the R_1 and R_4 substituents, as well as resistive alkane chains on the periphery of the molecule.

Conclusions

We have discussed the feasibility of heavy-duty capacitors based on dielectrophores. Using quantum chemistry calculations on a series of perylene-based molecules, we demonstrated that the introduction of donor and acceptor groups at certain positions increases the hyperpolarizability. This result is expected on the basis of the literature and data produced in non-linear optics studies. Alternatively, a rise in polarizability can be achieved by expansion of the aromatic core, i.e. by increasing the number of electrons available for displacement along the molecular structure. In addition to the calculations, we suggested a structure for a future dielectrophore. It includes a highly polarizable aromatic core and highly resistive aliphatic tails. π-π interaction of the conjugated cores would lead to column-like molecular self-assembly. The proposed molecule is expected to have large linear and non-linear polarizabilities, high molecular density, and low leakage current, which makes it applicable to the capacitive energy storage industry.
Machine Learning for Predictive Analytics in the Improvement of English Speech Feature Recognition

The use of deep learning to improve English speech has seen tremendous development in recent years. This study evaluates the noise present in the English speech environment, employs a bidirectional search method to select the optimum feature set, and applies a fast correlation filter to remove redundant features in order to increase the accuracy of English speech feature identification. In addition, this article designs a low-pass filter in the complex cepstrum domain to filter the room impulse response in order to obtain the estimated value of the complex cepstrum of the original speech signal; this estimate is then transformed into the time domain to obtain the estimated value of the original speech signal. Moreover, this paper proposes a corresponding noise elimination model for removing noise from English speech in a reverberant environment. It also designs a complex cepstrum domain filter in order to conduct simulation research on the different characteristics of the reverberation signal and the pure speech signal in the complex cepstrum domain. Finally, this study develops an English speech feature recognition model founded on a deep neural network and validates the model experimentally.

Introduction

English speech enhancement based on a regression DNN has been proposed, and experiments show that such algorithms can achieve better performance than traditional English speech enhancement algorithms. However, although deep-learning-based English speech enhancement uses many noise types and training corpora in the training data preparation stage, there are still many problems in generalizing to real data, such as distortion of English speech under a low signal-to-noise ratio, unstable performance on mismatched noise types, and mismatched speaking styles [1]. In a system environment disturbed by noise, the accuracy of English speech recognition is significantly reduced, so the ideal effect cannot be achieved in practical applications, and the system is disturbed even more under low signal-to-noise-ratio conditions. For an English speech signal detection system to work normally, it is necessary to extract as much pure English speech as possible from the English speech signal contaminated by noise when the noise source is unknown; that is, under the premise of suppressing noise, the quality of perceived English speech is improved and protected. This kind of English speech processing technology has great research significance and application value for the related fields of English speech signal processing. As far as current English speech signal processing technology is concerned, the effect of English speech detection in a weak noise environment is relatively good; however, the detection performance drops sharply in a strongly noisy environment. Therefore, the detection of English speech signals under low signal-to-noise-ratio conditions is still a subject to be studied in depth [2]. Analog signals are used to represent the English voice signal at acquisition; however, because of the cut-off frequency, the English voice is present in the storage device only as a digital signal as far as the English voice receiver is concerned.
As a result, processing starts by analysing the analogue English speech that has been digitized, which typically entails amplification and gain control, prefiltering, sampling, quantization, and coding [3]. At present, English speech signal processing technology is developing rapidly in the field of information research, and its research scope involves cutting-edge scientific research projects with important research and application value. Moreover, informatization has become a basic requirement of modern society. In the civilian field, microphone array English speech signal processing technology is widely used in multimedia exhibition halls with large spaces and in the hearing aid market. The English speech processing of a microphone array can adaptively control the beam direction, suppress interference signals from unknown directions in multiple directions, and achieve higher resolution. Therefore, in recent years, the development of adaptive processing technology has become more rapid, and the technology has also been used in other fields. However, the related algorithms for microphone array English speech signal processing require a lot of floating-point operations. In current applications, most systems use DSP processors to operate on the collected signals. Although DSPs have strong floating-point capability, they have disadvantages such as poor real-time serial operation and susceptibility to interference, so they are not adequate for more demanding processing systems. This paper instead employs an FPGA-based English voice signal processing design: the processor chip is inexpensive, compact, and capable of multichannel synchronous high-speed operation. The development of FPGA-based English speech signal processing can thereby address the inadequacies of the current processing systems and has significant implications for a wide range of applications. In view of this, based on deep neural networks, this paper studies English speech feature recognition technology and proposes a reliable English speech feature recognition algorithm to provide a reference for subsequent English speech feature recognition.

Related Work

Research on endpoint detection and speech enhancement of noisy speech signals has been conducted for more than 50 years, and significant progress has been made during this period. Voice endpoint detection technology was proposed by [4], mainly applied to the time allocation of communication channels in the communication transmission system developed there. The literature [5] proposed a system for reducing noise in the communication environment. The system introduces the concept that the noisy input voice signal is a superposition of the pure voice signal and the noise signal, and divides the sampled voice signal into multiple subbands for processing and analysis. The system is essentially a spectral subtraction technique, but it was implemented only in the analog domain. Thanks to the rapid development of digital signal processing algorithms and DSP (digital signal processing) hardware, speech signal detection methods based on spectral improvements were greatly developed, so speech signal noise reduction technology made great progress. The literature [6] proposed a "spectrum shaping" method, which uses amplitude clipping in the filter bank of the speech signal preprocessing stage to remove low-level excitations; this low-level excitation is considered a noise signal.
The literature [7] proposed spectral subtraction implemented in the digital domain. Spectral subtraction was applied to statistical spectrum estimation in [8]. Around the same time, a technique that combines noise reduction and speech enhancement was suggested in [9]. The literature [10] proposed a voice endpoint recognition technique that establishes distinct thresholds to identify the starting point and ending point of the signal by combining the short-term energy of the speech signal with the short-term zero-crossing rate. The literature [11] explored endpoint detection performance in greater detail and developed algorithms for performance comparisons using several energy characteristics of the signal, including square energy, logarithmic energy, and absolute-value energy. The optimum spectrum amplitude estimation and the best spectrum phase estimation were suggested by [12] using statistical prediction theory. The study's findings are frequently referenced in noise reduction studies, and the primary approach to noise reduction has since shifted to the challenge of predicting the spectrum amplitude of pure speech signals. Researchers have created further statistical spectrum estimation approaches, such as the minimum mean square error (MMSE) logarithmic spectrum amplitude estimation method, the maximum likelihood (ML) spectrum amplitude estimation method, and the maximum a posteriori (MAP) method. The Linear Predictive Coding (LPC) model and Kalman filter were utilised in [13] to reduce noise and raise the signal-to-noise ratio of speech signals. The literature [14] provided further endpoint detection algorithms through frequency-domain spectrum analysis of the voice signal, after using the Fourier transform to obtain the frequency-domain information of the voice signal. The literature [15] advocated for the speech signal's short-term stationarity and held that its parameter properties hold over a brief period of time. Segmentation methods based on LPC coefficients, methods based on speech parameters, and segmentation algorithms based on parameter filtering have been successively proposed. The literature [16] proposed an algorithm based on an artificial neural network that determines the different weights of the signal through fast convergence; its detection performance is significantly improved compared with early statistical decision-making algorithms. The literature [17] proposes applying wavelet transform technology to speech signal detection, which greatly reduces the computational complexity of the algorithm. The literature [18] researched the least squares method; this blind system identification method decomposes eigenvalues within frequency bands for processing. The literature [19] developed an adaptive filtering method that can combine Least Mean Squares (LMS) with adaptive filtering. However, the disadvantage is that there are many restrictive conditions: common zeros between channels hinder this method, and the rank of the correlation matrix of the sound source signal is required to be maximal. The literature [20] studied the use of multichannel methods for linear prediction; this method diagonalizes the covariance matrix of the speech signal to obtain the correlation characteristics of the signal. The literature [21] proposed using a virtual (image) model to simulate the impulse response of the room. This method relies on the stability of the channel.
However, under normal circumstances the environment changes randomly, and it is difficult to meet this requirement, so this method is harder to implement.

English Speech Feature Recognition Algorithm Based on Deep Learning

This section introduces the data set, the data preprocessing, and the extracted features; two effective feature selection methods are used in feature selection. In addition, this paper uses three different classifiers and compares their classification performance. We normalized all the data, as shown in the following formula:

â(n) = (a(n) − μ(n)) / σ(n),

where a(n) is the original sample, μ(n) and σ(n) are the mean and standard deviation of the nth segment of data, each segment is 1 minute long, and â(n) is the normalized sample. After preprocessing, each piece of data is equally segmented, each segment being 1 minute long, and then features are extracted from each segment of the data. In this paper, 16 features are extracted from the single-channel ECG signal.

Time Domain Characteristics. Based on the time domain, this study extracts features such as the mean value of the RR interval without detrending, the mean value of the detrended RR interval, the standard deviation of the RR interval, the maximum value of the RR interval, and the minimum value of the RR interval. Further features are the fraction of RR intervals where the distance between two adjacent RR intervals is greater than 50 ms, the range of RR intervals, the root mean square of the distance between adjacent RR intervals, and the standard deviation of the distance between adjacent RR intervals.

Frequency Domain Characteristics. In addition to the time domain, this paper also extracts a set of important frequency domain features. In order to extract the spectral characteristics of the RR signal, this paper performs fast Fourier transform (FFT) processing on the RR sequence and obtains four frequency domain characteristics, including the power of the very low frequency band, the power of the low frequency band, and the power of the high frequency band.

Nonlinear Characteristics. In addition to time domain and frequency domain features, this paper also extracts two nonlinear features: sample entropy and spectral entropy. Multiscale entropy (MSE) is used to describe the structural complexity of time series. Many kinds of entropy can be used as the core of multiscale entropy, such as approximate entropy and fuzzy entropy, at various time granularities. Multiscale entropy is increasingly used in sleep analysis. In this paper, sample entropy (SampEn) is used as the core of the multiscale entropy calculation. Given a signal x_i, i = 1:N of N data points, a coarse-grained time series y^(t) is first generated, where t is the scale factor: the signal is divided into non-overlapping windows of length t, and the average value of each window is calculated. Therefore, y^(1) is the original signal, and y^(t) is the coarse-grained sequence obtained by dividing the original sequence into windows of length t.
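The coarse-graining step lends itself to a compact implementation; the sketch below is a minimal illustration of our own (a toy series stands in for the RR data), not the code used in this study.

```python
import numpy as np

# Coarse-graining for multiscale entropy: average the signal over
# non-overlapping windows of length t; t = 1 returns the original series.
def coarse_grain(x: np.ndarray, t: int) -> np.ndarray:
    n = len(x) // t                       # number of complete windows
    return x[: n * t].reshape(n, t).mean(axis=1)

rr = np.random.default_rng(0).normal(0.8, 0.05, 600)      # toy RR series (s)
print(coarse_grain(rr, 1).shape, coarse_grain(rr, 3).shape)  # (600,) (200,)
```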
The calculation steps of sample entropy (SampEn) are as follows. First, the coarse-grained time series is formed into a set of m-dimensional vectors in order (m is the embedding dimension; m is set to 2 in this paper):

X_m(i) = [x(i), x(i+1), …, x(i+m−1)], 1 ≤ i ≤ N − m + 1.

We define the distance between X(i) and X(j) as d[X(i), X(j)], the largest difference between corresponding elements; namely,

d[X(i), X(j)] = max_{0≤k≤m−1} |x(i+k) − x(j+k)|.

For each value of i, we count the number B_i of vectors X(j) (j ≠ i) whose distance from X(i) does not exceed the tolerance r, and calculate its ratio to the total number of distances N − m, denoted by

C_i^m(r) = B_i / (N − m).

Then, the average value of C_i^m(r) is

C^m(r) = (1 / (N − m + 1)) Σ_{i=1}^{N−m+1} C_i^m(r).

The algorithm then increases the dimension by 1 to m + 1 and repeats the previous steps to compute C^{m+1}(r). Finally, the calculation formula of sample entropy is

SampEn(m, r, N) = −ln[ C^{m+1}(r) / C^m(r) ].

Spectral entropy (SpecEn) describes the flatness of the power spectral density (PSD) and indirectly reflects the irregularity of the time series. The larger the value of SpecEn, the flatter the shape of the PSD and, accordingly, the more irregular the distribution in the time domain. Conversely, the smaller the value of SpecEn, the denser the frequency spectrum and the lower the degree of irregularity of the PSD in the time-domain distribution. The spectral entropy is therefore also extracted as a feature.

In the sample training process, as the number of features increases, the time needed to evaluate the features and train the model increases, as does the model's complexity, while its generalization ability decreases. By removing unnecessary and duplicate features, feature selection lowers operating complexity. This study divides the feature selection process into two phases. The optimum feature set for classification is first selected using the bidirectional search (BDS) algorithm, and the redundant features are then removed using a fast correlation filter. Sequence forward selection (SFS) and sequence backward selection (SBS) are combined in the first step of the bidirectional search (BDS) method.

Bidirectional Search (BDS) Algorithm. Sequence forward selection (SFS): add each feature, one by one in turn, to an initially empty set A. Each time a feature is added, the accuracy of classification with the features in A is calculated. If the accuracy is higher than before the addition, the feature is effective and is kept in A; otherwise, the feature is ineffective and is removed from A. Sequence backward selection (SBS): remove each feature one by one from the full set S and calculate the accuracy of classification with the features remaining in S. If the accuracy is higher than before, continue; otherwise, keep the feature in S. Bidirectional search (BDS): run the forward and backward sequence selection methods at the same time; when the two searches return the same feature subset, the search stops.

mRMR Algorithm. In the second stage, in order to evaluate the synergy between features and construct a set of optimal features, this paper adopts a filtering method based on mutual information and the minimum-redundancy maximum-relevance (mRMR) criterion. The mRMR algorithm is based on mutual information. Given two random variables x and y with probability density functions p(x), p(y), and p(x, y), the mutual information is

I(x; y) = ∬ p(x, y) log[ p(x, y) / (p(x) p(y)) ] dx dy.

The goal of the algorithm is to find a feature subset S containing m features {x_i}. The maximum relevance criterion is

max D(S, C), D = (1/|S|) Σ_{x_i∈S} I(x_i; C),

where x_i is the i-th feature, C is the categorical variable, and S is the feature subset. The minimum redundancy criterion is

min R(S), R = (1/|S|²) Σ_{x_i, x_j∈S} I(x_i; x_j).

Combining the two criteria into a single objective function gives

max Φ(D, R), Φ = D − R.

Here X represents the complete set of features x_j, S represents the set of selected features x_i (of size m), C represents the class, and I represents the mutual information defined above; p(x), p(y), and p(x, y) are probability density functions, estimated by a kernel density estimator based on adaptive diffusion.

This paper uses three classifiers, support vector machine (SVM), AdaBoost, and random forest, to classify English speech features.

AdaBoost Method. In addition to SVM, this paper also uses the AdaBoost (AB) method. Boosting algorithms have a good classification effect. Boosting is an iterative procedure whose purpose is to combine several classification models into one classification model; the combination is based on weighted voting of the same type of classifier. AdaBoost (AB) is a widely used boosting algorithm, first proposed by Freund and Schapire. AB can be used with other classifiers, but if AB is applied to a complex classifier, the prediction performance on new data is greatly affected; that is, the ability to generalize is lost. Therefore, the AB algorithm works better when applied to a weak classifier. At each iteration m, the AB algorithm reassigns a new weight w_k^m to each feature vector x_k in the training set. The m-th weak classifier is then trained with the corresponding weights, and its classification performance is estimated via the error ε_m. This error is used to determine the weighted vote of the m-th weak classifier; hence, the smaller the error ε_m of a classifier, the greater its contribution to the final classification. At the end of each iteration, the weight of every misclassified sample is updated to w_k^{m+1}, and the weights of all samples are then normalized to maintain the original distribution. In this algorithm, the error ε_m of the m-th iteration is defined as the sum of the weights of the misclassified samples divided by the sum of the weights of all the samples in the current iteration.

Random Forest. Random forest (RF) is a combination of multiple decision tree classifiers, each of which depends on an independently sampled random vector. Every decision tree in a random forest has the same distribution. As the number of decision trees in the random forest increases, the error of the random forest's results gradually converges. This error depends on the strength of each independent decision tree in the forest and on the correlation between the trees.

English Speech Feature Recognition System Based on Deep Neural Network

When performing English speech recognition in a classroom or other relatively closed place, some of the sound waves emitted by the sound source are directly received by the microphone, while the rest are reflected and absorbed after reaching the indoor walls, ceiling, floor, and other obstacles [22]. The attenuation of the sound signal after reflection is relatively small. Because the various obstacles are made of different materials, the reflection coefficients differ; in addition, the strength of the sound energy received by each obstacle differs, so the signals received by the microphone have different amplitudes compared with the original signal, and the phase is different as well.
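Returning to the sample entropy steps above, the following sketch is a simplified, didactic O(N²) implementation of our own (not the paper's code); in practice the tolerance r is usually set relative to the standard deviation of the series.

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """SampEn(m, r, N) = -ln(C^{m+1}(r) / C^m(r)), Chebyshev distance."""
    n = len(x)

    def match_count(dim: int) -> int:
        # All vectors of length `dim`; count ordered pairs i != j with
        # d[X(i), X(j)] = max_k |x(i+k) - x(j+k)| <= r.
        vecs = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        count = 0
        for i in range(len(vecs)):
            d = np.max(np.abs(vecs - vecs[i]), axis=1)
            count += int(np.sum(d <= r)) - 1   # exclude the self-match
        return count

    b = match_count(m)       # matches at dimension m
    a = match_count(m + 1)   # matches at dimension m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

x = np.random.default_rng(3).normal(size=300)
print(sample_entropy(x, m=2, r=0.2 * x.std()))
```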
The minimum redundancy is Objective function addition integration: That is, Among them, X represents the complete set of feature x j , s represents the set of selected feature x i (size m), C represents the class, and I represents the mutual information. The definition of I is as follows: Among them, p(x), p(y), and p(x, y) are probability density functions. These three functions are estimated by a kernel density estimator based on adaptive diffusion. This paper uses support vector machine (SVM), Ada-Boost, and random forest three classifiers to classify English speech features. AdaBoost Method. In addition to SVM, this paper also uses the AdaBoost (AB) method. Boosting algorithm has a good classification effect. Boosting is an iterative algorithm whose purpose is to combine several classification models and integrate them into one classification model. This integration method is based on the weighted voting of the same classifier. AdaBoost (AB) is a widely used boosting algorithm, which was first proposed by Freund and Schapire. AB can be used with other classifiers, but if AB is applied to a complex classifier, the prediction performance of new data will be greatly affected; that is, the ability of promoting it will be lost. Therefore, when the weak classifier is applied to the AB algorithm, the effect will be better. After every m iterations, the AB algorithm reassigns a new weight w m k for each feature vector x k in the training set. Therefore, the m-th weak classifier will use the corresponding weights for training. Then, its classification performance is estimated with the error ε m . This error is used to determine the weighted voting result of the m-th weak classifier. Therefore, the smaller the error ε m in these classifiers, the greater the contribution to the final classification. At the end of the iteration, the weight of the misclassified sample will be updated to w m+1 k . Then, the weights of all samples will be standardized to maintain the original distribution. In this algorithm, the error ε m of the m-th iteration is defined as the sum of the weights of the misclassified samples divided by the sum of the weights of all the samples in the current iteration. Random Forest. Random forest (RF) is a combination of multiple decision tree classifiers, each of which depends on an independently sampled random vector. Every decision tree in a random forest has the same distribution. As the number of decision trees in the random forest increases, the error of the random forest generated results gradually converges. The error of the random forest generated results depends on the strength of each independent decision tree in the forest and the relationship between the trees. English Speech Feature Recognition System Based on Deep Neural Network When performing English speech recognition in a classroom or in a relatively closed place, some of the sound waves emitted by the sound source are directly received by the microphone, and the other part will be reflected and absorbed after reaching the indoor walls, ceiling, ground, and other obstacles [22]. The attenuation of the sound signal after reflection is relatively small. Due to the different materials of various obstacles, the reflection coefficient is also different. In addition, the strength of the sound energy received by the obstacle is different, the signals received by the microphone will have a large amplitude compared with the original signal, and the phase will be different. 
From the reverberation process shown in Figure 1, it can be seen that reverberation is different from irrelevant external interference signals such as noise. The reverberation signal originates from the sound source signal and is a regular interference signal [23]. According to research on the complex cepstrum of the speech signal, the positions of the complex cepstrum of the sound source signal and the room's impulse response are different when the reverberant speech signal is translated into the complex cepstrum domain. While the latter is concentrated at both ends, the former is mostly concentrated closer to the midway point [24]. The estimated value of the complex cepstrum of the original speech signal must therefore be obtained by designing a low-pass filter in the Mobile Information Systems complex cepstrum domain to filter the room impulse response, and this estimated value must then be transformed into the time domain to obtain the estimated value of the original speech signal. Figure 2 depicts the extensive cepstrum dereverberation procedure in this work. Designing a complex cepstrum domain filter is an important part of the process of speech signal dereverberation. The complex cepstrum domain filter is a low-pass filter in a broad sense. Moreover, its parameters determine the performance of dereverberation, including three parts, namely, the pass band, the transition band, and the stop band. Figure 3 shows the filter schematic diagram. Among them, L is the length of the filter, M is the cut-off point of the passband, h is the length of the transition band, and h(n) is the transition band function. When M is 1/16 of h and h is 1/8 of L, the best dereverberation evaluation index is obtained, and the dereverberation effect is the best. This paper downloads an English voice from the officially recognized voice library. The sampling frequency is 44100 Hz, and the length, width, and height of the room used in the experiment are 5m, 4m, and 3m, respectively. Moreover, this paper uses the mirror image method to simulate the room impulse response, and the room impulse response function is shown in Figure 4. The collected voice is convolved with the simulated impulse response function to obtain the reverberant voice, and the reverberant voice is framed and then a Hamming window is added. Among them, the frame length is 1024, and the frame shift is 1/4 of the frame length. As seen in Figure 5, this filter is a low-pass filter appropriate for the cepstrum domain. When the highest cut-off point for the filter is 1/256 of the frame length and the bandwidth of the transition band is 1/16 of the frame length, it is discovered that good evaluation results for the speech signal obtained after dereverberation may be obtained. According to the distance from the sound source to the microphone array, it is divided into a near-field model and a far-field model of the microphone array. When the signal source is far from the array, the wave path difference of the signal reaching each element is relatively small, and the signal can be treated as a plane wave model. The difference is that when the signal source is close to the microphone array, the signal reaches the array element in the microphone array with a larger amplitude difference. At this time, the waveform arriving at the array should be a spherical wave model. Figure 6 shows the near-field and far-field models of the microphone array. Sound source Near field Far field Figure 6: The near-field and far-field models of the microphone array. 
The overall implementation scheme of the FPGA-based microphone array signal processing system is shown in Figure 7. First, a microphone array is designed as the voice signal collection terminal; this paper uses 4 low-cost omnidirectional electret microphones as the elements of the microphone array to convert the voice signal into an analog signal output. Then, a signal acquisition system with signal acquisition and AD conversion functions is designed. The model in this paper is built on a deep neural network; the results of the deep neural network in this paper are shown in Figure 8.

Performance Verification of the English Speech Feature Recognition Model Based on a Deep Neural Network

This study uses deep neural networks to construct a model for English speech feature recognition. This model can perform English voice denoising using a neural network approach in order to accomplish the recognition of English speech features even in situations where there is classroom reverberation. As a result, in the system performance test this work first assesses the English speech denoising effect and then evaluates the English speech feature recognition effect. In order to determine the denoising effect, this study collects numerous sets of English speech data via the network and runs tests with the constructed system, as shown in Table 1 and Figure 9. From these results, it can be seen that the English speech feature recognition model based on the deep neural network constructed in this paper performs well. After that, this paper evaluates the English speech feature recognition effect of the constructed system; the results are shown in Table 2 and Figure 10. From these experimental results, it can be seen that the English speech feature recognition system constructed in this paper is effective.

Conclusion

This paper studies an English speech detection algorithm for non-stationary, strong-noise environments. Windowing the English speech signal makes speech signal processing easier, and different window functions have different effects. Linear predictive analysis includes the autocorrelation method and the covariance method; the covariance method is less reliable than the autocorrelation method, which is better suited to analysing English voice signals. In this study, filter bank summation and overlap-add are introduced for the short-term synthesis of English voice signals; after evaluating the complexity of the two methods, the overlap-add approach is chosen to handle the voice signal due to its simplicity. This work also conducts simulation research on the various properties of the reverberation signal and the pure speech signal in the complex cepstrum domain, examines the basic idea of complex cepstrum domain filtering, and builds a complex cepstrum domain filter. Finally, this paper constructs an English speech feature recognition model based on a deep neural network and verifies the reliability of the algorithm model through experimental research [25, 26].

Data Availability

The data used to support the findings of this study are included within the article.
Genetic ancestry is associated with asthma, and this could be modified by environmental factors. A systematic review

To the editor,

Certain ethnic populations have a higher prevalence of asthma, a phenomenon which can be independent of socio-economic status. 1 Ethnicity is a product of genetic, cultural and behavioural factors. Using identified alleles in reference to genetic repositories, 'genetic ancestry' is a way of quantifying an individual's genetic composition inherited from ancestors of a particular geographic origin. Kumar et al. demonstrated that increased African genetic ancestry is independently associated with lower lung function. 2 Genetic ancestry can lead to disease via epigenetic mechanisms. Evidence suggests DNA methylation can differ significantly between ethnic groups, induced both by genetic ancestry and by environmental factors such as tobacco smoke, air pollution and airborne pathogens. 3

We systematically reviewed the evidence on associations between specific genetic ancestries and asthma, and on environmental factors that could mediate or modify these associations. The study protocol was prospectively registered with PROSPERO (CRD42021222527). We searched MEDLINE, EMBASE and EBSCO Global Health for published studies that examined the association between genetic ancestry and asthma, to December 2022. We searched full-text peer-reviewed articles using a pre-specified search strategy (exp asthma/OR asthma*.tw.) AND (exp ethnic groups/ OR ethnic*.tw. OR racial*tw. OR race.tw) AND (ancestr*.tw.). To be included, manuscripts defined asthma by physician diagnosis, ≥2 episodes of childhood wheeze, or using validated questionnaires, and quantified genetic ancestry on a continuous scale. We excluded manuscripts in which asthma severity or phenotypes, rather than asthma risk, were the outcome. We included manuscripts that estimated the proportion of genetic ancestry using admixture mapping of participants' genome-wide data. Genetic ancestry can be determined by quantifying the number of previously identified alleles in reference populations that strongly correlate with ancestral origins of a specific geographic origin (e.g., Europe, Africa, etc.), known as ancestry informative markers (AIMs). Analysis of an individual's genetic data allows quantification of AIMs and, therefore, estimation of proportions of genetic ancestry. Screening, data extraction and assessment of bias were conducted.

Genetic ancestry was associated with asthma in high, rather than low, socio-economic strata. One possible explanation is that high socio-economic status may lead to increased hygiene and reduced levels of protective microbiologic exposure in early life. This has been described among the Hutterites, who have decreased childhood exposure to airborne animal endotoxins and higher levels of asthma compared to the genetically similar Amish population. 6 Our systematic review has identified certain populations vulnerable to asthma owing to a combination of genetic and socio-demographic factors likely to induce gene-environment interactions between genetic ancestry, country of residence, cultural affiliation and socio-economic status. These findings could help inform future case finding and public health interventions by identifying high-risk socio-demographic profiles, supported by genetic ancestry data.
In addition, further genetic analysis within such high-risk profiles could yield future genetic targets for personalized medicine. For instance, recent data among Hispanic Americans demonstrate that Native American genetic ancestry at chromosomal region 18q21 (upstream of the SMAD2 gene) is associated with excess asthma risk. 7 Given that SMAD2 has also been implicated in transforming growth factor beta (TGFβ) signal transduction in asthma, 8 it is possible that Hispanic Americans bearing this genotype may respond to therapies targeting TGFβ.

One major limitation of this review is the paucity of available genetic cohorts, the small sample sizes and the heterogeneity of findings. Rapid advances in genome-wide genotyping methods have improved the understanding of complex trait illnesses such as asthma. However, the majority of genetic cohort studies are from European-ancestry populations, representing a significant knowledge gap. 9 Empirically applying disease-predictive algorithms to other ethnic populations is not only inappropriate but has also been shown to give rise to spurious results. 9 Heterogeneity of effect estimates in our review could be explained by differences in the sampled sub-ethnic subpopulations and/or in the genetic reference measurements. For instance, Hispanic American subpopulations differed between studies, such as Puerto Rican versus Mexican American, who could experience different environmental exposures. Furthermore, genetic ancestry measurement was not standardized across studies, which could affect comparability; for example, African genetic ancestry was drawn from different reference populations, such as the Yoruba in West Africa (Nigeria) and the Luhya in East Africa (Kenya).

In summary, genetic ancestry is associated with asthma, and this could be modified by cultural affiliation, country of residence and socio-economics. Our findings identify socio-demographic profiles which, supported by genetic ancestry data, could be targeted for additional public health intervention and case finding. Further research to identify genetic targets for therapeutic intervention, catering for at-risk genetic profiles, may alleviate asthma inequality across ethnic populations.

Key messages
• While Native American genetic ancestry appears protective against asthma, African ancestry increases risk.
• There is indirect evidence that the latter is modified by self-identified ethnicity, socio-economics and country.
• Complex gene-environment interactions contribute to asthma in ethnic populations, highlighting the need for nuanced risk stratification.
Optimised accelerated solvent extraction of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) from polymer bonded explosives

An Accelerated Solvent Extraction (ASE) method was developed and optimised to extract hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) from a polyurethane matrix. The ASE method development was benchmarked against Soxhlet extraction with a view to improving extraction efficiency in terms of time and solvent volume. Key parameters for the ASE method development involved selecting the most appropriate solvent, optimising static time, ensuring a safe oven temperature for explosives, determining a sufficient number of rinse cycles, and effective sample preparation. To achieve optimal extraction, cutting the PBX samples to maximise solvent exposure was essential. The use of acetone with a static time of 10 minutes at 100°C with three rinse cycles achieved 97% ± 10% extraction of RDX from PBX in 40 minutes using 72 mL of solvent. Compared with the standard Soxhlet extraction, the extraction time was reduced from 48 hours to 40 minutes and the solvent use was halved. To validate the developed ASE method, two other PBX samples containing different quantities of explosive were also fully extracted using the same parameters. Overall, ASE efficiency was comparable to Soxhlet, which makes ASE a good alternative and shows potential for implementation as a standard method for other polymer-based explosives.

Introduction

Robust and reproducible extraction methods to determine the presence and residual concentration of Polymer Bonded Explosives (PBX) are important to inform risk decision making and land management at impacted sites. 1 PBXs typically consist of a nitramine high explosive combined with a polymer. The most commonly used nitramine explosives include 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) and octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX). 2 Currently, Soxhlet extraction is a commonly used method to extract nitramines from PBX; however, this can take up to 48 hours and use over 150 mL of solvent per 1 g extraction. 3,4 Accelerated Solvent Extraction (ASE) has been successfully used to extract chemical components of polymers and was initially developed to replace extraction methods such as Soxhlet, bath sonication, and shaking. 5,6 ASE uses solvent at up to 200°C and 1500 psi pressure in a closed system to increase extraction efficiency while minimising preparation time and solvent volume. 6 Under these conditions the solvent may reach temperatures above its boiling point, which makes it less viscous and increases the solvent's capacity to dissolve the analyte. In addition, the increased pressure forces solvent into the pores of the sample material, making the analyte more readily available for extraction. 7,8 The use of ASE to extract chemical components of polymers has not been widely reported in the literature, but the extraction of monomers, oligomers, and low-concentration chemical additives from polymers has been successful. 9-11 Additives pose a particularly challenging analytical problem, as they are present at very low concentrations, often less than 1% of the mass of the polymer. ASE has been shown to successfully extract very low concentrations (0.02 to 0.1%) of additives with a relative standard deviation of 20%, which is acceptable for such low concentrations. 12 To extract a chemical from a polymer, it must diffuse from the polymer to the surface and then transfer through the static solvent layer into the bulk solvent.
The rate of mass transfer from the polymer core to the bulk solvent depends on the structure and properties of the polymer and solutes, the extraction temperature and the type of solvent.10 Selecting an appropriate solvent is essential for ASE of polymers, as it must be able to dissolve the chemical of interest while leaving the polymer intact at high temperatures.9,13 It has been shown that the most effective way of increasing extraction efficiency is to increase temperature. If it is not possible to increase temperature, mass transfer must be increased by other means, such as reducing the size of polymer particles by grinding to reduce the distance from the core of the polymer to the surface.10 Achieving high temperatures and grinding the sample present a challenge when extracting high explosives due to safety concerns, as the temperature should be kept well below the explosive auto-ignition temperature and samples should not be ground.14 To date, there have been no studies regarding the extraction of explosives from solid explosive matrices, such as PBX, using ASE. ASE of explosive residues (RDX and HMX) from soil samples has been developed and successfully used to completely extract RDX, 2,4,6-trinitrotoluene (TNT) and associated degradation products from contaminated soils.15,16 This method has also been used to extract RDX from animal liver tissue to assess toxicity, demonstrating the flexibility of ASE.17 Another study comparing four different extraction methods (ASE, Soxhlet, microwave assisted extraction and supercritical fluid extraction) for HMX, RDX and TNT from soil showed that ASE was the most efficient, with 90% recovery. For all tested explosives, ASE extraction efficiency was comparable to or better than Soxhlet extraction and was reproducible with a maximum of 10% standard deviation.18 The aim of this paper was to develop and optimise a robust analytical method for the extraction of RDX from PBX. The optimised ASE method was successfully validated using two additional PBX compositions to demonstrate its broader applicability.

PBX composition

PBX samples were obtained from an industrial supplier as 100 g slabs (10 × 5 × 2 cm) as described in Table 1. The PBX samples contain small amounts of other additives for stability and performance, although this research focussed specifically on the extraction of the energetic component.

Optimisation of Accelerated Solvent Extraction Method for PBX-A

A slab of PBX-A was scored using a ceramic knife and cut into 1 g cubes (1 cm3). To decrease the sample volume and the depth of solvent penetration required, the 1 g samples were cut into smaller cuboids (approximately 0.5, 0.125, 0.065 and 0.008 cm3). Samples of PBX-A were placed into stainless steel solvent extraction cells (33 mL) filled with Ottawa sand, or inside a cellulose thimble with and without sand to reduce solvent volume. Cells were placed in the ASE, and an initial 14 minute method was programmed according to the Application Note for extraction of traditionally used explosives, such as TNT, from soil.22 Briefly, the standard mode was used with a system pressure of 1500 psi, an oven temperature of 100°C, an oven heating time of 5 minutes, a static time of 5 minutes, a rinse volume of 60% of the cell volume and one rinse cycle with a 200 s purge (Table 2). These conditions were used as the baseline for further optimisation. The PBX-A extraction method was systematically optimised by changing one parameter at a time, e.g.
solvent, static time, rinse cycles, sample volume and cell preparation, as summarised in Table 2. The oven temperature, pressure, flush volume and oven heating time were not optimised. (The parameters varied were the sample volume – 1.0, 0.5, 0.13, 0.07 and 0.008 cm3 – and the ASE cell preparation – Ottawa sand, thimble and sand, or thimble only.) Extracts (approximately 60 mL) were diluted by a factor of 100 using the extraction solvent, filtered (0.2 µm syringe filter) and analysed by HPLC immediately.

Accelerated Solvent Extraction of PBX-B and PBX-C

PBX-B (1 g) and PBX-C (1 g) were scored and cut into cuboids (0.008 cm3) using a ceramic knife on filter paper. The PBX-B and PBX-C pieces were then transferred to a thimble, which was placed in a stainless steel extraction cell (33 mL). The cells were loaded into the ASE and the PBX samples were extracted with acetone at 100°C and 1500 psi for three ten-minute rinse cycles with a 5 minute oven heating time, 200 second purge time and a rinse volume of 60%. Extractions were completed in triplicate. Sub-samples of the resulting extracts were diluted accordingly, filtered and analysed by HPLC.

Soxhlet Extraction

PBX-A (1 g) was cut into 2 mm3 pieces and placed in a glass thimble in a Soxhlet extractor.3 The RDX was extracted from PBX-A with acetone (150 mL) at 70°C for 48 hours. The resulting RDX extract was made up to 200 mL, diluted accordingly and analysed by HPLC.

Calibration solution preparation

Stock solutions (50 ppm) of RDX and HMX were prepared by dissolving accurately weighed standards in acetonitrile. Calibration standards, made daily by serial dilution of the stock solution in acetonitrile, ranged in concentration from 5 to 50 ppm.

Instrumental Analysis

HPLC analyses were performed on a Waters Alliance 2695 separation module coupled to a Waters 996 photodiode array detector (PDA). Chromatographic separations were carried out isocratically with a NovaPak C8 column (3.9 mm × 150 mm, 4 μm) from Waters maintained at 35°C. The mobile phase consisted of 50% ACN and 50% H2O with 0.1% formic acid at a flow rate of 1.5 mL min−1, and the injection volume was 10 µL. The RDX peak was identified by comparing its retention time and absorption spectrum in the samples with those of the standard solution. RDX was monitored at 235 nm. The HPLC method was validated by assessing: (i) specificity (analysis of a solution of PBX); (ii) linearity (the correlation coefficient for each standard from linear regression analysis in the concentration range of 0.05 to 50 µg/mL); (iii) Limit of Detection (LOD) and Limit of Quantification (LOQ) (from the residual standard deviation of the responses and the slope of the regression equation of the calibration curve (root mean square error approach)); and (iv) precision (the relative standard deviation of ten injections of each compound at two concentrations). The results of the method validation are displayed in Table 3.

ASE method development and optimisation

The ASE method was based on a Dionex Application Note designed to extract traditional explosives from soil, using methanol or acetone (Table 1).22 However, extraction from polymers is more difficult, as the matrix must first swell to allow the solvent access to the RDX. Therefore, test solvents were chosen based on their use in the Application Note (i.e. methanol and acetone) and their ability to dissolve RDX and to swell the polymer. RDX is most soluble in acetone and acetonitrile, especially at elevated temperatures.23 In addition, acetone promotes swelling of the polymer.24
Methanol was also tested as it was used in the Application Note. RDX solubility in the chosen solvents is summarised in Table 3. PBX-A samples were extracted with the three solvents for five minutes (static time) at an oven temperature of 100°C and 1500 psi. The percentage recovery of RDX from 1 g of PBX-A was low for all tested solvents. Acetone was the most efficient, with a recovery of 11%. Of the three solvents, RDX is the most soluble in acetone, and it is the preferred extraction solvent for Soxhlet,3,4 which is often a good indication of solvents that may be successful for ASE.25 Recovery of RDX using acetonitrile was 6%. Methanol was the least efficient, achieving only 2% recovery of RDX, probably due to a combination of poor RDX solubility and limited swelling effect (Table 4). Therefore, acetone was selected as the preferred solvent for the extraction of RDX from PBX-A and was used in all subsequent extractions. Following solvent selection, the recommended next step in the optimisation process is to vary the oven temperature.26 However, ASE has not previously been used to extract explosive formulations, and high temperatures may increase the likelihood of thermal decomposition.15 Therefore, the temperature was held constant at 100°C to ensure that it remained below the auto-ignition temperatures of RDX (197°C) and PBX-A (206°C).27,28 As the temperature could not be altered, the next stage was to optimise the static time. Increasing the static time exposes the sample to solvent for longer, allowing more time for the polymer to swell and the RDX to dissolve. The static time was increased progressively from five to thirty minutes. When the static time was increased from five to ten minutes, RDX recovery improved from 11% to 32%. Increasing the static time from ten to thirty minutes had no further effect on the extraction efficiency of RDX, as it is likely that the system had reached equilibrium between extracted RDX in the solvent and RDX remaining in the polymer (Figure 1). Therefore, the optimal static time used in all following experiments was ten minutes. In order to shift the equilibrium, the ASE method was further optimised to include rinse cycles during the extraction. This allows fresh solvent to be added, which helps to maintain a favourable extraction equilibrium and encourages further dissolution of the sample.26 Up to this point, each extraction had only one rinse cycle as per the Application Note method (Table 1), which meant that at the end of the ten minute static time the sample was rinsed with 60% of the cell's volume of solvent, e.g. a 20 mL rinse for a 33 mL cell. The ASE enables the user to choose the number of rinse cycles for each extraction, which splits the volume of rinse solvent between the cycles. The PBX-A extraction was split into three ten-minute rinse cycles, distributing an equal proportion of the solvent between each cycle, which increased the percentage of RDX recovered by 16% to a total of 48% (Figure 2). It was clear that exposing the sample to fresh solvent increased extraction efficiency, although recovery did not reach 100%, suggesting that the solvent could not ingress further into the polymer. However, due to the explosive nature of the sample, further cutting to increase surface area was avoided at this stage. Therefore, the volume of solvent introduced to the cell was increased by reducing by half the amount of Ottawa sand in the cell surrounding the sample.
The PBX-A sample was placed in a porous cellulose thimble, which leaves more space in the cell for solvent. The thimble was used with and without sand to achieve optimum solvent efficiency. As expected, increasing the volume of solvent in the cell by removing the sand improved the extraction efficiency to 58%. Placing the sample in a thimble with sand resulted in a similar percentage recovery to the samples placed in a cell with sand only (48% and 50%) (Figure 3). The improvement in extraction efficiency from removing the Ottawa sand was minimal and may be overcome by further optimisation to minimise solvent use. However, it may be necessary to compromise on solvent volume for safety when extracting explosive formulations, as the coarse sand may pose a friction hazard for some formulations. For all further extractions, samples were placed in a cellulose thimble. Having followed the recommended ASE method development and improved extraction to 58% by optimising method parameters, the final step was to optimise the sample preparation. It is usually recommended to grind samples to increase the surface area and availability of the material to be extracted; however, this would not be suitable for explosive formulations. Therefore, the PBX-A sample was cut into smaller uniform pieces to determine the optimal volume. The smallest size was limited by practicality: cutting the pieces smaller than 0.008 cm3 was difficult and resulted in RDX particles escaping the matrix. The cut PBX-A was transferred to a cellulose thimble. Increasing the surface area resulted in a significant increase in extraction efficiency. The smallest pieces (0.008 cm3) resulted in complete extraction, achieving 100% recovery of RDX. To confirm that 100% recovery was achieved, the extraction was repeated twice. Repeating the extraction enabled 100% recovery of RDX from the slightly larger 0.07 cm3 pieces as well (Figure 4). Overall, increasing the surface area achieved the greatest percentage improvement in extraction efficiency, although this was the least preferred optimisation route due to the necessity of cutting the explosive. Once the conditions to achieve 100% extraction had been determined, the optimised method was repeated on six samples of PBX-A to determine reproducibility. The extraction efficiency averaged 97% with a standard deviation of 13%, which is similar to the standard deviation of other ASE methods found in the literature.15,17 Whilst this is often acceptable in the literature for other methods, it should be possible to improve the precision by continued optimisation. For example, RDX mass may have been lost during cutting of PBX-A, although care was taken to ensure all material was accounted for. As a control measure, at least three replicates of each sample should be extracted to overcome any reproducibility issues. The optimised ASE method for the extraction of explosive from PBX used acetone, a ten minute static time, three rinse cycles, a 60% rinse volume, 100°C and 1500 psi with a 200 second purge, as summarised in Table 5. This method achieved 100% extraction of RDX from PBX in forty minutes and used approximately 60 mL of acetone.

Soxhlet extraction of RDX from PBX

To ensure the novel ASE extraction method was comparable to current best practice, PBX-A was extracted by Soxhlet, which is an established method in the literature.3,4 The PBX-A sample was acquired from an industrial source, and the accompanying data sheet stated that it contained 64% RDX; a worked example of the recovery calculation based on this nominal content is sketched below.
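Only the 64% nominal RDX content and the 100-fold dilution in the sketch come from the text; the extract volume and the HPLC reading are hypothetical illustrative values.

```python
# Worked example of the percentage-recovery arithmetic. Only the 64% nominal
# RDX content and the 100-fold dilution come from the text; the extract
# volume and the HPLC reading below are hypothetical illustrative numbers.

sample_mass_g   = 1.00     # mass of the cut PBX-A sub-sample
nominal_frac    = 0.64     # RDX fraction from the supplier data sheet
extract_vol_mL  = 60.0     # assumed total extract volume
dilution_factor = 100.0    # extracts were diluted by a factor of 100
hplc_conc_ugml  = 103.5    # hypothetical measured concentration (ug/mL)

rdx_mass_g = hplc_conc_ugml * dilution_factor * extract_vol_mL * 1e-6
recovery   = 100.0 * rdx_mass_g / (sample_mass_g * nominal_frac)
print(f"RDX extracted: {rdx_mass_g:.3f} g -> recovery {recovery:.0f}%")
```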
Sub-samples for extraction were cut from the slab and were assumed to contain 0.64 g of RDX per 1 g of PBX-A. The calculated mass of RDX extracted was based on the actual mass of the cut sample. The Soxhlet extraction of PBX-A took forty-eight hours and 150 mL of acetone, and achieved an average of 90% ± 0.5% recovery of the RDX (2 replicates). The two methods are compared in Table 6. During the extraction, RDX precipitated around the edge of the round-bottomed flask as solvent evaporated, which created an explosive hazard and therefore required close monitoring. The ASE method was more efficient, achieving an average of 97% RDX extraction compared to 90% using Soxhlet. Even though the improvement in extraction efficiency appears small, the reduction in time was significant, with the ASE taking only forty minutes in total compared to forty-eight hours for Soxhlet. Also, the ASE used only 60 mL of solvent, compared to at least 150 mL for Soxhlet. The ASE also provides broader benefits in addition to resource efficiency. For example, the ASE is a closed system where solvent is automatically dispensed to the sample, whereas Soxhlet is more manual and requires frequent solvent refills. This causes a potential explosive hazard when solvent evaporates, causing solid explosive to precipitate in the round-bottomed flask adjacent to the heating mantle. Overall, the ASE significantly reduces extraction time, as it can automatically run up to twenty-four samples. It also requires less intensive monitoring compared to Soxhlet, which enables faster sample throughput.

Validation of optimised method

The optimised method was applied to two additional PBXs to determine the applicability of the method. PBX-B contained 87% HMX; PBX-C contained 20% RDX, 43% ammonium perchlorate and 25% aluminium. The method enabled the successful extraction of HMX from PBX-B, with an average of 99% recovered in a single extraction (Figure 5). Recovery of RDX from PBX-C was not as efficient, with an average of 92% recovered after two extractions; however, the consistency of PBX-C made it difficult to cut, and the pieces used were slightly larger than the optimal 0.008 cm3, possibly contributing to the slightly lower recovery (Figure 5). In addition, PBX-C is denser than the other two PBXs, which may make it more difficult for solvent to ingress into the polymer to aid explosive extraction.

Conclusion

The first reproducible method for the complete extraction of RDX/HMX from PBX using ASE was successfully optimised and validated. The method development aimed to follow the recommended optimisation procedures; however, the extraction of explosives was more complex due to additional safety concerns, e.g. auto-ignition and friction hazards. It was considered unsafe to optimise the temperature beyond 100°C, and placing the samples in a cellulose thimble reduced the friction hazard whilst maintaining extraction efficiency. It was not considered ideal to cut the PBX smaller than necessary for the extraction, so this optimisation step was consciously left as the last option. However, it was necessary to carefully cut the PBX to achieve full extraction. The optimised extraction method for RDX/HMX from PBX using ASE was 100°C, 1500 psi, a 10 minute static cycle, three rinse cycles in acetone and a 200 second purge, achieving an average of 97% extraction. The samples were prepared by cutting to increase surface area and placing in cellulose thimbles.
The total extraction time was 40 minutes and used 60 mL of solvent, a significant reduction compared to Soxhlet extraction, which takes 48 hours and uses over 150 mL of solvent. It may be possible to reduce the extraction time further by reducing the static time between each rinse cycle. The success of this method for three different PBXs suggests that it would be applicable to other polymer bonded explosives. In addition, further work should include extraction of all components of the PBX, and it may be possible to develop a method with consecutive extractions to separate the different components.
2019-04-10T13:12:18.321Z
2018-10-16T00:00:00.000
{ "year": 2018, "sha1": "28f5c4c7601c81c9cf1927741956fb6faff863c5", "oa_license": "CCBYNC", "oa_url": "http://dspace.lib.cranfield.ac.uk/bitstream/1826/13565/4/Optimised_accelerated_solvent_extraction-2018.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "c966c11951d3f973e3ad9d3d2c21cc08e9619812", "s2fieldsofstudy": [ "Chemistry", "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Chemistry" ] }
4632692
pes2o/s2orc
v3-fos-license
STUDY ON BOW WAVE BREAKING AROUND ULTRA LARGE BLOCK COEFFICIENT SHIP

As the volume of maritime transportation increases day by day, it is necessary to design a ship's hull having a large carrying capacity with low resistance. In the case of slow-moving ships, wave breaking usually occurs in front of the bow. For the Ultra Large Block coefficient Ship (ULBS) suggested by the authors, a considerable portion of the resistance arises from energy dissipation in wave breaking. The key objective of this research work is to investigate the relationship between bow wave breaking and the free surface disturbance function, which may be used as a parameter for numerical prediction of bow wave breaking. In this regard, experiments and numerical calculations have been carried out for six models of ULBS. From the results, it can be concluded that the wave breaking area in front of the bow increases with the increase of the surface integral of the square of the free surface disturbance function, the Froude number and the block coefficient.

Introduction

Strategy in the world economy has changed significantly, as the world's business is moving towards more globalization than ever before. As a result, it has become indispensable to improve maritime transportation efficiency with a higher carrying capacity. One of the possible ways to improve transportation efficiency is to increase the power efficiency of large ocean-going vessels. Improved power efficiency demands that a ship hull form with a large block coefficient be optimized from the hydrodynamic point of view, i.e., with low wave making resistance as well as low wave breaking resistance. At low speed, wave breaking resistance is the most important component of wave resistance, which occurs in front of the bow in the case of large ocean-going vessels. Ship types like oil carriers and bulk carriers, having a full hull form, produce short waves with unstable crests in the bow region at low speed. With the decrease of ship draft, those short waves are gradually transformed into breaking waves. From wave and wake measurements of tanker models, Baba (1969) found that the resistance component due to wave breaking in front of the bow occupies a considerable portion of the total ship resistance in the ballast loading condition. From the hydrodynamic point of view, the wave resistance of a body near the free surface can be split into two components: the former related to the waves radiated far behind the body, the latter associated with the wave energy dissipated by wave breaking. To understand the wave breaking phenomena in front of the bow for a full hull form, Baba (1969) showed that the increase of wave breaking resistance is due to the expenditure of energy in generating turbulence through the breakdown of waves at the bow of ships. Baba (1975) also showed that the effective horse power due to wave breaking is about 25% of the total effective horse power at design speed (19 knots) for a model with a normal bow, while for a model with a protruded bow this component is reduced to 10% of the total effective horse power. Baba (1975, 1976) showed from analytical calculations of a semi-submerged ellipsoid that steeper waves give a higher peak value of the Free Surface Disturbance (FSD) function. It is considered that the wave breaking phenomena will be suppressed by reducing the values of the FSD function in front of the bow. A protruding bow works by cancelling the FSD function values induced by the main body in front of the bow, i.e.
the protruding bow is effective in reducing the steepness of the local bow wave. The objective of the present study is to establish the correlation between wave breaking and the FSD function. Compared to the slow-ship method, the FSD function can be used as a key parameter for prediction of bow wave breaking because of its capability to describe the slope and velocity of the wave at a point on the free surface. The FSD function is calculated by using the Hess & Smith method according to Baba's low-speed theory. It is mentioned here that both Baba's theory and the Rankine source method are based on the low-speed assumption, and their basic double-model flow can be obtained by using the Hess & Smith method. In the present study, wave elevations and wave making resistance coefficients are obtained by using the Rankine source method. The flow diagram of the present numerical calculation is presented in Fig. 1. In the present study, to understand the wave breaking phenomena, i.e. the wave breaking area on the free surface in front of the bow, experiments have been carried out for six ULBS models of full hull form (Cb ≳ 0.95).

Baba's Low Speed Theory and Free Surface Disturbance (FSD) Function

The following derivation of the free surface disturbance (FSD) function is cited from Baba (1976). Taking the rectangular coordinate system fixed on the body with the origin on the still water plane, the x-axis is set along the direction of the uniform flow U and the z-axis directs upwards, as shown in Fig. 2. Assuming the ship is floating on an inviscid, irrotational, incompressible fluid, the velocity potential for the free surface problem is expressed as a sum of two parts:

Φ = φ₀ + φ₁, (1)

where φ₀ is the potential for the rigid-wall (double-model) problem and φ₁ is the correction chosen so that the sum satisfies the free surface conditions. According to Ogilvie (1968), the wave height is likewise assumed as the sum of two parts, i.e.

ζ = ζ₀ + ζ₁, (2)

where ζ₀(x, y) is the wave height due to the double-body potential. The boundary value problem for the present study consists of the Laplace equation, the hull boundary condition (with n the normal vector on the body surface) and the free surface conditions, Equations (5) and (6). By the substitution of Equations (1) and (2) into Equations (5) and (6), the free surface conditions are rewritten in terms of φ₀, φ₁, ζ₀ and ζ₁. Based on Ogilvie's [8] assumptions, Taylor expansions at z = 0 are derived for these quantities; substituting the expansions into Equations (8) and (9), taking the lowest order terms and neglecting the terms of O(U⁴), we finally obtain the low-speed free surface condition for φ₁, whose inhomogeneous term D(x, y) is the Free Surface Disturbance (FSD) function.

Rankine Source Method

The origin of the coordinate system is taken at the center of the hull on the free surface, where the x-axis is considered positive in the direction of the uniform fluid velocity U, the y-axis in the direction of starboard and the z-axis in the upward direction, as shown in Fig. 2. In the Rankine source method, the fluid is considered inviscid and irrotational. The total velocity potential on the free surface, Φ, is the sum of the velocity potential due to the double-model flow, φ₀, and the perturbation velocity potential representing the effect of the free surface, φ₁: Φ = φ₀ + φ₁. Here, the velocity potential φ₀ due to the double-model flow can be represented as follows, with the source density σ₀ distributed on the body surface S₀:

φ₀(P) = Ux + ∬_{S₀} σ₀(Q)/r(P, Q) dS,
where r(P, Q) is the distance between the field point P and the source point Q. In addition, the velocity potential φ₁, which represents the effect of the free surface, can be expressed by a source distribution of density σ₁ on the undisturbed free surface S_F:

φ₁(P) = ∬_{S_F} σ₁(Q)/r(P, Q) dS.

The boundary condition on the hull surface requires that the normal velocity on the hull be zero. With n the outward normal on the hull surface, the hull surface boundary condition is ∂Φ/∂n = 0 on S₀. The free surface boundary condition is the double-model linearized free surface condition of Dawson (1977), applied on z = 0 along the streamlines of the double-model flow. The wave elevation and the pressure around the hull can be determined from Bernoulli's equation by neglecting the higher-order terms of φ₀ and φ₁, and the wave profile is then given by

ζ = (1/2g)(U² − ∇Φ·∇Φ) on z = 0.

Model Tests of ULBS

A schematic plan of the ULBS suggested at Yokohama National University is shown in Fig. 3; the schematic indicates the merits of the flat bottom, a WJ or duct type propeller, a flow guided channel and a tab type rudder. For the practical goal of ULBS, various new ideas should be introduced to reduce fluid resistance and to improve propulsive performance. In the present paper, as one of the investigations for ULBS, fundamental studies on bow wave breaking are discussed. For the study of the wave breaking phenomena and the FSD function, six ULBS models of different block coefficients (Cb ≳ 0.95) are considered for the experiments, which have been carried out by the authors. Table 1 shows the principal particulars of the ULBS models; Table 2 presents the test cases.

Formulations for Model Ship

The ULBS models in this study are formulated through the midship section coefficient, the length of the elliptic section for the model ship (obtained from Equation (32)), the maximum half breadth in the elliptic section of the hull, the half breadth at midship, and the half breadth at the elliptic section; the symbols used in these formulations are given in Figs. 4 and 5. The shape of the bow and the lines plan of the tested ULBS models are shown in Figs. 6 and 7, respectively. For the numerical calculation, the hull is defined by using quadrilateral panels. The hull surface consists of 3402 panels and 3608 points. The free surface consists of 4800 quadrilateral panels having an elliptical boundary. As an example, the panel distribution on the hull and free surface for a ULBS model is shown in Fig. 8.

Experimental Visualization Method

The ship model is fixed on the free surface in the testing part of the circulating water channel during experiments. Usually capillary waves are observed in front of the model, as the surface tension effect becomes relatively greater for small-scale free surface phenomena. In order to reduce the surface tension effect, a water solution of a surface activator (surfactant) is sprayed on the free surface upstream of the model. In experiments, the surface activator is very convenient for changing the surface tension on the free surface. According to the experimental method suggested by Suzuki et al. (2008), the wave breaking area in front of the bow, S_WB, is visualized by using a flat plate with longitudinal white and black stripes placed on the bottom of the circulating water channel. Electric lamps are used over the free surface for lighting. Therefore, wave patterns can be recorded easily by a digital camera placed above the free surface. With this experimental technique, it is possible to visualize the wave breaking clearly. Fig. 9 shows the wave breaking area of a ULBS model. The effects of the surfactant are described in Appendix A1.
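To make the panel-method machinery concrete, the following minimal sketch applies the Hess & Smith idea in two dimensions to a unit circle in a uniform stream; the actual computation in this study is three-dimensional with quadrilateral panels and 1/r sources, so this is an illustration of the principle only.

```python
import numpy as np

# 2D illustration of the Hess & Smith source-panel idea: sources of unknown
# density on the body surface, strengths fixed by zero normal velocity there.
# The "hull" is a unit circle in a uniform stream U.

N, U = 80, 1.0
th = (np.arange(N) + 0.5) * 2*np.pi/N        # panel midpoint angles
xm, ym = np.cos(th), np.sin(th)              # collocation points
nx, ny = np.cos(th), np.sin(th)              # outward normals of a circle
ds = 2*np.pi/N                               # panel arc length

A = np.zeros((N, N))                         # normal-velocity influence matrix
for i in range(N):
    dx, dy = xm[i] - xm, ym[i] - ym
    r2 = dx*dx + dy*dy
    with np.errstate(divide="ignore", invalid="ignore"):
        A[i] = ds * (dx*nx[i] + dy*ny[i]) / (2*np.pi*r2)
    A[i, i] = 0.5                            # analytic self-influence of a source panel

sigma = np.linalg.solve(A, -U*nx)            # total normal velocity = 0 on the body
# The analytic density for a circle is sigma = -2 U cos(theta); the
# discretization error shrinks as N grows.
print("max |sigma + 2U cos(theta)| =", np.abs(sigma + 2*U*np.cos(th)).max())
```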
Results and Discussion

Baba (1969) showed that the wave breaking resistance component can be separated by using wake survey analysis behind a ship model. This is because the wake distributions are influenced by the head loss due to the wave breaking in front of the bow. However, a wake survey cannot easily be applied in practice. Therefore, in the present study, the wave breaking area is used as the experimental parameter instead of the wave breaking resistance coefficient based on wake survey analysis. One half of the computational domain is used for the numerical treatment, since the hull surface of the ULBS is symmetrical about the xz-plane. The free surface panel distribution extends from 1.5L upstream to 2.5L downstream (L = ship length) with an elliptical boundary. In the present study, ship models with two different drafts are used. The Froude number (F_nd) is defined based on the draft of the model (d) to normalize the effect of the draft in the numerical calculation. Since wave making resistance is related to the square of the wave amplitude, the parameter I_ζ² is introduced as the surface integral of the square of the wave elevation over the free surface. For wave breaking, the parameter I_D² is introduced as the surface integral of the square of the FSD function in front of the bow, since the FSD function can be used as a measure of wave breaking inception according to Baba's consideration (Baba, 1975). In future work, both parameters are expected to serve as objective functions in ULBS bow form optimization problems. To obtain the value of the numerical parameter I_D², the FSD function is calculated using the mathematical procedures described in Akima (1978, 1984) and Hess & Smith (1964).

Effect of Draft and Block Coefficient on Free Surface Wave and FSD Function

The wave heights are calculated using the Rankine source method for the six ULBS models. The wave height increases with the increase of the draft of the model, as shown in Figs. 10(a) and 10(b). The wave height also increases with the increase of the block coefficient of ULBS models having the same draft, as shown in Figs. 10(b) and 10(c). The increase in the angle of entrance leads to the increase of wave height. Figs. 11(a) to 11(c) show the FSD function calculated on the free surface using Baba's low speed theory. The FSD function also increases with the increase of draft and block coefficient of the ULBS models.

Effect of Block Coefficient on Wave Resistance Coefficient

The effect of the block coefficient Cb on the wave resistance coefficient Cw is shown in Fig. 12. It is seen from Fig. 12 that at low speed the block coefficient Cb has little effect on the wave resistance coefficient Cw, whereas at high speed Cb has a significant influence on Cw. This may be due to the blunt bow form of the ULBS models. In Fig. 16, the wave breaking surface area S_WB is plotted against the wave resistance coefficient Cw. From Fig. 16, it is seen that for a low wave resistance coefficient the wave breaking surface area increases remarkably with the increase of the wave resistance coefficient. However, at the higher range of the wave resistance coefficient, the rate of change in the wave breaking surface area S_WB gradually decreases.

Correlations Between Experimental and Numerical Parameters

For the six ULBS models having different block coefficients, the wave breaking surface area S_WB is plotted against I_ζ² in Fig. 17. From Fig. 17, it is noticed that with the increase of I_ζ² and block coefficient, the wave breaking surface area in front of the bow, S_WB, increases.
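A minimal sketch of how the two integral parameters can be evaluated numerically is given below; the ζ and D fields are synthetic stand-ins for the computed Rankine-source and Baba-theory fields, and the integration region is an assumed rectangular patch ahead of the bow.

```python
import numpy as np

trapz = getattr(np, "trapezoid", None) or np.trapz   # NumPy 2.x / 1.x name

# Synthetic stand-in fields on a rectangular free-surface patch ahead of the
# bow; in the actual calculation zeta comes from the Rankine source method
# and D from Baba's low-speed theory on the free-surface panel grid.
x = np.linspace(0.0, 1.5, 151)               # distance ahead of the bow (in L)
y = np.linspace(-0.5, 0.5, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
zeta = 0.02*np.exp(-5*X)*np.cos(8*np.pi*X)*np.exp(-10*Y**2)
D    = 0.05*np.exp(-8*X)*np.exp(-12*Y**2)

def surface_integral_sq(f, x, y):
    """Trapezoid-rule evaluation of the surface integral of f^2 over the patch."""
    return trapz(trapz(f**2, y, axis=1), x)

print("I_zeta^2 =", surface_integral_sq(zeta, x, y))
print("I_D^2    =", surface_integral_sq(D, x, y))
```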
Conclusions

To measure the wave breaking area S_WB on the free surface in front of the bow, experiments have been carried out by the authors for six ULBS models of full hull form (Cb ≳ 0.95). The numerical calculations are carried out using the Rankine source method together with the Hess & Smith method to establish the relationship between wave breaking and the FSD function. The following conclusions can be drawn from the comparison of the experimental and numerical results:

1. With the increase of F_nd, the intensity of the FSD function increases.
2. With the increase of Cw, the rate of change in S_WB gradually decreases.
3. With the increase of I_ζ², I_D² and block coefficient, the rate of change in S_WB increases.

Based on the present study, the FSD function can be used as a parameter to predict the wave breaking area and the wave making resistance. A survey of current research indicates that the relationship established between wave breaking and the FSD function for ULBS in the present study is the first of this kind. Since a basic hull form without any appendages such as a bulbous bow, skeg or bilge keel is considered for the present numerical calculation, the following future works need to be carried out:

1. Interpretation of FSD functions on the entire free surface.
2. Numerical calculation for a hull with a bulbous bow and other appendages.

Figure and table captions: Fig. 9: Definition of the wave breaking surface area S_WB in front of the bow for the model having Cb = 0.974 at F_nd = 0.50. Fig. 13: Relation between F_nd and I_ζ² (the values of I_ζ² and I_D² increase with the increase of F_nd). Fig. 15: Relation between F_nd and I_D². Fig. 18: Relationship between I_D² and S_WB for the six ULBS models. Table 1: Principal particulars of the model ships.
2018-04-06T18:53:23.372Z
2013-12-24T00:00:00.000
{ "year": 2013, "sha1": "ceffaed59b96e3efdca0530f58ef74c8532611b3", "oa_license": "CCBYNC", "oa_url": "https://www.banglajol.info/index.php/JNAME/article/download/16104/12239", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ceffaed59b96e3efdca0530f58ef74c8532611b3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Engineering" ] }
197468259
pes2o/s2orc
v3-fos-license
The Optics and Optimal Control Theory Interpretation of the Parametric Resonance

The aim of the article is the elaboration of parametric resonance theory at piecewise constant frequency modulation. The investigation is based on the analogy with optics and on the application of optimal control theory (OCT). Exact expressions for the oscillation frequency and the gain/damping coefficients, and the dependencies of these coefficients on the modulation depth, duty ratio and initial phase, are derived. First, the results obtained on the basis of the energy behavior analysis (with the conjugation conditions enforced) in frictionless systems are presented. The well-known parametric resonance triggering condition is revised and adjusted. The heuristic introduction of feedback (based on the energy behavior analysis) into the oscillation equation permits one to prove that frequency modulation satisfying the parametric resonance condition is not a necessary and sufficient condition for the unlimited increase of the oscillations. Their damping/shaking up corresponds formally, by frequency and duty ratio, to the condition of equality of the optical paths to the quarter-wavelength characteristic of an interference filter or mirror. The unity of space-time coordinates shows itself in this specific form of the optical-mechanical analogy owing to the common Hill's equation description. It is noted that the theory of this equation underlies most advantages of metamaterials, because all transport phenomena imply the propagation of waves of one kind or another – electromagnetic, acoustic, spin, etc. The question of control uniqueness arises, that is, the uniqueness of the modulating frequency, duty ratio and signature sign. Another question, of the characteristic index extremum at different controls, is tightly bound with the former. The answers to these questions are obtained on the basis of OCT. The similarity of the optimal control problem solution and the one obtained at the heuristic feedback introduction through the product of the fundamental solutions permits one to introduce a new form, named the general or mixed Hamiltonian, along with the ordinary and OCT Hamiltonians. Besides, the equality of this mixed Hamiltonian to zero, together with the constancy of the Wronskian, gives (almost everywhere) a useful equation analogous in form to Liouville's theorem. Accounting for nonlinearity using the OCT formalism is described too.

Introduction

The parametric resonance theory of single degree-of-freedom oscillating systems is based on the analysis of Hill's equation [1]:

ẍ = −ω²(1 + w·a(t))·x, w ≪ 1, a(t + T) = a(t). (1)

The main conclusions of this analysis deal only with the dependence of the solution on the modulating function's frequency (or wave vector for spatial oscillations) but do not determine the form of this function. For example, the condition of equality of the modulation frequency to the doubled eigenfrequency (analogous to the Bragg condition in optics) is not supplemented by any other characteristics of the function, such as duty ratio, modulation depth (rate), etc. Moreover, even the parametric resonance condition in the form of equality of the modulating function's frequency to the doubled – "Bragg" – frequency is not always expressed explicitly (see, for instance, [2]).
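As a direct numerical check of this classical statement, the sketch below integrates Equation (1) exactly through each constant-frequency interval for a square-wave a(t) at the "Bragg" modulation frequency 2ω; the initial conditions and the 50/50 duty cycle are illustrative choices.

```python
import numpy as np

# Piecewise-constant (square-wave) a(t) in Equation (1), modulated at the
# "Bragg" frequency 2*omega with a 50/50 duty cycle. Each constant-frequency
# interval is integrated exactly, so the only approximation is the model.

w, omega = 0.1, 2*np.pi
T_mod = np.pi/omega                     # modulation period = half eigenperiod

def propagate(y, v, om, t):
    """Exact solution of y'' = -om^2 * y over a time interval t."""
    c, s = np.cos(om*t), np.sin(om*t)
    return c*y + s*v/om, -om*s*y + c*v

y, v = 1.0, 0.3                         # generic (illustrative) initial data
amps = []
for k in range(200):                    # 100 modulation periods
    om = omega*np.sqrt(1 + w*(1 if k % 2 == 0 else -1))
    y, v = propagate(y, v, om, T_mod/2)
    amps.append(np.hypot(y, v/omega))

# Positive characteristic index -> exponential growth (parametric resonance).
alpha = np.log(amps[-1]/amps[99]) / (100*T_mod/2)
print("numerical characteristic index:", alpha)
```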
On the assumption of modulation depth smallness (w ≪ 1), the condition of parametric resonance onset is obtained for the case when the deviation ε of the modulating frequency from the doubled fundamental frequency ω₀ satisfies [3]:

−wω₀/2 < ε < wω₀/2. (2)

The relation (2) is derived, though, under the initial assumption that "…parametric resonance occurs if the modulating frequency ω(t) is near the doubled frequency ω₀" ([3], p. 83). That is why the obtained result cannot serve as a strict proof of parametric resonance onset occurring just at the doubled modulating frequency and, moreover, does not give any clear physical interpretation of the effect. Another, more rigorous parametric resonance analysis, though likewise executed on the assumption of modulation depth smallness, is presented in the study [4]. It is based on the geometric interpretation of the properties of equation (1) – a self-adjoint differential form. Here the constancy of the Wronskian W(x) and its equality to unity is interpreted as the equality to unity of the determinant of the matrix A (a linear self-mapping of the plane) – a mapping over the period conserving the area:

A = (a₁₁ a₁₂; a₂₁ a₂₂), det A = 1. (3)

This implies that the set of unstable systems may really approach the axis ω (on the plane of the parameters w, ω) only at the points ω = k/2, k = 1, 2, …, where the trace is equal to 2. The instability, or parametric resonance, is interpreted as the unlimited rise of the oscillation amplitude and speed (of course, in the linear approximation and in the absence of friction) as the argument goes to infinity. It follows from the fact that, in the general case, at tr A > 2 the characteristic equation has one root greater than 1 in modulus (the second, correspondingly, smaller), that is, the characteristic index is positive, so that one of the fundamental solutions grows exponentially. Thus, the parametric resonance theory (as presented in textbooks) is reduced in fact to the determination of its onset condition at small modulation depth, without a thorough explanation. Meanwhile, calculations of the true oscillation frequency (differing from the eigenfrequency), of the characteristic indexes (gain/damping coefficients), and of their dependence on the modulation depth and duty factor (the control, in OCT terms [5]) are absent even for the simplest case of piecewise constant frequency modulation. (Although this case is never realized, for a fundamental reason: the instantaneous frequency of a non-stationary process is the derivative of the phase of the corresponding analytic signal and is continuously differentiable [6].) The exact description of the phenomenon needs an energy balance analysis. It is well known that a swing may be either shaken up or braked by periodic length changing. However, the corresponding control changes are a priori unclear. In other words, it is necessary to investigate what the control frequency and duty factor will be in both cases and to calculate all the other process characteristics.

The Energy Behavior at Frequency Change

The energy balance analysis is made in many articles and monographs concerning different oscillating systems, including the above-mentioned swing [7], the electric vibration LRC contour [8], and the torsional vibration spring oscillator [9][10][11][12]. The energy balance condition consists in the vanishing of the sum of the loss work (due to friction, for instance) and the useful work (that needed to change the control parameter periodically). So the main efforts were made to correctly calculate these works in particular cases. Setting these particular peculiarities aside, one may note some general aspects. The oscillating system with time-dependent frequency is a Hamiltonian but non-autonomous system.
It means that the system Hamiltonian may be written in the standard form (the argument is denoted x, for the unity of time and space oscillations):

H(x) = ẏ²/2 + ω²(x)·y²/2.

At a sharp frequency alteration the energy may either change or not. This interesting peculiarity of the oscillating system shows up explicitly when the conjugation conditions are enforced, that is, the continuity of the amplitude and of its derivative (the oscillation speed) at the point where the frequency value changes. At a "node", that is, at a zero-amplitude point, the equality of the derivatives implies the inverse proportionality of the amplitudes to the corresponding frequencies, so that the energy is conserved:

B₂/B₁ = ω₁/ω₂, E₂ = E₁. (4)

At an "antinode", that is, at a zero-derivative point, the conjugation conditions at the piecewise frequency change lead to a piecewise energy change too:

B₂ = B₁, E₂/E₁ = ω₂²/ω₁². (5)

The oscillation damping is possible if the frequency change – from bigger to smaller – occurs when the amplitude is near an antinode. In the general case, the conjugation conditions matching an oscillation of unit amplitude at the frequency ω₁ with an oscillation of amplitude B at the frequency ω₂ may be written in the form (compare with the similar conditions in [9]):

sin ψ = B sin ψ₂, ω₁ cos ψ = B ω₂ cos ψ₂, (6)

where ψ and ψ₂ are the phases of the two oscillations at the switching point; hence B(ψ) = √(sin²ψ + (ω₁/ω₂)²cos²ψ). Although the absolute values of the energy changes are always equal in both cases, their relative values are never equal. For example, if the frequency switching occurs at antinodes, the ratio of the relative energy increase to the relative decrease equals (1 + w)/(1 − w) > 1. It does not, of course, mean that shaking up of the oscillation occurs: if the switchings occur strictly at the extrema of the function or of its derivative, the energy over the period is conserved. In the general case, the relative energy increase at the transition to the bigger frequency is always greater than the relative decrease of the energy in the opposite case. Such energy behavior explains the oscillation shaking up at the modulating frequency equal to the doubled main one and a duty factor equal to 2. It is clear that if the transition to the lower frequency always occurs at antinodes, damping results, because the energy decrease will not be compensated by its increase at the transition to the larger one. It is also clear that a complete stop will never occur under this modulation, but it is quite possible to approach the phase plane origin arbitrarily closely in finite time. The origin in this case will be a stable focus, whereas in the arbitrary case it will be an unstable focus.

The Feedback Heuristic Introduction

From the previous results the possibility of oscillation damping follows unambiguously when the switching points coincide strictly with the singular points of the oscillation equation's solution. To prove it, let us present the solution of the following equation (the first fundamental solution is denoted by the index 1 and the heuristic control by the index "e"):

ÿ₁ₑ = −ω₀²(1 + w·sign(y₁ₑ·ẏ₁ₑ))·y₁ₑ, uₑ(x) = sign(y₁ₑ·ẏ₁ₑ). (7)

The solution, the control (modulating function) and the phase trajectory for w = 0.1, ω₀ = 0.2π (T = 10) are presented in figures 1 and 2. These curves testify that the phase plane origin is a stable focus in this case. It is clearly seen that the frequency switching occurs just at the points where the solution itself or its derivative (amplitude or speed) changes its sign. The first switching occurs at the origin, where the amplitude of the "undisturbed" oscillation is maximal and the speed is zero and changes its sign. The sign before the control signature is chosen positive so as to switch the frequency from high to low at antinodes, that is, to decrease the energy. The inverse switching occurs at nodes, where energy change is absent.
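A minimal simulation of the heuristic control (7) is sketched below, assuming the parameter values quoted for figures 1 and 2; it only verifies the damping behavior and does not reproduce the figures themselves.

```python
import numpy as np

# Equation (7) with the positive signature sign: the frequency switches on
# the sign of y*y', i.e. exactly at nodes and antinodes. RK4 with a small
# fixed step is accurate enough to show the damping.

w, om0 = 0.1, 0.2*np.pi                 # values quoted for figures 1 and 2

def rhs(s):
    y, v = s
    u = np.sign(y*v) if y*v != 0 else -1.0   # start on the low frequency
    return np.array([v, -om0**2*(1 + w*u)*y])

s, h = np.array([1.0, 0.0]), 1e-3
r0 = np.hypot(s[0], s[1]/om0)
for _ in range(int(100.0/h)):           # ten fundamental periods (T = 10)
    k1 = rhs(s); k2 = rhs(s + h/2*k1)
    k3 = rhs(s + h/2*k2); k4 = rhs(s + h*k3)
    s += h/6*(k1 + 2*k2 + k3 + k4)
print("phase radius shrank by a factor of",
      r0/np.hypot(s[0], s[1]/om0))      # > 1: the origin is a stable focus
```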
Thus, the "negative feedback" (if one may say it) is introduced in the equation whereas any driving force is absent. Let us note that for this control the oscillations damping will occur at any initial conditions and any modulation depth. The negative sign before the signature at (7) corresponds to the control providing oscillations shaking up independently upon the initial conditions and modulation depth as well whereas the condition (2) may not be executed at that. For instance, at w>0.5 oscillations will shake up or damp depending on the control sign the faster the larger is the modulation depth (to the certain limit -see later). Of course, such a feedback may not be obligatory piecewise but may be continuous. Still the damping will be rather slow (see, for example [7][8][9]). To estimate the damping efficiency (during 10 periods) let's introduce two criteria: K and S according to the formula: The first criterion determines the speed of the phase plane origin approaching, the second equal to the action according to its sense and dimension -the oscillation energy decrease efficiency for the chosen kind of the control. They are equal in this particular case to: Ке=8,943; Se= 2,447 (index "e" means "heuristic" as previously). It seems that the oscillation frequency is equal to the fundamental one and the control's -the Bragg's one. This frequency undoubtedly satisfies the parametric resonance condition, but oscillations nevertheless are damping. The characteristic index may easily be approximately calculated from the conjugation condition on assumption of near equality of the oscillation frequency to the fundamental one. During the half-period the amplitude's decrease will be equal according to (4): The oscillation frequency specification and correspondingly the characteristic index may be evaluated using the frequency switching condition just precisely in nodes and antinodes. It means that different frequencies oscillation times may be determined from the condition of their equality to quarter-period for corresponding frequencies -equality of optics ways to quarter-wavelength if one use the optic analogy. Let's denote these times τ + and τand evaluate the expressions for oscillations frequency -ω, modulatingω т , duty ratio -DR and the characteristic index α: The modulation frequency is really equal to the double true oscillation frequency ω smaller than the fundamental in ( ) 2 1 w − times. The duty ratio occurs to be more than 2. These parameters provide oscillations shaking up/damping (depending on the control sign) independently on the modulation depth. The reduced characteristic index Tα (modified attenuation index) that is divided on the fundamental frequency or multiplied on the fundamental period plot is given on figure 3 (the modulation depth taken variable is denoted as ξ and called "modulation rate"). In the 0 to 0.4 interval the reduced index is almost linear function of the modulation depth: ( ) 3,56 Tα ξ ξ ≅ ⋅ . It's interesting that this dependency has the minimum in the point 0, 648 w ξ ≡ = . The reduced characteristic modulo in it is equal to 1.791 that is at period equal to 10 the characteristic index is equal to 0.1791. Thus, at this frequency modulation method the maximal oscillations increase/decrease is obtained at the strictly determined modulation depth. The feedback introduction method considered above is obviously not unique. As a "key" one may take not the solution and its derivative product, but the product of the fundamental solutions. 
The oscillating systems corresponding to the different fundamental solutions are conjugated (not in the sense of the conjugation conditions (6), but in the sense of belonging to conjugated differential forms). Because the differential form of the oscillation equation is self-conjugated, the difference between the solutions is expressed only in the difference of the initial conditions. In this case a system of two 2nd-order equations is solved (or four equations of the 1st order; the index 1 corresponds to the 1st fundamental solution as before, and by the letter s we denote the solutions at heuristic feedback introduction through the product of the fundamental solutions):

ÿₛ₁ = −ω₀²(1 + w·uₛ(x))·yₛ₁, ÿₛ₂ = −ω₀²(1 + w·uₛ(x))·yₛ₂, uₛ(x) = −sign(yₛ₁(x)·yₛ₂(x)). (12)

Depending on the signature sign, the control will provide the damping of one solution and the rising of the other, in full accordance with the analysis given in [3]. The control sign alteration, uₛ(x) = −sign(yₛ₁(x)·yₛ₂(x)), in comparison with (7), is explained by the fact that sign(y₁′(x)) = −sign(y₂(x)). Returning to Arnold's analysis, we note that at the feedback introduction the matrix A′ becomes diagonal according to the period definition (a′₁₂ = a′₂₁ = 0). So the feedback introduction procedure may be considered as the physical analogue of the mathematical procedure of matrix diagonalization, reduced to the determination of its eigenvalues and eigenvectors. To finish with this section, we have to illustrate the case when the frequency switching occurs not at the doubled frequency but at the fundamental one (k = 2), that is, at the points corresponding to maxima, or antinodes. It is clear that there are no energy changes in this case (see the previous section), but the oscillations will not be strictly harmonic. That is why the phase trajectory represents a somewhat deformed ellipse – a stable limit cycle – figure 4. The stability consists in the fact that any change of the initial conditions will not change the solution's type in principle but only slightly deform the phase trajectory. Thus, the statement about the existence of parametric resonance at k = 2 for this kind of modulation is strictly wrong. At an approximate coincidence of the modulation frequency with the oscillation one, and with switching at points symmetrical relative to the origin, these oscillations are strictly periodic (with the doubled control period) due to the absence of energy changes during the period. Due to the importance of this section, let us list its main results. The heuristic (based on the energy behavior analysis) introduction of feedback into the oscillation equation (with the conjugation conditions enforced) permits one to prove that frequency modulation satisfying the parametric resonance condition is not a necessary and sufficient condition for the unlimited increase of the oscillations. In other words, the statement "Thus, at ω ≈ k/2, k = 1, 2, … the lowest position of the idealized swing is extremely unstable and it shakes up at an arbitrarily small periodic change of length" [4, p. 107] is not true for all periodic laws of the length alteration. The damping/shaking up of oscillations corresponds formally, by the frequency and duty ratio, to the condition of equality of the optical paths to the quarter-wavelength characteristic of an interference filter or mirror. So the optical-mechanical analogy shows itself not only in the Fermat Principle but in parametric resonance too, owing to the common Hill's equation description. However, the investigation of light amplification/decay needs a complex wave vector in this equation (see, for instance, [13]), which is outside the scope of this article.
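The transfer-matrix content of these statements can be verified directly: for quarter-period blocks the product over one oscillation period is diagonal, and its eigenvalues are reciprocal growth/decay multipliers. A sketch with an arbitrary illustrative modulation depth follows.

```python
import numpy as np

# Transfer-matrix form of the discussion above. Over the quarter-period
# switching times tau_± of (10), the map over one full oscillation period
# (two modulation periods) is diagonal, with real reciprocal multipliers:
# one fundamental solution decays while the conjugate one grows.

w, om0 = 0.3, 1.0
om_p, om_m = om0*np.sqrt(1 + w), om0*np.sqrt(1 - w)
tau_p, tau_m = np.pi/(2*om_p), np.pi/(2*om_m)     # quarter periods

def block(om, t):
    """Exact propagator of y'' = -om^2 y over time t, acting on (y, y')."""
    c, s = np.cos(om*t), np.sin(om*t)
    return np.array([[c, s/om], [-om*s, c]])

M = block(om_p, tau_p) @ block(om_m, tau_m) @ block(om_p, tau_p) @ block(om_m, tau_m)
print("M =\n", M.round(12))                       # diagonal, as stated in the text
print("trace =", np.trace(M))                     # |trace| > 2: one solution grows
print("multipliers =", np.linalg.eigvals(M))      # (1-w)/(1+w) and its inverse
```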
Returning to mechanics, one may note that the major properties of superlattice hard coatings (see, for instance, [14]) are explained by the behavior of acoustic waves (optical phonons) in them, also described by Hill's equation. Generally, the theory of this equation underlies most advantages of metamaterials [15], because all transport phenomena imply the propagation of waves of one kind or another – electromagnetic, acoustic, spin, etc. The difference of the solution types is determined exclusively by the control signature signs. Naturally, the question of control uniqueness (that is, the uniqueness of the modulating frequency, duty ratio and signature sign) arises. The question of the existence of an extreme value of the characteristic index over different controls is tightly adjoined to the former.

The Optimal Control Theory Application

The easiest way to answer the questions put at the end of the previous section is to apply optimal control theory (OCT) [5]. It permits one to obtain damping solutions rather easily for piecewise control, even for the simplest problem definition – the time-optimal (quick-action) problem. For such problems the figure of merit is the time needed to hit the phase plane origin. Let us give a similar problem definition for the 2nd fundamental solution of Hill's equation. The problem is defined this way: one needs to find the control u(x) which provides the quickest hitting of the phase plane origin. In OCT the state equations are 1st-order ordinary differential equations (ODE); that is why let us rewrite Hill's equation in system form (a subscript is used to distinguish the 2nd variable from the 2nd fundamental solution):

y₁′ = y₂, y₂′ = −ω₀²(1 + w·u(x))·y₁.

The optimal control is found from Pontryagin's maximum principle [5], according to which the optimal control corresponding to the minimum of the figure of merit corresponds to the maximum of the "optimal Hamiltonian". Due to the linear dependence of the "optimal Hamiltonian" h(x) on the control u(x), to maximize it the control must change its sign at the sign alteration of the product p₂(x)·y₁(x):

u(x) = −sign(p₂(x)·y₁(x)). (13)

Besides, the maximal value of the optimal Hamiltonian must be zero everywhere, that is, the vector of the state equations is normal to the vector of the conjugate variables, because the optimal Hamiltonian is their scalar product:

h(x) = p₁·y₂ − p₂·ω₀²(1 + w·u(x))·y₁ − 1 = 0. (14)

However, this condition permits one to determine only p₁,₀: since y₁(0) = 0, it follows from (14) that p₁,₀ = 1/y₂(0). The initial speed (derivative) value is not zero, unlike the case of the heuristic feedback introduction through the product of the fundamental solutions, but coincides with the initial speed value y₂(0) in (12). As for p₂,₀, the initial value of the conjugate function itself, one may determine only its sign, not its magnitude. Because at y₁(0) = 0 the sign of p₁,₀ is positive, the sign of p₂,₀ is negative. This follows from the requirement of the optimal Hamiltonian maximum at arbitrarily small values of y₁(x): the control must be negative there. Thus, at any negative initial value of p₂,₀ the oscillations must damp, but the damping efficiency will naturally differ. It is clear that, to draw the analogy with the heuristic feedback introduction, the initial condition for the conjugate coordinate would have to equal −∞ at a non-zero speed value. Therefore, the coincidence of the solutions – the optimal and the heuristic one in this particular case – is possible only in the limit, as the initial value of the conjugate coordinate goes to −∞. But, as noted above, at any other negative value oscillation damping is provided, with different efficiency.
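A sketch of this state-costate system with the bang-bang control (13) is given below; the costate initial values follow the discussion above, with p₂,₀ = −10 as quoted for figure 5, and the fixed-step RK4 integrator is an illustrative choice.

```python
import numpy as np

# State equations for the 2nd fundamental solution plus the Pontryagin
# costate equations, with the bang-bang control (13). Costate initial values
# follow the text: p1(0) = 1/y2(0) from h(0) = 0, and p2(0) = -10 as for
# figure 5 (any negative value damps, with different efficiency).

w, om0 = 0.1, 0.2*np.pi

def rhs(s):
    y1, y2, p1, p2 = s
    u = -np.sign(p2*y1) if p2*y1 != 0 else 1.0
    f = om0**2*(1 + w*u)
    return np.array([y2, -f*y1,        # state:   y1' = y2,   y2' = -f*y1
                     f*p2, -p1])       # costate: p1' = f*p2, p2' = -p1

s, h = np.array([0.0, 1.0, 1.0, -10.0]), 1e-3
r0 = np.hypot(s[0], s[1]/om0)
for _ in range(int(100.0/h)):          # ten fundamental periods (T = 10)
    k1 = rhs(s); k2 = rhs(s + h/2*k1)
    k3 = rhs(s + h/2*k2); k4 = rhs(s + h*k3)
    s += h/6*(k1 + 2*k2 + k3 + k4)
print("phase radius shrank by a factor of", r0/np.hypot(s[0], s[1]/om0))
```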
In figure 5 the solution y_m(x), the control u_m(x) and the energy for the 2nd fundamental solution are given (the phase trajectory is similar to figure 2). (Figure 5: the 2nd fundamental solution (coordinate) at optimal frequency modulation, the optimal control and the system's energy.) This solution is obtained for the initial value p₂,₀ = −10. Its alteration (unlike the alteration of the initial value at the heuristic feedback introduction) leads to the alteration of the oscillation frequency, the control and the duty ratio. The criteria (8) in this particular case are equal to K_m = 22.149 and S_m = 6.404. Let us note that at the heuristic feedback introduction the corresponding criteria are K_e = 22.231, S_e = 6.251 and K_s = 22.232, S_s = 6.247. Thus, the optimal control for the initial value p₂,₀ = −10 provides somewhat quicker oscillation damping than the heuristic one, but worse from the energy damping viewpoint. The difference from the heuristic control in this particular case consists in the fact that the frequency switching occurs not at the points corresponding to the solution maximum but somewhat later – the delay is equal to only 0.05 of the modulating frequency half-period. Of course, as the modulation depth rises, this difference rises too. The improvement of the oscillation damping, in the sense of the criterion introduced above, when the frequency switches not strictly at antinodes (while strictly at nodes) is explained rather simply. Although the maximal decrease of the energy, and correspondingly of the oscillation amplitude, really occurs when the frequency switches strictly at antinodes, the time of motion at the smaller frequency is then greater than under the optimal control. So the quickest oscillation damping occurs when the resulting frequency and duty ratio become somewhat smaller than the heuristic ones, though only a little at small modulation depth.

The Mixed Hamiltonian

Let us note the similarity of the optimal control problem solution and the one obtained at the heuristic feedback introduction through the product of the fundamental solutions. The "shortened" optimal Hamiltonian, that is, the one not containing the −1, coincides (up to the sign between the summands) with the quadratic form produced by the problem:

H_m = ẏ₁·ẏ₂ + ω²(x)·y₁·y₂. (15)

Let us name the form (15) the mixed, or general, Hamiltonian (in contrast to the optimal Hamiltonian). Its equality to zero for simple harmonic oscillations is checked immediately. Like the optimal Hamiltonian, it is zero almost everywhere, apart from the set of zero measure, that is, the switching points. Upon introducing the vectors of the state equations (for the 1st fundamental solution and its derivative) and of the conjugate equations (for the 2nd fundamental solution and its derivative), one can say that they are normal to each other. Now let us note the difference in the type of the conjugate equations. For the heuristic feedback introduction they are strictly the same as the state equations (12) (due to the self-conjugation of the differential form); only the initial values differ. Here only the equality to zero of the "conjugate" function and the sign of the initial value of its derivative are important – in the case (12) it is "+" – whereas this value itself may be arbitrary, so that in this case the "conjugate" solution is determined up to an arbitrary positive multiplicative constant. Any change of the constant leads to no change of the control and, correspondingly, of the solution of the state equations system. The conjugate equations of the optimal control problem differ from the state equations by signs; that is, instead of the speed, a function opposite to it in sign is introduced.
Now let us note the difference in the type of the conjugate equations. For the heuristic feedback they are exactly the same as the state equations (12) (due to the self-conjugacy of the differential form); only the initial values differ. At that, only the equality to zero of the "conjugate" function and the sign of the initial value of its derivative are important; in the case (12) this sign is "+", while the value itself may be arbitrary, so that the "conjugate" solution is determined up to an arbitrary positive multiplicative constant. Changing this constant changes neither the control nor, correspondingly, the solution of the state equation system. The conjugate equations of the optimal control problem differ from the state equations by signs: instead of the speed, the function opposite to it in sign is introduced. Correspondingly, in the speed equation the sign "+" stands on the right-hand side, so that differentiating this equation (on an interval of constant control) and substituting the speed, one obtains the ordinary oscillation equation. Therefore the conjugate variables of the optimal control problem describe exactly the same oscillating system as the one under investigation, described by the state equations, but with different initial conditions. Moreover, when the "shortened Hamiltonian" is used, the solution of the conjugate equations corresponds exactly to the 1st fundamental solution; the only difference lies in the determination (the sign) of the speed and, consequently, in the Hamiltonian, which by analogy binds the "kinetic" and "potential" energies with a sign. If the full optimal Hamiltonian is used, the main difference lies in the initial values of the conjugate variables. Thus, in OCT the equality of the optimal Hamiltonian to zero determines the control, whereas in the heuristic determination of the control, through feedback introduced via the product of fundamental solutions, the vanishing of the analogous form is a consequence of the solution. However, introducing the mixed Hamiltonian permits one to obtain the needed solution from the condition of its minimum (instead of Pontryagin's maximum principle). Indeed, it follows from (15) that if the solutions differ only by a phase shift, the mixed Hamiltonian varies from twice the energy at zero shift down to zero at a shift equal to π/2. (Oscillations damping or building up with different efficiency correspond to intermediate phase values.) The presence of an explicit time dependence in an equation of motion may be regarded as the result of an "external influence" on the described system. If one treats these influences as controls, then the optimal control, or the feedback introduction, makes the systems autonomous. The energies of the conjugate systems are then not conserved, but the mixed Hamiltonian, like the optimal one, is constant (almost everywhere) and equal to zero. Its constancy reflects the autonomy of the conjugate systems and the piecewise character of the control, whereby energy is periodically added to one system and subtracted from the other. Owing to the constancy and vanishing of the mixed Hamiltonian, together with that of the Wronskian, the product of these energies is a periodic piecewise function and its value averaged over the modulation period is strictly constant (16). Another consequence of the vanishing of the mixed Hamiltonian together with the constancy of the Wronskian is a useful equation (17), analogous in form to Liouville's theorem, where the commutator is denoted by brackets as usual, while H_1(x) and H_2(x) are the ordinary Hamiltonians of the conjugate equations (systems). Equation (17) takes a particularly simple form when the Wronskian equals unity, while for simple harmonic oscillations (W = ω) the equality (17) is checked immediately.

The Control at an Arbitrary Phase Shift

From the results of the previous section it follows that the damping/build-up efficiency is determined, in particular, by the phase shift between the solutions whose product is chosen as the control "key". To estimate the influence of this shift, let us generalise the formulae (10) to the case of switching at an arbitrary point, taking into account the conjugation conditions (6). Let the 1st switching occur at the point corresponding to an initial phase ψ ∈ (-π/2, π/2). From (6) we immediately obtain the phase of the conjugate function, its period and the characteristic index. Note that the switching occurs twice per period.
At the 1st switching the ratio of the oscillation amplitudes equals the amplitude B(ψ), whereas at the 2nd it equals the ratio of the frequencies, as in (10). Hence these values must be multiplied when calculating the characteristic index, leading to (18). One can see that at zero initial phase (18) reduces to (10), up to the factor 2 in the numerator, because (10) was obtained for half a period. The dependence of the characteristic index on the initial phase for three modulation depths, w1 = 0.1, w2 = 0.2 and w3 = 0.3, is shown in figure 6. As follows from the results of the OCT application, the characteristic indices reach their minimal values not at the minimal initial phase but at points determined by the modulation depth. Their difference from the values calculated according to (10) is not large: for the minimal modulation depth, at the given calculation accuracy, there is no difference at all; for the depth 0.2 the index is -0.078, which is smaller by one thousandth; finally, for the modulation depth 0.3 the calculated value equals -0.113, which is smaller by 0.004. However, if one takes the modulation depth obtained in the previous section, which provides the minimal characteristic index value -0.179, and minimises the index as a function of the initial phase, one already gets -0.231 at the initial phase 0.22. Note that introducing a significant initial phase transforms the index from a nonmonotonic function into a monotonic one right up to w = 0.99. This modulation depth corresponds to the minimal characteristic index value of -0.305 at the initial phase 0.24, i.e. 13°45'. This initial phase provides the maximal increase/decrease at large modulation depths (beginning from around 0.4), which is seen in the figure 7 plots for different initial phases: 0 (α), 0.1 (α1), 0.24 (α2) and 0.35 (α3). From formula (16) one can easily obtain analytic expressions for the true oscillation frequency and the duty ratio as functions of the modulation depth and the initial phase. Plots of the duty ratio versus the initial phase for different modulation depths are presented in figure 8. For technical applications it is interesting to determine the control providing oscillation damping at a duty ratio equal to 2 and a predetermined modulation depth, say w1 = 0.1, for example for the first fundamental solution. To do so, one determines the initial phase from the dependence of the duty ratio on the initial phase (fig. 4). For this initial phase one calculates the true oscillation frequency (the reduced shift of this frequency from the fundamental one is shown in figure 9) and the characteristic (damping) index (figure 6). It then only remains to determine the initial phase of the control that provides damping. If the control is assumed piecewise, the initial control phase χ is found from the condition that the argument vanish at the point corresponding to the initial phase of the solution. Finally, a smooth control providing damping differs from the piecewise one in its modulation frequency, initial phase and damping index. For small modulation depths these differences are not significant, but they cannot be calculated as easily as in the case of piecewise modulation. For a modulation depth of 0.1 the modulation frequency equals 2.04239ω, while the control initial phase is 0.504; the magnitude of the damping index is then reduced to 0.03.
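The dependence of the damping index on the switching phase can also be probed by direct simulation. The sketch below delays the stiffness drop by a phase ψ after each antinode and estimates the per-period amplitude decay; this switching rule is an illustrative reading of the scheme above, not an implementation of formula (18), and all numerical parameters are arbitrary choices.

```python
import numpy as np

# Estimate the characteristic (damping) index as a function of the phase
# delay psi after each antinode: switch to high stiffness at nodes, and to
# low stiffness a phase psi after each antinode.

def char_index(psi, depth=0.2, w0=1.0, dt=2e-4, periods=40):
    y, v, u, t = 0.0, 1.0, 1.0, 0.0
    T = 2*np.pi/w0
    peaks, pending = [], None
    while t < periods*T:
        v_new = v - (w0**2)*(1.0 + depth*u)*y*dt   # semi-implicit Euler
        y_new = y + v_new*dt
        if y*y_new < 0:                 # node crossing: high stiffness
            u = 1.0
        if v*v_new < 0:                 # antinode: record peak, schedule switch
            peaks.append((t, abs(y_new)))
            pending = t + psi/w0
        if pending is not None and t >= pending:
            u, pending = -1.0, None     # lower stiffness after the delay
        y, v, t = y_new, v_new, t + dt
    (t0, a0), (t1, a1) = peaks[1], peaks[-1]
    return np.log(a1/a0) / ((t1 - t0)/T)   # amplitude decay index per period

for psi in (0.0, 0.1, 0.22, 0.35):
    print(f"psi = {psi:4.2f}:  index = {char_index(psi):+.3f}")
```

Scanning psi in this way reproduces the qualitative picture of figures 6-8: the minimum of the index shifts away from zero phase as the modulation depth grows.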
Hence, to increase or decrease parametric oscillations by means of smooth modulation with a duty ratio equal to 2, it is enough to choose the modulating frequency and initial phase correctly, depending on the modulation depth of the control parameter. The condition (2) for the initiation of parametric resonance is then neither necessary nor sufficient.

The Nonlinearity Accounting

The simple dependences presented in the previous section might seem to make the application of OCT unnecessary, but this is not true. In the general case, for example in the presence of any friction, the conjugate equation must contain a growth factor, owing to the sign change of the 1st derivative in the linear differential form. And if the form is not linear, constructing the conjugate form without OCT is hardly possible. Let us consider a general homogeneous differential form of interest for physical applications, the equation of motion in the form (8). The conjugate differential form may be found with the help of the OCT algorithm (9). Differentiating, as usual, the 2nd conjugate solution with respect to x and substituting the value of p_1' from the 1st one, we obtain the homogeneous differential form (10) conjugate to (8) (substituting p_2 simply by -p). This equality (10) is immediately verified for the case of a linear differential form (8), for example in the presence of friction. One can see that in the general case the conjugate form depends on the solution of the main one; only linear forms are independent. The mixed and optimal Hamiltonians and the canonical equations take the analogous form (compare with (8), (9)). In the case of constant frequency (an autonomous system) the numerical solution confirms that the mixed Hamiltonian is constant and equal to zero. The optimal control is now determined by the signature of the product of the basic and conjugate solutions. In the case of small oscillations the control corresponds to the linear oscillation case considered above (sin(y) ≈ y, cos(y) ≈ 1). When the time-dependent frequency is replaced by the control function, the mixed Hamiltonian immediately becomes zero, as does the optimal one. Other nonlinearities may be accounted for in a similar way.

Conclusion

The parametric resonance analysis given in [2-4], while true in general, is nevertheless incomplete and not rigorous. From the above consideration it follows that:
- there exists an infinite set of modulating frequencies and duty ratios satisfying the condition for the initiation of parametric resonance and providing oscillation damping or build-up when feedback is introduced through the product of solutions differing in their initial phase;
- the solutions are then conjugate; that is, at a phase shift corresponding to π/2, the bilinear form (15), the mixed (general) Hamiltonian analogous to the optimal one, is likewise zero almost everywhere, and the oscillations of one system damp while those of the other, conjugate system build up;
- in the absence of a phase shift no frequency modulation occurs, because the square of the solution does not change its sign, and the oscillations are strictly harmonic with the maximal or minimal frequency, depending on the signature sign;
- for a given modulation depth there exists an optimal phase shift providing the quickest oscillation damping, and a strictly determined frequency and duty ratio correspond to it; their knowledge permits one to implement the control explicitly, without introducing any feedback.
This feedback can be introduced either through piecewise functions (signatures) or through smooth ones, although piecewise control provides the best oscillation decrease/increase in the sense of the figures of merit introduced above, together with the vanishing of the mixed Hamiltonian almost everywhere. The feedback in the form of the signature of the product of fundamental solutions clearly illustrates, on the class of solutions of Hill's equation, the physical meaning of the conjugate variables and of the optimal Hamiltonian of the optimal control problem.
On the LHC signatures of $SU(5)\times U(1)'$ F-theory motivated models We study low energy implications of F-theory GUT models based on $SU(5)$ extended by a $U(1)'$ symmetry which couples non-universally to the three families of quarks and leptons. This gauge group arises naturally from the maximal exceptional gauge symmetry of an elliptically fibred internal space, at a single point of enhancement, $E_8\supset SU(5)\times SU(5)'\supset SU(5)\times U(1)^4.$ Rank-one fermion mass textures and a tree-level top quark coupling are guaranteed by imposing a $Z_2$ monodromy group which identifies two abelian factors of the above breaking sequence. The $U(1)'$ factor of the gauge symmetry is an anomaly free linear combination of the three remaining abelian symmetries left over by $Z_2$. Several classes of models are obtained, distinguished with respect to the $U(1)'$ charges of the representations, and possible extra zero modes coming in vector-like representations. The predictions of these models are investigated and are compared with the LHC results and other related experiments. Particular cases interpreting the B-meson anomalies observed in LHCb and BaBar experiments are also discussed. Introduction Despite its tremendous success, the Standard Model (SM) of the strong and electroweak interactions leaves many theoretical questions unanswered. Accumulating evidence of the last few decades indicates that new ingredients are required in order to describe various New Physics (NP) phenomena in particle physics and cosmology. Amongst other shortcomings, the minimal SM spectrum does not accommodate a dark matter candidate particle, and the tiny neutrino masses cannot be naturally incorporated. Regarding this latter issue in particular, an elegant way to interpret the tiny masses of the three neutrinos and their associated oscillations is the seesaw mechanism [1], which brings into the scene right-handed neutrinos and a new (high) scale. Interestingly, this scenario fits nicely inside the framework of (supersymmetric) grand unified theories (GUTs) which unify the three fundamental forces at a high (GUT) scale. Besides, several ongoing neutrino experiments suggest the existence of a 'sterile' neutrino which could also be a suitable dark matter candidate [2,3]. Many other lingering questions regarding the existence of possible remnants of a covering theory, such as leptoquarks, vector-like families, supersymmetry signatures and neutral gauge bosons, are expected to find an answer in the experiments carried out at the Large Hadron Collider (LHC). Remarkably, many field theory GUTs incorporate most of the above novel fields into larger representations, while, after spontaneous breaking of the initial gauge symmetry, there are cases where additional U(1) factors survive down to low energies, implying masses for the associated neutral gauge bosons accessible to ongoing experiments. However, while GUTs with the aforementioned new features are quite appealing, they come at a price. Various extra fields, including heavy gauge bosons and other colored states, contribute to fast proton decay and other rare processes. In contrast to plain field theory GUTs, string theory alternatives are subject to selection rules and other restrictions, while new mechanisms are operative which, under certain conditions, could eliminate many of the problematic states and undesired features.
F-theory models [4,5,6], in particular, appear to naturally include such attractive features, which are attributed to the intrinsic geometry of the compactification manifold and the fluxes piercing matter curves where the various supermultiplets reside. In other words, the geometric properties and the fluxes can be chosen so as to, among other things, determine the desired symmetry breaking, reproduce the known multiplicity of the chiral fermion families, and eliminate the colored triplets in Higgs representations. Moreover, in F-theory constructions, the gauge symmetry of the resulting effective field theory model is determined in terms of the geometric structure of the elliptically fibred internal compactification space. In particular, the non-abelian part of the gauge symmetry is associated with the codimension-one singular fibers, while possible abelian and discrete symmetries are determined in terms of the Mordell-Weil (MW) and Tate-Shafarevich (TS) groups. For elliptically fibred manifolds, the non-abelian gauge symmetry is a simply laced algebra (i.e. of type A, D or E in the Lie classification), the highest one corresponding to the exceptional group E_8. At fibral singularities, certain divisors wrapped by 7-branes are associated with subgroups of E_8, and are interpreted as the GUT group of the effective theory. In addition, U(1) symmetries may accompany the non-abelian group. The origin of the latter could emerge either from the part of E_8 commutant to the GUT group or from the MW and TS groups mentioned above. Among the various possibilities, there is a particularly interesting case where a neutral gauge boson Z' associated with some abelian factor with non-universal couplings to the quarks and leptons obtains mass at the TeV scale. Since the SM gauge bosons couple universally to quarks (and leptons) of the three families, the existence of non-universal couplings would lead to deviations from SM predictions that could be interpreted as an indication for NP beyond the SM. Within the above context, in [17] a first systematic study of a generic class of F-theory semi-local models based on the E_8 subgroup SU(5)×U(1)' has been presented. The anomaly-free U(1)' symmetry has non-universal couplings to the three chiral families and the corresponding gauge boson receives a low energy (a few TeV) mass. In that work, some particular properties of representative examples were examined in connection with new flavour phenomena and, in particular, the B-meson physics explored at LHCb [20,21,22]. In the present work we extend the previous analysis by performing a systematic investigation into the various predictions and the constraints imposed on all possible classes of viable models emerging from this framework. Firstly we distinguish classes of models with respect to their low energy spectrum and properties under the U(1)' symmetry. We find a class of models with a minimal MSSM spectrum at low energies. The members of this class are differentiated by the charges under the additional U(1)'. A second class of anomaly free, viable, effective low energy models contains additional MSSM multiplets coming in vector-like pairs. In the present work, we analyse the constraints imposed by various processes on the list of models of the first class. The phenomenological analysis of a characteristic example containing extra vector-like states is also presented, while the complete analysis of these models is postponed for a future publication. In the first category (i.e.
the minimal models), anomaly cancellation conditions impose non-universal Z' couplings to the three fermion families. As a result, in most cases, the stringent bounds coming from kaon decays imply a relatively large Z' gauge boson mass that lies beyond the reach of present day experiments. On the contrary, models with extra vector-like pairs offer a variety of possibilities. There are viable cases where the fermions of the first two generations are characterised by the same Z' couplings. In such cases, the stringent bounds of the K^0-K̄^0 system can be evaded and the Z' mass can be as low as a few TeV. The work is organised and presented in five sections. In section 2 we start by developing the general formalism of a Z' boson coupled non-universally to the MSSM. Then, we discuss flavour violating processes in the quark and lepton sectors, putting emphasis on contributions to B-meson anomalies and other deviations from the SM explored at the LHC and other related experiments. (To make the paper self-contained, all relevant recent experimental bounds are also given.) In section 3 we start with a brief review of local F-theory GUTs. Then, using generic properties of the compactification manifold and the flux mechanism, we apply well defined rules and spectral cover techniques to construct viable effective models. We concentrate on an SU(5)×U(1)' model embedded in E_8 and impose anomaly cancellation conditions to obtain a variety of consistent F-theory effective models. We distinguish between two categories: a class of models with an MSSM (charged) spectrum (possibly with some extra neutral singlet fields) and a second one where the MSSM spectrum is extended with vector-like quark and charged lepton representations. In section 4 we analyse the phenomenological implications of the first class, paying particular attention to B-meson physics and lepton flavour violating decays. Some consequences of the models with extra vector-like fields are discussed in section 5, while a detailed investigation into the whole class of models will be presented in a future publication. In section 6 we present our conclusions. Computational details are given in the appendix. Non-universal Z' interactions In the Standard Model, the neutral gauge boson couplings to fermions with the same electric charge are equal; therefore the corresponding tree-level interactions are flavour diagonal. However, this is not always true in models with additional Z' bosons associated with extra U(1) factors emanating from higher symmetries. If the U(1)' charges of all or some of the three fermion families are different, significant flavour mixing effects might occur even at tree level. In this section we review some basic facts about non-universal U(1)'s and develop the necessary formalism to be used subsequently. Generalities and formalism To set the stage, we first consider the neutral part of the Lagrangian, including the Z' interactions with fermions in the gauge eigenstate basis [23,24],

L_NC ⊃ e J_em^µ A_µ + g_Z J_Z^µ Z^0_µ + g' J'^µ Z'_µ ,  (2.1)

where A_µ is the massless photon field, Z^0 is the neutral gauge boson of the SM and Z' is the new boson associated with the extra U(1)' gauge symmetry. Also, g and g' are the gauge couplings of the weak SU(2) gauge symmetry and of the new U(1)' symmetry respectively. For shorthand, we denote cos θ_W (sin θ_W) by c_W (s_W), where θ_W is the weak mixing angle, with g = e/tan θ_W.
The neutral current associated with the Z' boson can be written as

J'^µ = Σ_f [ f̄_L γ^µ q^f_L f_L + f̄_R γ^µ q^f_R f_R ] ,  (2.2)

where f_{L(R)} is a column vector of left (right) chiral fermions of a given type (u, d, e or ν) in the gauge basis and q^f_{L,R} are diagonal 3×3 matrices of U(1)' charges. f'_L denotes chiral fermions in the mass eigenstate basis, related to the gauge eigenstates via unitary transformations of the form

f'_{L,R} = V_{f_{L,R}} f_{L,R} ,  (2.3)

with the CKM matrix defined by the combination

V_CKM = V_{u_L} V_{d_L}^† .  (2.4)

In the mass eigenbasis, the neutral current (2.2) takes the form

J'^µ = Σ_f [ f̄'_L γ^µ Q^f_L f'_L + f̄'_R γ^µ Q^f_R f'_R ] ,  Q^f_{L,R} = V_{f_{L,R}} q^f_{L,R} V_{f_{L,R}}^† .  (2.5), (2.6)

If the U(1)' charges in the q^f_L matrix are equal, then q^f_L is the unit matrix up to a common charge factor and, due to the unitarity of the V_f's, the current in (2.5) becomes flavour diagonal. For models with family non-universal U(1)' charges, the mixing matrix Q^f_L is non-diagonal and flavour violating terms appear in the effective theory. Quark sector flavour violation The possible existence of non-universal Z' couplings to the fermion families may lead to departures from the SM predictions and leave clear signatures in present day or near future experiments. Such contributions strongly depend on the mass M_Z' of the Z' gauge boson, the U(1)' gauge coupling g', the U(1)' fermion charges and the mixing matrices V_f. A particularly interesting case, reported by the LHCb [22] and BaBar [25] collaborations, indicates that there may be anomalies observed in B-meson decays, associated with the transition b → s l^+ l^-, where l = e, µ, τ. Current LHCb measurements of b decays to different lepton pairs hint at deviations from lepton universality. In particular, the analysis performed in the q^2 invariant mass of the lepton pairs, in the range 1.1 GeV^2 < q^2 < 6 GeV^2, for the ratio of branching ratios

R_{K^(*)} = Br(B → K^(*) µ^+µ^-)/Br(B → K^(*) e^+e^-) ,  (2.7)

gives [22] R_K = 0.846^{+0.060}_{-0.054}(stat)^{+0.016}_{-0.014}(syst), where statistical and systematic uncertainties are indicated. Similarly, the results for B → K*(892) l^+l^- (where K* → Kπ), for the same ratio (2.7), are found to be R_{K*} ≃ 0.69. Since the SM strictly predicts R^SM_{K^(*)} = 1, these results strongly suggest that NP scenarios where lepton universality is violated should be explored. In the case l = µ, in particular, additional experimental and theoretical arguments suggest that NP may be related to the muon channel [26,27,28]. In the SM, B → K^(*) l^+l^- can only be realised at the one-loop level involving W^± flavour changing interactions (see left panel of Figure 1). However, the existence of a Z' (neutral) gauge boson bearing non-universal couplings to fermions can lead to tree-level contributions (right panel of Figure 1) which might explain (depending on the model) the observed anomalies. The effective Hamiltonian describing the interaction is given by [28]

H^{b→sll}_eff = -(4G_F/√2) V_tb V*_ts (e^2/16π^2) Σ_n [ C_n O_n + C'_n O'_n ] + h.c. ,  (2.8)

where the symbols O^{xx}_n stand for dimension-6 operators, the most relevant here being O^{ll}_9 = (s̄ γ_µ P_L b)(l̄ γ^µ l) and O^{ll}_10 = (s̄ γ_µ P_L b)(l̄ γ^µ γ_5 l), and the C_n are Wilson coefficients displaying the strength of the interaction. Also, in (2.8), G_F is the Fermi coupling constant and V_tb, V*_ts are elements of the CKM matrix. The latest data for the R_{K^(*)} ratios can be interpreted by assuming a negative contribution to the Wilson coefficient C^{µµ}_9, while all the other Wilson coefficients should be negligible, or vanishing (alternative scenarios suggest, e.g., C^{µµ}_10 ≈ 0.73 ± 0.14) [29,30,31,32,33]. The current best fit value is C^{µµ}_9 ≈ -0.95 ± 0.15. In the presence of a non-universal Z' gauge boson, the C^{µµ}_9 Wilson coefficient takes the form

C^{µµ}_9 ≈ - (π / (√2 G_F α V_tb V*_ts)) (g'^2 / M^2_Z') (Q^d_L)_23 (Q^e)_22 ,  (2.9)

where (Q^e)_22 denotes the relevant muon coupling. The desired value for the C_9 coefficient could be achieved by appropriately tuning the ratio g'/M_Z'. However, large suppressions may occur from the matrices Q_f.
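To make the mechanism concrete, the following sketch builds the mass-basis coupling matrix of eq. (2.6) from a non-universal charge assignment and a toy 2-3 mixing angle, and then estimates C_9 using the matching form of (2.9) as reconstructed above. Every numerical input (the charges, the angle, g', M_Z' and the muon coupling) is an illustrative assumption, not a fit of the models below.

```python
import numpy as np

# Rotating a non-universal charge matrix to the mass basis generates
# flavour off-diagonal Z' couplings, eq. (2.6); the (2,3) entry then
# feeds the b -> s Wilson coefficient C9 of eq. (2.9).

GF, alpha = 1.1664e-5, 1.0/133.0          # G_F in GeV^-2; alpha near m_b
Vtb_Vts = 0.0404                          # |V_tb V_ts*|, approximate

q = np.diag([0.25, 0.25, -0.5])           # family-dependent U(1)' charges
th = 0.04                                 # toy left-handed 2-3 down mixing
c, s = np.cos(th), np.sin(th)
V = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

Q = V @ q @ V.conj().T                    # mass-basis couplings, eq. (2.6)
gp, MZp = 0.5, 5000.0                     # g' and M_Z' (GeV), assumed
Qe22 = 0.25                               # muon coupling, assumed

C9 = -np.pi/(np.sqrt(2)*GF*alpha*Vtb_Vts) * (gp/MZp)**2 * Q[1, 2] * Qe22
print(f"(Q_L^d)_23 = {Q[1,2]:.4f},  C9 ~ {C9:.3f}")
```

With universal charges, q is proportional to the identity and the off-diagonal entries of Q vanish identically, which is the statement below eq. (2.6).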
In any case, the predictions must not conflict with well known bounds coming from rare processes such as mixing effects in neutral meson systems. Meson mixing Flavour changing Z' interactions in the quark sector can also induce significant contributions to the mass splitting in a neutral meson system. A representative example is given in Figure 2. The diagrams show contributions to B^0_s [s̄b] mixing in the SM (left) and tree-level contributions in non-universal Z' models (right). For a meson P^0 with quark structure [q_i q̄_j], the contribution from Z' interactions to the mass splitting is given in [24] by the formula (2.10); it scales as (g'/M_Z')^2 times the square of the corresponding flavour-changing coupling (Q^q_{L,R})_{ij}, where M_W is the mass of the W^± gauge bosons entering the SM normalisation and M_P, F_P are the mass and the decay constant of the meson P^0 respectively. There are large uncertainties in the SM computations of ∆M_P, stemming especially from QCD factors and the CKM matrix elements. Nevertheless, the experimental results suggest that there is still some room for NP contributions. Next, we review theoretical and experimental constraints for the P^0-P̄^0 meson systems to be taken into account in what follows. • B^0_s-B̄^0_s mixing: B_s mixing can be described by an effective Lagrangian proportional to a Wilson coefficient C^LL_bs, which modifies the SM prediction as follows [34]:

∆M_s = ∆M^SM_s |1 + C^LL_bs / R^SM_loop| ,  (2.12)

with R^SM_loop = 1.3397 × 10^-3. A model with non-universal Z' couplings to fermions induces a Wilson coefficient of the form

C^LL_bs = η^LL (g'^2 (Q^d_L)^2_23 / M^2_Z') / (4√2 G_F (V_tb V*_ts)^2) ,

where η^LL ≡ η^LL(M_Z') is a constant which encodes renormalisation group effects. This constant has a weak dependence on the M_Z' scale. In our analysis we take η^LL = 0.79, which corresponds to M_Z' = 1 TeV. For the SM contribution ∆M^SM_s we consider the result obtained in Ref. [36], which, when compared with the experimental bound [37], ∆M^exp_s = (17.757 ± 0.021) ps^-1, shows through eq. (2.12) that a small positive C^LL_bs is allowed. • K^0-K̄^0 mixing: SM computations for the mass splitting in the neutral kaon system are a combination of short-distance and long-distance effects [38], to be compared with the experimental value [37] ∆M^exp_K = (3.484 ± 0.006) × 10^-15 GeV. The small discrepancy between SM computations and experiment can be accommodated by including NP effects in the analysis. Thus, according to (2.14), the contribution of a non-universal Z' boson to ∆M_K must satisfy the constraint (2.15) [39], where ∆M^NP_K can be computed directly from the formula (2.10). • D^0-D̄^0 mixing: the mixing is conveniently parametrised by the ratio x_D = ∆M_D / Γ_D, where Γ_D is the total decay width of D^0; the observed value of the ratio is x_D ≃ 0.32 [40]. Since the process is subject to large theoretical and experimental uncertainties, we will simply require NP contributions to x_D to be less than or equal to the experimental value. Leptonic Meson Decays: P^0 → l_i l̄_i In the SM the decay of a neutral meson P^0 into a lepton l_i and its antilepton l̄_i is realised at the one-loop level. While in the SM these processes are suppressed by the GIM cancellation mechanism [41], in non-universal Z' models substantially larger tree-level contributions may be allowed. The decay width induced by Z' interactions can be written in terms of the SM decay P^- → l_i ν̄_i [24], where the indices j, k refer to the quark structure [q_j q̄_k] of the meson P^- appearing in the SM interaction; similarly, the indices m, n are used to denote the quark structure of the neutral meson P^0. All the relevant experimental bounds for this type of interactions can be found in [37]. Lepton flavour violation The lepton flavour violating process P^0 → l_i l̄_j is similar to the previous one, with i ≠ j.
The decay width due to tree-level Z' contributions is given in [24]; as previously, the indices k, r are used to denote the quark structure [q_r q̄_k] of the meson participating in the SM interaction, while the generation indices m, n refer to the quark structure of P^0. Bounds and predictions for these rare interactions will be given in the subsequent analysis. (g - 2)_µ The anomalous magnetic moment of the muon, a_µ ≡ (g - 2)/2, is measured with high accuracy. However, there exists a discrepancy ∆a_µ between the experimental measurement and precise SM computations [37], with a^SM_µ = 116591830(1)(40)(26) × 10^-11. This difference may be explained by NP contributions. In the case of a Z' neutral boson, loop diagrams like the one shown on the left side of Figure 3 contribute to ∆a_µ. Collectively, the 1-loop contribution of a non-universal Z' boson is given in [42] in terms of a loop function of the variable x^Z'_{l_j} := (m_{l_j}/M_Z')^2. In our analysis we will require that ∆a^Z'_µ be less than or equal to ∆a_µ. l_i → l_j γ A flavour violating Z' boson also contributes to radiative decays of the form l_i → l_j γ. The 1-loop diagram of the strongly constrained decay µ^- → e^- γ is displayed in Figure 3 (right). Considering only Z' contributions, the branching ratio for this type of interactions is given in [43]; there the index f = 1, 2, 3 refers to the lepton running inside the loop, Γ_{l_i} is the total decay width of the lepton l_i and y_2 is a loop function that can be found in [43]. The most recent experimental bounds are: Br(µ → eγ) < 4.2 × 10^-13, Br(τ → eγ) < 3.3 × 10^-8 and Br(τ → µγ) < 4.4 × 10^-8. Dominant constraints are expected to come from the muon decay. l_i → l_j l_j l̄_k A lepton flavour violating Z' boson mediates (at tree level) three-body leptonic decays of the form l_i → l_j l_j l̄_k. The branching ratio is given in [44], where the masses of the produced leptons have been neglected; decays of the form l_i → l_j l_k l̄_j with k ≠ j have a branching ratio of analogous form. The dominant constraint comes from the muon decay µ^- → e^- e^- e^+, with branching ratio bounded as Br(µ → eee) < 10^-12 at 90% confidence level [45]. Non-universal U(1)' models from F-theory We now turn to the class of F-theory constructions accommodating abelian factors bearing non-universal couplings to the three families of the Standard Model. As already mentioned, we focus on constructions based on an elliptically fibred compact space with E_8 being the maximal singularity, and assume a divisor in the internal manifold where the associated non-abelian gauge symmetry is SU(5). With this choice, E_8 decomposes as

E_8 ⊃ SU(5) × SU(5)_⊥ → SU(5) × U(1)^4_⊥ .  (3.1)

We will restrict our analysis to local constructions and describe the resulting effective theory in terms of the Higgs bundle picture, which makes use of adjoint scalars of which only the Cartan generators acquire a non-vanishing vacuum expectation value (VEV) (for non-diagonal generalisations, T-branes, see [46]). In the local picture we may work with the spectral data (eigenvalues and eigenvectors) which, for the case of SU(5), are associated with the 5th degree polynomial

C_5 : b_0 s^5 + b_1 s^4 + b_2 s^3 + b_3 s^2 + b_4 s + b_5 = 0 .  (3.2)

This defines the spectral cover for the fundamental representation of SU(5). Furthermore, as is the case for any SU(n), the five roots must add up to zero,

Σ_{i=1}^5 t_i = 0  ⇒  b_1 = 0 .  (3.3)

The remaining coefficients are generically non-zero, b_k ≠ 0 for k = 0, 2, 3, 4, 5, and carry the geometric properties of the internal manifold. The zero-mode spectrum of the effective low energy theory descends from the decomposition of the E_8 adjoint.
With respect to the breaking pattern (3.1), it decomposes as follows:

248 → (24, 1) + (1, 24) + (10, 5) + (5̄, 10) + (5, 10̄) + (10̄, 5̄) .  (3.5)

Ordinary matter and Higgs fields, including possible singlets of the spectrum, appear in the last four terms on the right-hand side of (3.5) and transform in bi-fundamental representations with respect to the two SU(5)'s. From the above, we observe that the GUT decuplets transform in the fundamental of SU(5)_⊥, whilst the 5̄, 5-plets are in the antisymmetric representation of the 'perpendicular' symmetry. For our present purposes, however, it is adequate to work in the limit where the perpendicular symmetry reduces down to its Cartan subalgebra according to the breaking pattern SU(5)_⊥ → U(1)^4_⊥. In this picture, the GUT representations are characterised by the appropriate combinations of the five weights given in (3.3). The five 10-plets, in particular, are counted by t_{1,2,...,5}, and the fiveplets, which originally transform as decuplets under the second SU(5)_⊥, are characterised by the ten combinations t_i + t_j. In the geometric description, the SU(5) GUT representations are said to reside on Riemann surfaces (dubbed matter curves Σ_a) formed by the intersections of the SU(5) GUT divisor with 'perpendicular' 7-branes. These properties are summarised in the notation

Σ_{10_{t_i}} : 10_{t_i} ,  Σ_{5_{t_i+t_j}} : 5_{t_i+t_j} , i ≠ j .  (3.6)

As we have seen above, since the weights t_{i=1,2,3,4,5} associated with the SU(5)_⊥ group are the roots of the polynomial (3.2), they can be expressed as functions of the coefficients b_k, which carry the information regarding the geometric properties of the compactification manifold. Based on this fact, in the subsequent analysis we will make use of the topologically invariant quantities and flux data to determine the spectrum and the parameter space of the effective low energy models under consideration. We start by determining the zero-mode spectrum of the possible classes of models within the context discussed above. According to the spectral cover description, see equations (3.2)-(3.6), the various matter curves of the theory accommodating the SU(5) GUT multiplets are determined by the equations

Σ_{10_{t_i}} : P_10 ≡ b_5 ∝ ∏_i t_i = 0  (3.7)

and

Σ_{5_{t_i+t_j}} : P_5 ∝ ∏_{i<j} (t_i + t_j) = 0 .  (3.8)

If all five roots t_i of the polynomial (3.2) are distinct and expressed as holomorphic functions of the coefficients b_k, then simple counting shows that there can be five matter curves accommodating the tenplets (decuplets) and ten matter curves where the fiveplets (quintuplets) can reside. This would imply that the polynomial (3.2) could be expressed as a product ∏_{i=1}^5 (α_i s + β_i), with the coefficients α_i, β_i carrying the topological properties of the manifold while belonging to the same field as the original b_k. However, in the generic case not all five solutions t_i(b_k) belong to the same field as the b_k. In effect, there are monodromy relations among subsets of the roots t_i, reducing the number of independent matter curves. Depending on the specific geometric properties of the compactification manifold, we can have a variety of factorisations of the spectral cover polynomial C_5. (The latter are parametrised by the Cartan subalgebra modulo the Weyl group W(SU(5)_⊥).) In other words, generic solutions imply branch cuts and some roots are indistinguishable. The simplest case is when two of them are subject to a Z_2 monodromy,

t_1 ↔ t_2 .  (3.9)

Remarkably, there is an immediate implication of the Z_2 monodromy for the effective field theory model.
It allows the tree-level coupling

W ⊃ λ_t 10_{t_1} 10_{t_1} 5_{-2t_1}  (3.10)

in the superpotential, which can induce a heavy top-quark mass as required by low energy phenomenology. Returning to the spectral cover description, under the Z_2 monodromy the polynomial (3.2) is factorised according to

C_5 = (a_1 + a_2 s + a_3 s^2)(a_4 + a_7 s)(a_5 + a_8 s)(a_6 + a_9 s) ,  (3.11)

where the second degree polynomial is not factorisable in the sense presented above, indicating that the corresponding roots t_1, t_2 are connected by Z_2. Comparing this with the spectral polynomial in (3.2), we can extract the relations between the coefficients b_k and a_j. Thus, one gets

b_0 = a_3 a_7 a_8 a_9 ,
b_1 = a_3 a_6 a_7 a_8 + a_3 a_4 a_8 a_9 + a_2 a_7 a_8 a_9 + a_3 a_5 a_7 a_9 ,
b_2 = a_3 a_5 a_6 a_7 + a_2 a_6 a_7 a_8 + a_2 a_5 a_7 a_9 + a_1 a_7 a_8 a_9 + a_3 a_4 a_6 a_8 + a_3 a_4 a_5 a_9 + a_2 a_4 a_8 a_9 ,
b_3 = a_3 a_4 a_5 a_6 + a_2 a_5 a_6 a_7 + a_2 a_4 a_6 a_8 + a_1 a_6 a_7 a_8 + a_2 a_4 a_5 a_9 + a_1 a_5 a_7 a_9 + a_1 a_4 a_8 a_9 ,
b_4 = a_2 a_4 a_5 a_6 + a_1 a_5 a_6 a_7 + a_1 a_4 a_6 a_8 + a_1 a_4 a_5 a_9 ,
b_5 = a_1 a_4 a_5 a_6 .  (3.12)

We impose the SU(5) constraint b_1 = 0 assuming the Ansatz [47]

a_2 = -c (a_6 a_7 a_8 + a_5 a_7 a_9 + a_4 a_8 a_9) ,  a_3 = c a_7 a_8 a_9 ,

where a new holomorphic section c has been introduced. Substituting into (3.12) one gets

b_0 = c a_7^2 a_8^2 a_9^2 ,
b_2 = a_9 [ a_1 a_7 a_8 - c (a_5^2 a_7^2 + a_4 a_5 a_7 a_8 + a_4^2 a_8^2) a_9 ] - c a_6^2 a_7^2 a_8^2 - c a_6 a_7 a_8 (a_5 a_7 + a_4 a_8) a_9 ,
b_3 = a_1 [ a_6 a_7 a_8 + (a_5 a_7 + a_4 a_8) a_9 ] - c (a_5 a_7 + a_4 a_8)(a_6 a_7 + a_4 a_9)(a_6 a_8 + a_5 a_9) ,  (3.13)
b_4 = a_1 [ a_4 a_6 a_8 + a_5 (a_6 a_7 + a_4 a_9) ] - c a_4 a_5 a_6 [ a_6 a_7 a_8 + (a_5 a_7 + a_4 a_8) a_9 ] ,
b_5 = a_1 a_4 a_5 a_6 .

The equations of the tenplets and fiveplets can now be expressed in terms of the holomorphic sections a_j and c. In the case of the tenplets we end up with

P_10 = b_5 = a_1 a_4 a_5 a_6 = 0 ,

i.e. four factors which correspond to four matter curves accommodating the tenplets of SU(5). Substitution of (3.13) into P_5 factorises the equation into seven factors corresponding to seven distinct fiveplets,

P_5 = (a_5 a_7 + a_4 a_8)(a_6 a_7 + a_4 a_9)(a_6 a_8 + a_5 a_9)(a_6 a_7 a_8 + a_4 a_8 a_9 + a_5 a_7 a_9)(a_1 - c a_5 a_6 a_7 - c a_4 a_6 a_8) × ⋯ ,

the remaining two factors being lengthier combinations of the a_j and c. Table 1: Homology classes of the coefficients a_j and c. Note that χ = χ_7 + χ_8 + χ_9, where χ_7, χ_8, χ_9 are the unspecified homologies of the coefficients a_7, a_8 and a_9 respectively. Finally, we compute the homologies of the sections a_j and c, and consequently of each matter curve. This can be done by using the known homologies of the b_k coefficients,

[b_k] = η - k c_1 ,

where c_1 is the 1st Chern class of the tangent bundle to S_GUT, -t the 1st Chern class of the normal bundle to S_GUT, and η = 6 c_1 - t. The homologies of the a_j and c are presented in Table 1. Because there are more a's than b's, three homologies, which are taken to be [a_7] = χ_7, [a_8] = χ_8 and [a_9] = χ_9, remain unspecified.
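Before moving on, the spectral-cover algebra above can be cross-checked symbolically; a minimal sympy sketch is given below. It expands the factorised cover (3.11), compares the coefficients with (3.12), and verifies that the Ansatz enforces b_1 = 0.

```python
import sympy as sp

# Cross-check of the 2+1+1+1 spectral-cover factorisation: expanding
# (a1 + a2*s + a3*s^2)(a4 + a7*s)(a5 + a8*s)(a6 + a9*s) must reproduce the
# coefficients b_k of eq. (3.12), and the Ansatz must give b1 = 0.

s, c = sp.symbols('s c')
a1, a2, a3, a4, a5, a6, a7, a8, a9 = sp.symbols('a1:10')

C5 = sp.expand((a1 + a2*s + a3*s**2)*(a4 + a7*s)*(a5 + a8*s)*(a6 + a9*s))
b = [C5.coeff(s, 5 - k) for k in range(6)]   # b0 ... b5

print(sp.factor(b[0]))               # -> a3*a7*a8*a9, as in (3.12)
print(sp.expand(b[5]))               # -> a1*a4*a5*a6

# impose the SU(5) condition b1 = 0 via the Ansatz of the text
subs = {a2: -c*(a6*a7*a8 + a5*a7*a9 + a4*a8*a9), a3: c*a7*a8*a9}
print(sp.simplify(b[1].subs(subs)))  # -> 0
```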
SU(5)×U(1)' in the spectral cover description Our aim is to examine SU(5)×U(1)' models and particularly the rôle of the non-universal U(1)', which should be consistently embedded in the covering group E_8. Clearly, the U(1)' symmetry should be a linear combination of the abelian factors residing in SU(5)_⊥. A convenient abelian basis in which to express the desired U(1)' emerges in the sequence of symmetry breaking displayed in (3.19); the Cartan generators corresponding to the four U(1)'s are then expressed as in (3.20). The monodromy t_1 ↔ t_2 imposed in the previous section eliminates the abelian factor corresponding to the generator Q_⊥ with t_1 = t_2. Then we are left with the three remaining SU(5)_⊥ generators given in (3.20). Next, we assume that a low energy U(1)' is generated by a linear combination of the unbroken U(1)'s,

Q' = c_1 Q_1 + c_2 Q_2 + c_3 Q_3 .  (3.22)

Regarding the coefficients c_1, c_2, c_3, the normalisation condition c_1^2 + c_2^2 + c_3^2 = 1 will be assumed, while further constraints will be imposed by applying anomaly cancellation conditions. The Flux mechanism We now turn to the symmetry breaking procedure. In F-theory, fluxes are used to generate the observed chirality of the massless spectrum. More precisely, we may consider two distinct classes of fluxes. Initially, a flux is introduced along a U(1)_⊥, and its geometric restriction on a specific matter curve Σ_{n_j} is parametrised by an integer. The chiralities of the SU(5) representations are then given by

#10_{t_i} - #10̄_{t_i} = M_i ,  #5̄_{t_i+t_j} - #5_{t_i+t_j} = m_j .

The integers M_i, m_j are subject to the chirality condition

Σ_i M_i = Σ_j m_j ,  (3.26)

which coincides with the SM anomaly conditions [48,49]. Next, a flux in the direction of the hypercharge, denoted F_Y, is turned on in order to break the SU(5) GUT down to the SM gauge group. This "hyperflux" is also responsible for the splitting of SU(5) representations. If integers N_{i,j} represent hyperfluxes piercing certain matter curves, then the combined effect of the two types of fluxes on the 10-plets and 5-plets is described by the splitting rules (3.27) and (3.28). We note in passing that, since the Higgs field is accommodated on a matter curve of type (3.28), an elegant solution to the doublet-triplet splitting problem is realised: imposing M_i = 0 the colour triplet is eliminated, while choosing N_i ≠ 0 we ensure the existence of massless doublets in the low energy spectrum. The U(1)_Y flux is subject to the conditions

F_Y · η = 0 ,  F_Y · c_1 = 0 ,

in order to avoid a heavy Green-Schwarz mass for the corresponding gauge boson. Furthermore, assuming F_Y · χ_i = N_i (with i = 7, 8, 9) and, correspondingly, F_Y · χ = N with N = N_7 + N_8 + N_9, we can find the effect of the hyperflux on each matter curve. While the m_i and M_j are subject to the constraint (3.26), the hyperflux integers N_{7,8,9} are related to the undetermined homologies χ_{7,8,9} and, as such, they are free parameters of the theory. The flux data and the SM content of each matter curve are presented in Table 3. The particle content of the matter curves arises from the decomposition of the 10 + 10̄ and 5 + 5̄ pairs which reside on the appropriate matter curves. The MSSM chiral fields arise from the decomposition of 10 and 5̄, and are denoted by Q, L, u^c, d^c, e^c. Depending on the choice of the flux parameters, it is also possible that some of their conjugate fields appear in the light spectrum (provided, of course, that there are only three chiral families in the effective theory). These conjugate fields arise from 10̄ and 5 and are denoted in Table 3 by Q̄, L̄, ū^c, d̄^c, ē^c. In the same table we have also included the charges under the remaining U(1)' symmetry. We observe that the charges are functions of the c_{1,2,3} coefficients, which can be computed by applying anomaly cancellation conditions. There are also singlet fields, defined in (3.6), which play an important rôle in the construction of realistic F-theory models. In the present framework these singlet states are characterised by the combinations ±(t_i - t_j), i ≠ j; therefore, due to the Z_2 monodromy, we end up with twelve singlets, denoted θ_ij. Their U(1)' charges and multiplicities are collectively presented in Table 4. Details of their rôle in the effective theory will be given in the subsequent sections.
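The flux bookkeeping just described can be illustrated with a few lines of code. The splitting rules below implement one common sign convention for the combined action of the U(1) flux integer M and the hyperflux integer N on a matter curve (conventions vary in the literature), and reproduce the doublet-triplet splitting argument above.

```python
# Sketch of the flux-splitting bookkeeping behind (3.27)-(3.28): given the
# restriction M of the U(1) flux and N of the hyperflux on a matter curve,
# the net chirality of each SM fragment follows. The sign conventions are
# one common choice; they vary in the literature.

def split_10(M, N):
    # 10 -> Q(3,2) + u^c(3bar,1) + e^c(1,1)
    return {"Q": M, "u^c": M - N, "e^c": M + N}

def split_5(M, N):
    # 5 -> D(3,1) + H(1,2)
    return {"D": M, "H": M + N}

print(split_10(1, 0))  # one full family from a 10 curve with M=1, N=0
print(split_5(0, 1))   # Higgs curve: M=0 removes the triplet, N!=0 keeps a doublet
```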
Table 4. The singlet fields and their weights (see text). Anomaly cancellation conditions In the previous sections we elaborated on the details of the F-SU(5) GUT supplemented by a flavour-dependent U(1)' extension, where this abelian factor is embedded in SU(5)_⊥ ⊂ E_8. Since the effective theory has to be renormalisable and ultraviolet complete, the U(1)' extension must be anomaly free. This requirement imposes significant restrictions on the U(1)' charges of the spectrum and, consequently, on the coefficients c_i defining the linear combination in (3.22). In this section we will work out the anomaly cancellation conditions to determine the appropriate linear combinations (3.22). This procedure will also specify all the possibly allowed U(1)' charge assignments of the zero-mode spectrum. Consequently, each such set of charges will correspond to a distinct low energy model which can give definite predictions to be confronted with the experimental data. Although the well known MSSM anomaly cancellation conditions coincide with the chirality condition (3.26) imposed by the fluxes, there are additional contributions to the gauge anomalies due to the extra U(1)' factor. In order to consistently incorporate the new abelian factor into the effective theory, the following six anomaly conditions should be considered: the mixed anomalies of U(1)' with SU(3)^2, SU(2)^2 and U(1)_Y^2, the mixed U(1)_Y-U(1)'^2 anomaly, the cubic U(1)'^3 anomaly and, finally, the gauge-gravity anomaly A_G (3.34). Using the data of Table 3, it is straightforward to compute the anomaly conditions. Solution Strategy The anomaly conditions displayed above are complicated functions of the c_i coefficients and the flux integers m_i, M_j and N_k. In order to solve for the c_i's we have to deal with the flux integers first. The precise determination of the spectrum in the present construction depends on the choice of these flux parameters. While there is relative freedom in the choice and the distribution of generations on the various matter curves, some phenomenological requirements may guide our choices. For example, the requirement of a tree-level top Yukawa coupling suggests that the top quark must be placed on the 10_1 matter curve (see Table 3) and the MSSM up-Higgs doublet on 5_1 since, due to the Z_2 monodromy, the only renormalisable top-like operator is 10_{t_1} 10_{t_1} 5_{-2t_1} ≡ 10_1 10_1 5_1. This suggests the conditions (3.35) on some of the flux integers. Furthermore, a solution to the doublet-triplet splitting problem implies the conditions (3.36). Additional conditions can be imposed by demanding certain properties of the effective model and a specific zero-mode spectrum. In what follows, we will split our search into two major directions: minimal models, which contain only the MSSM spectrum (no exotics), and models with vector-like pairs. Models with MSSM spectrum We start with the minimal scenario, where the models we are interested in have the MSSM spectrum accompanied only by pairs of conjugate singlet fields. In particular, the three chiral families of quarks and leptons of the MSSM spectrum are ensured by the chirality condition (3.26). On top of the conditions (3.35) and (3.36), we also require that H_u be the only MSSM state on the 5_1 matter curve, avoiding exotics this way. In addition, the absence of exotics necessarily implies further restrictions on the fluxes. The resulting flux solutions are collected in Table 5, and the spectrum of the corresponding models is presented in Table 6. We refer to this class of models as Class A. Note that the SM states of all the models above carry the same charges under the extra U(1)', and the models differ only in how the SM states are distributed among the various matter curves.
In all cases we expect similar low energy phenomenological implications. Solutions for the remaining forty-eight sets of fluxes arise if we relax the condition M_ij = M_ji and allow general multiplicities for the singlets. Scanning the parameter space, three new classes of consistent solutions emerge (named Class B, Class C and Class D). Some representative solutions from each class are shown in Table 7, while the corresponding models are presented in Table 8. We observe that, for all the models presented so far, one of the tenplets 10_2, 10_3, 10_4 acquires the same U(1)' charge as the 10_1 matter curve accommodating the top quark. Thus, at least one of the lightest left-handed quarks will have the same Q' charge as the top quark. In this case, the corresponding flavour processes associated with these two families are expected to be suppressed. Next, we will investigate some phenomenological aspects of the models presented so far. We first write down all the possible SU(5)×U(1)' invariant tree-level Yukawa terms: • Renormalisable top-Yukawa type operator: 10_1 10_1 5_1, which is the only tree-level top quark operator allowed by the t_i weights (see Tables 7, 8) thanks to the Z_2 monodromy. • Renormalisable bottom-type quark operators: depending on how the SM states are distributed among the various matter curves, tree-level bottom and/or R-parity violating (RPV) terms may exist in the models. Phenomenological Analysis Up to now we have sorted out a small number of phenomenologically viable models distinguished by their low energy predictions. In the remainder of this section we will focus on Model D9; the implications of the remaining models will be explored in the Appendix. Details for the fermion sectors of this model are given in Table 8, while the properties of the singlet sector can be found in Appendix B. In order to achieve realistic fermion hierarchies, we assume a particular distribution of the MSSM spectrum into the various matter curves, with the indices (1, 2, 3) on the SM states denoting the generation. Top Sector The dominant contributions to the up-type quarks descend from superpotential terms of the form

W ⊃ y_t 10_1 10_1 5_1 + ⋯ ,

where the y_i are coupling constants and Λ is a characteristic high energy scale of the theory suppressing the non-renormalisable terms. The operators yield a mass texture for the up quarks whose entries involve the combinations y_4 ϑ_13^2 + y_6 ϑ_13 ϑ_15 ϑ_53, y_3 ϑ_13 ϑ_15, y_1 ϑ_13 + y_5 ϑ_15 ϑ_53 and y_2 ϑ_15, with the dominant entry set by the tree-level coupling y_t and its local deformations εy_t. Here v_u = ⟨H_u⟩, ϑ_ij = ⟨θ_ij⟩/Λ, and ε ≪ 1 is a suppression factor introduced to capture local effects of Yukawa couplings descending from a common tree-level operator [50,51,52]. The matrix has the appropriate structure to explain the hierarchy in the top sector. Charged Lepton Sector In the present construction, when flux pierces the various matter curves, the SM generations are distributed on different matter curves. As a consequence, in general, the down-type quark and charged lepton sectors emerge from different couplings. In the present model the operators common to the bottom and charged lepton sectors are those given in (4.8), with couplings κ_5, κ_7, κ_10 and κ_12. All the other contributions descend from operators involving the tree-level Yukawa coefficient y_τ, coupling constants λ_i, and a factor η ≪ 1 which encodes local tree-level Yukawa coupling effects. Collectively, these terms determine the mass texture for the charged leptons of the model. The µ-term The bilinear term 5_1 5̄_2 is not invariant under the extra U(1)' symmetry.
However, the µ-term appears dynamically through the renormalisable operator κ θ_13 5_1 5̄_2. There are no constraints imposed on the VEV of the singlet field θ_13; thus, a proper tuning of the values of κ and ⟨θ_13⟩ can lead to an acceptable µ-parameter, µ ∼ O(TeV). As a result, the θ_13 singlet, which also contributes to the quark and charged lepton sectors, must receive a VEV at some energy scale close to the TeV region. We also note that some of the singlet fields couple to the left-handed neutrinos and, in principle, can play the rôle of their right-handed partners. In particular, as suggested in [6], the six-dimensional massive KK-modes corresponding to the neutral singlets identified by the Z_2 symmetry, θ_12 ≡ θ_21, are the most appropriate fields to be identified as θ_12 → ν^c and θ_21 → ν̄^c, so that a Majorana mass term M_N ν^c ν̄^c is possible. We will not elaborate on this issue any further; some related phenomenological analysis can be found in [53]. CKM matrix The squares of the fermion mass matrices obtained so far can be diagonalised via the unitary matrices V_{f_L}. The various coupling constants and VEVs can be fitted so that the diagonalised mass matrices satisfy the appropriate mass relations at the GUT scale. In our analysis we use the RGE results for a large tan β = v_u/v_d scenario produced in Ref. [54]. In addition, the combination V_{u_L} V_{d_L}^† must resemble the CKM matrix as closely as possible. The singlet VEVs ϑ_ij are then fitted accordingly. It is clear that the CKM matrix is mostly influenced by the bottom sector, while V_{u_L} is almost diagonal and unimodular. Next, we compute the unitary matrix V_{e_L} which diagonalises the charged lepton mass matrix. The correct Yukawa relations and the charged lepton mass spectrum are obtained for a parameter choice in which λ_1 = 0.4, λ_2 = λ_3 = 1, η = 10^-4 and y_τ = 0.51. R-parity violating terms In the model under discussion, several tree-level as well as bilinear operators leading to R-parity violating (RPV) effects remain invariant under all the symmetries of the theory; these tree-level operators violate both lepton and baryon number. Notice, however, the absence of u^c d^c d^c type RPV terms, which in combination with QLd^c terms could spoil the stability of the proton. There also exist bilinear RPV terms descending from tree-level operators. The effect of these terms strongly depends on the dynamics of the singlets; however, it would be desirable to eliminate such operators completely. One can impose an R-symmetry by hand [47] or investigate the geometric origin of discrete Z_N symmetries that can eliminate such operators [55]-[58]. In addition, the study of such Yukawa coefficients at the local level shows that they can be suppressed for wide regions of the flux parameter space [59]. Since in this work we focus mostly on Z' flavour-changing effects, we will assume that one of the aforementioned mechanisms protects the models from unwanted RPV terms. Z' bounds for Model D9 Having obtained the V_f matrices for the top/bottom quark and charged lepton sectors, it is now straightforward to compute the flavour mixing matrices Q^f_L defined in (2.6). These matrices, along with the Z' mass M_Z' and gauge coupling g', enter the computation of the various flavour violating observables described in Section 2. Hence, we can use the constraints on these observables in order to derive bounds on the Z' mass and gauge coupling or, more precisely, on the ratio g'/M_Z'.
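Returning to the diagonalisation step above, the sketch below builds toy hierarchical up- and down-type textures (illustrative numbers, not the fitted values of the model), extracts the left rotations from the hermitian squares, and forms the CKM combination in the convention of (2.4).

```python
import numpy as np

# Toy diagonalisation: the columns of U_L diagonalise M^dagger M, and
# V_CKM = V_uL V_dL^dagger = U_uL^dagger U_dL in the convention of (2.4).

def left_basis(M):
    # eigh returns ascending eigenvalues: columns ordered light -> heavy
    w, U = np.linalg.eigh(M.conj().T @ M)
    return U

eps = 0.02                                 # toy suppression parameter
Mu = np.array([[eps**4, 0,       0     ],
               [0,      eps**2,  eps**3],
               [0,      eps**3,  1.0   ]])        # nearly diagonal up sector
Md = 0.02*np.array([[eps**2, eps**2, eps**2],
                    [eps**2, eps,    eps   ],
                    [eps**2, eps,    1.0   ]])    # hierarchical down sector

Uu, Ud = left_basis(Mu), left_basis(Md)
Vckm = Uu.conj().T @ Ud
print(np.round(np.abs(Vckm), 3))           # hierarchical, CKM-like pattern
```

As in the text, the mixing is dominated by the down sector, since the up-type rotation is nearly the identity.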
In any case, the so derived bounds must be in accordance with the LHC bounds coming from dilepton and diquark channels [65,66,67]. For heavy Z' searches, the LHC bounds on neutral gauge boson masses are strongly model dependent; for most of the GUT inspired Z' models, masses around ∼ 2-3 TeV are excluded. In the model at hand, we have seen that the lightest generations of the left-handed quarks have different U(1)' charges. Consequently, strong constraints on the Z' mass are expected to come from the K^0-K̄^0 mixing bounds. Hence, we first start from the K^0-K̄^0 system. Using eq. (2.10) we compute the Z' contribution to the kaon oscillation mass splitting; the results are plotted in Figure 4. As expected, the kaon system puts strong bounds on M_Z': to get an estimate, for g' ≃ 0.5 the constraint in (2.15) implies that M_Z' ≳ 120 TeV, which lies far above the most recent collider searches. Then, for Γ_D = 1/τ_D ≃ 2.43843 ps^-1 [37], we can also evaluate the corresponding contribution to x_D. Muon anomalous magnetic moment and µ → eγ Our results imply that the Z' contributions to ∆a_µ are always smaller than the observed discrepancy. Even in the limiting case where g' = 1 and M_Z' = 1 TeV, our computations return ∆a^Z'_µ ≃ 3 × 10^-11. This shows that not even for small Z' masses can the model explain the observed (g - 2)_µ anomaly, and for the larger M_Z' values implied by the kaon system the results are further suppressed. For LFV radiative decays of the form l_i → l_j γ, the strongest bounds are expected from the muon channel. For g' = 1, the present model requires M_Z' ≳ 1.3 TeV if the predicted µ → eγ branching ratio is to satisfy the experimental bounds. Tau decays (τ → eγ, τ → µγ) are well suppressed, due to the short lifetime of the tau lepton. While all the three-body lepton decays of the form l_i → l_j l_j l̄_k are suppressed for the tau channel, strong constraints are obtained from the muon decay µ^- → e^- e^- e^+. The model's prediction is compared with the experimental bounds in Figure 5. We observe that for g' = 0.5 (dashed line in the plot) we need M_Z' ≳ 42 TeV in order for the model to satisfy the current experimental bound, Br(µ^- → e^- e^- e^+) < 10^-12. While the constraints coming from this decay are stronger than those from the other lepton flavour violating processes discussed so far, they are still weaker than the restrictions descending from the kaon system. However, important progress is expected from future lepton flavour violation experiments [69]. In particular, the Mu3e experiment at PSI [70] aims to improve the experimental sensitivity to ∼ 10^-16. In the absence of a signal, three-body LFV muon decays can then be excluded down to Br(µ^- → e^- e^- e^+) < 10^-16. In Figure 5 the red horizontal line represents the estimated reach of the future µ → 3e experiments. For g' = 0.5 we find that M_Z' ≳ 420 TeV is required if the predicted branching ratio is to satisfy the foreseen Mu3e experimental bounds. Hence, for the present model, the currently dominant bounds from the kaon system will be exceeded in the near future by the limits of the upcoming µ^- → e^- e^- e^+ experiments. R_K anomalies The bounds derived from the kaon oscillation system and the three-body decay µ^- → e^- e^- e^+ leave no room for a possible explanation of the observed R_K anomalies. Indeed, the model's prediction for the relevant Wilson coefficient has the desired sign (C_9 < 0), but for M_Z' ∼ 200 TeV and g' ≲ 1 the resulting value is too small to explain the observed B-meson anomalies.
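The translation of such constraints into M_Z' bounds is a simple scaling exercise, sketched below. The normalisation constant is a placeholder calibrated to the quoted estimate (M_Z' ≳ 120 TeV at g' ≃ 0.5); it stands in for the meson mass, decay constant and mixing-matrix factors of eq. (2.10), and the assumed NP fraction of ∆M_K is illustrative.

```python
import numpy as np

# If the Z' contribution scales as dM_NP = C*(g'/M_Z')**2, the requirement
# dM_NP <= dM_max inverts to a lower bound M_Z' >= g'*sqrt(C/dM_max).

dM_K_exp = 3.484e-15                 # GeV, measured Delta M_K
dM_max = 0.2 * dM_K_exp              # assumed room left for NP (illustrative)
C = dM_max * (120e3 / 0.5)**2        # placeholder, calibrated to the text

for gp in (0.1, 0.5, 1.0):
    mzp_min = gp * np.sqrt(C / dM_max) / 1e3   # in TeV
    print(f"g' = {gp:4.1f}  ->  M_Z' >= {mzp_min:6.0f} TeV")
```

The bound scales linearly with g', which is why quoting it at a reference coupling (here 0.5) suffices.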
Similar phenomenological analyses have been performed for all the other models presented so far; a discussion of their flavour violation bounds is given in Appendix C. Collectively, the results are very similar to those of Model D9. For all the U(1)' models with MSSM spectrum, the dominant bounds on M_Z' come from K^0-K̄^0 oscillation effects and the muon decay µ → e^- e^- e^+. It is clear from the analysis so far that a successful explanation of the LHCb anomalies in the present F-theory framework requires some other type of mechanism. A common approach is the explanation of the LHCb anomalies through the mixing of the conventional SM matter with extra vector-like fermions [71]-[79]. Next, we present such an F-theory model, while a full classification of the various F-theory models with a complete family of vector-like fermions will be presented in a future work. Models with vector-like exotics We expand our analysis to models with the MSSM spectrum plus vector-like (VL) states forming complete (10 + 10̄), (5 + 5̄) pairs under the SU(5) GUT symmetry. Hence, as in the previous study, we choose appropriate fluxes, solve the anomaly cancellation conditions, and derive the U(1)' charges of all the models with additional vector-like families. Among the various models, particular attention is paid to models with different U(1)' charges for the VL states, while keeping the U(1)' charges of the SM fermion families universal. In this way one can explain the observed B-meson anomalies through the mixing of the SM fermions with the VL exotics, while at the same time controlling the other flavour violation observables. A model with these properties (first derived in [17]) is materialised with a suitable set of fluxes. The various mass terms can be written in a 5×5 notation as F̄_R M_F F_L, where F_R comprises the three chiral antifermions f^c_i together with the vector-like states, and F_L the three f_i together with their vector-like partners, with f = u, d, e and F = U, D, E. We will focus on the down-type quark sector; the up-quark sector can be treated similarly, and the parameters can be adjusted in such a way that the CKM mixing is ensured. The various invariant operators yield a mass matrix whose entries involve coupling constants k_i and small parameters ε, ξ encoding local Yukawa effects. Here we denote the singlet VEVs simply as θ_ij ≡ ⟨θ_ij⟩, while ϑ_ij represents the ratio ⟨θ_ij⟩/Λ. In order to simplify the matrix, we take some terms to be very small and approximately vanishing; in particular, we assume k_2 = k_3 = k_5 θ_51 = k_6 = k_7 ϑ_14 ϑ_53 ≈ 0. Moreover, we introduce simplifications in which the mass parameter M characterises the VL scale, while m = k ϑ_54 v_d is related to the low energy EW scale; we have also assumed that the small Yukawa parameters are identical, ε ≈ ξ. With these modifications the matrix takes a simplified form in which the local Yukawa parameter ξ connects the VL sector with the physics at the EW scale, so we will use this small parameter to express the mixing between the two sectors. We proceed by perturbatively diagonalising the down-type mass-squared matrix M^2_d, using ξ as the expansion parameter. Setting k_1 ≈ 0, γ v_d = cµ and keeping terms up to O(ξ), we write the mass-squared matrix in the form M^2_d ≈ A + ξ B. The block-diagonal matrix A is the leading order part of the mass-squared matrix and can be diagonalised analytically; its eigenvalues are x_{1,2,3}, corresponding to the mass squares of the three down-type quark generations d_{1,2,3} respectively (a toy numerical illustration of the resulting mixing pattern is sketched below).
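Before continuing with the perturbative treatment, a toy illustration of the mechanism may be useful: with universal U(1)' charges for the three SM families and a different charge for the vector-like state, small 2-3-family/VL mixings (playing the role of the ξ-corrections to V_{b_L}) generate exactly the b → s coupling needed for C_9, while the 1-2 block remains universal. All numbers below are illustrative assumptions.

```python
import numpy as np

# Toy version of the section-5 mechanism: Q_{1,2,3} = 1/4 universal, plus one
# vector-like state with Q_4 = -1/2. Mixing with the VL state feeds its
# charge into the 2-3 block while the s-d entry stays exactly zero.

def rot(i, j, th, n=4):
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = np.sin(th), -np.sin(th)
    return R

q = np.diag([0.25, 0.25, 0.25, -0.5])     # universal SM charges + VL charge
V = rot(1, 3, 0.10) @ rot(2, 3, 0.20)     # small s-VL and b-VL mixings ~ xi
Q = V @ q @ V.T                           # mass-basis couplings

print(np.round(Q, 4))
# Q[0,1] = 0: no tree-level s-d coupling, so K0-K0bar mixing stays safe,
# while Q[1,2] != 0 drives the b -> s transition entering C9, cf. eq. (5.7).
```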
At this stage we ignore the small mass of the first-generation down quark, which can be generated by higher-order corrections. For the second and third generations we observe that the ratio $x_2/x_3$ depends only on the parameter α. Hence, from the known ratio $m_s/m_b$ we estimate that α ∼ 10⁻². The corresponding normalised eigenvectors, which form the columns of the diagonalising matrix, involve the quantity $q = 1 - m^2/x_2$, which depends only on the parameter α, since $x_2 \sim m^2$. The corrections to these eigenvectors due to the perturbative part ξB follow from standard first-order perturbation theory, in which the O(ξ) term corrects the basic eigenvectors of the leading-order matrix A. The corrected diagonalising matrices schematically take the form $V_{b_L} = V^0_{b_L} + \xi V^1_{b_L}$, and through them the mixing parameter ξ enters the computation of the various flavour violation observables. For the explanation of the LHCb anomalies we will consider that the perturbative corrections are important for the corresponding b–s coupling while they almost vanish for the other flavour-mixing coefficients. In this way, due to the universal U(1)' charges of the SM matter, most of the flavour violation processes are suppressed. Assuming that the corresponding lepton contribution is $(Q^e_L)_{22} \approx 1$, and for α = 0.016, we find a b → s transition matrix element in which $Q_{1,2,3} = 1/4$ is the common charge of the MSSM fermions and $Q_4 = -1/2$ is the charge of the extra matter descending from the $10_2$ matter curve. Note that the corresponding U(1)' charge of the states descending from the $5_4$ matter curve is zero, and consequently these states do not contribute to this matrix element. It is clear from equation (5.7) that the first term is dominant, since the second one is suppressed by the large VL mass scale characterised by the parameter M. Hence, keeping only the first term, equation (2.9) yields the Wilson coefficient: for g' ∼ 1, M_Z' ∼ 4 TeV and ξ² ∼ O(10⁻¹) it predicts C_9 ≈ −1, which is the desired value for the explanation of the LHCb anomalies. We emphasise that this approach is valid in the small-ξ regime (ξ < 1); if ξ is large, perturbation theory breaks down and a more general treatment is required.

Conclusions

In the present work we have examined the low-energy implications of F-theory SU(5) × U(1)' GUT models embedded in SU(5) × SU(5)⊥ ⊃ SU(5) × U(1)⁴⊥. This gauge symmetry emerges naturally from a single point of E_8 enhancement, associated with the maximal geometric singularity appearing in the elliptic fibration of the internal manifold. In order to ensure realistic fermion mass textures and a tree-level top-quark Yukawa coupling, we have imposed a Z_2 monodromy group which acts on the geometric configuration of 7-branes and identifies two of the four abelian factors descending from the SU(5)⊥ reduction. The U(1)' symmetry of the effective field theory models derived this way is a linear combination of the three remaining abelian symmetries descending from SU(5)⊥. Imposing anomaly cancellation conditions, we have constructed all possible U(1)' combinations and found, as a generic property, the appearance of non-universal Z'-couplings to the three families of quarks and leptons. Introducing fluxes consistent with the anomaly cancellation conditions, and letting the various neutral singlet fields acquire non-zero VEVs, we obtained various effective models distinguished from each other by their different low-energy spectra. We have focused on viable classes of models derived in this framework.
We have investigated the predictions for flavour-changing currents and other processes mediated by the Z' neutral gauge boson associated with the U(1)' symmetry, which is assumed to break at some low-energy scale. Using the bounds on such processes coming from current investigations at the LHC and other related experiments, we converted them into lower bounds on various parameters of the effective theory, in particular the Z' mass. The present work provides a comprehensive classification of semi-local effective F-theory constructions reproducing the MSSM spectrum either with or without vector-like fields. On the phenomenological side, the focus is mainly on explorations of models with the MSSM fields accompanied by several neutral singlets. Fifty-four (54) MSSM models have been obtained and are classified with respect to their predictions for the U(1)' charges of the MSSM matter content. In most of these cases, the U(1)' couples non-universally to the first two fermion families, and consequently the K0−K̄0 oscillation system places the strongest bound on the Z' mass. As such, assuming reasonable values of the U(1)' gauge coupling g', we obtain M_Z' bounds of a few hundred TeV, well above the most recent LHC searches. In other cases, various flavour violation processes are predicted that can be tested in ongoing or future experiments. The dominant process mediated by the Z' is the lepton flavour violating µ → eee decay, whilst the associated rare reaction µ → eγ remains highly suppressed. Future experiments designed to probe the lepton flavour violating process µ → eee are expected to improve their sensitivity by about four orders of magnitude compared to the recent bounds. In this case, the models analysed in the present work offer a natural framework for interpreting a positive experimental outcome. Even in the absence of any signal, the foreseen bounds from µ → eee searches will be comparable with, if not dominant over, the current bounds obtained in our models from neutral kaon oscillation effects. On the other hand, we have seen that models with a non-universally coupled Z' but only the MSSM spectrum cannot account for the recently observed LHCb B-meson anomalies. All the same, our classification includes a class of models with vector-like families and non-trivial Z'-couplings which can account for such effects. These models display universal Z'-couplings to the first two families, with negligible contributions to K0−K̄0 oscillations. Their main feature is that the U(1)' charges of the vector-like fields differ from those of the first two generations, inducing in this way non-trivial mixing effects. As an example, we briefly described such a model, which includes a complete family of vector-like fields, in which the observed LHCb B-meson anomalies can be explained through the mixing of the extra fermions with the three generations of the SM. A detailed investigation of the whole class of these models will be presented in a future publication.

Appendices

A Anomaly Conditions: Analytic expressions

Up to overall factors, our computations give the following results. For the mixed $A_{Y11}$ anomaly we have:

The U(1)'–gravity anomaly yields the following expression:

The pure cubic U(1)' anomaly is:

The last terms in (A.3) and (A.4) represent the contribution from the singlets.

B List of models

In this appendix, all the flux solutions subject to the MSSM spectrum criteria, the corresponding U(1)' charges, and details of the singlet spectrum are presented.
For each $c_i$-solution presented, a similar solution with $c_i \to -c_i$ is also predicted by the solution of the anomaly cancellation conditions. Hence, models with charges subject to Q → −Q also exist. As mentioned in the main text, there are fifty-four solutions that fall into four classes of models: Classes A, B, C and D.

Class A

This class consists of six models. The flux-data solutions, along with the resulting $c_i$-coefficients, have been presented in Table 5 of the main text. The corresponding models defined by these solutions, along with their U(1)' charges, are given in Table 6. Here we present only the singlet spectrum for this class of models. As already discussed, in this particular class of models the singlets come in pairs, meaning that $M_{ij} = M_{ji}$. Hence, a minimal singlet spectrum scenario implies that $M_{ij} = M_{ji} = 1$. The singlet charges $Q_{ij}$ for each model are given in Table 9, below.

Class B

This class of models consists of twenty-four solutions. All the relevant data characterising the models are organized in three tables. In particular, Table 10 contains the flux data of the models along with the corresponding $c_i$-solutions, as extracted from the solution of the anomaly cancellation conditions. In Table 11, the U(1)' charges of the matter curves are given. Finally, details of the singlet spectrum are presented in Table 12.

Class C

Twelve models define this class. Gauge anomaly cancellation solutions are given in Table 13, while the corresponding matter-curve U(1)' charges are listed in Table 14. The properties of the singlet spectrum are described in Table 15.

Class D

This class contains twelve models. Flux data, along with the corresponding solutions for the $c_i$-coefficients, are given in Table 16. The U(1)' charges are listed in Table 17, while the properties (multiplicities and $Q_{ij}$ charges) of the singlet spectrum are described in Table 18. The phenomenological analysis of Model D9 was presented in the main body of the present text. Regarding the singlet sector of the models, their superpotential can be written as:

C Flavour violation bounds for the various models

In the main text we analysed in detail the low-energy implications of Model D9. A similar phenomenological analysis has been performed for all the MSSM-spectrum models discussed so far. Due to the large number of models, we do not present the analysis for each model in detail. Here we discuss the main flavour violation results for the four classes of MSSM models presented in the previous sections. Models of the same class share common U(1)' properties, and consequently their phenomenological analysis is very similar. Next, we discuss the basic flavour violation bounds for each class of models. The main results are collectively presented in Table 19.

Class A: The six models that comprise Class A have very similar U(1)' charges. More specifically, only two values are allowed for the |Q'| charges: 0 and 1/2. Matter fields descending from the SU(5) ten-plets have zero charge, and as a result the corresponding flavour violation processes are very suppressed. The Q' charges are (semi-)non-universal in the lepton sector, but again the corresponding LFV processes are well suppressed in comparison with the experimental results. In summary, flavour violation processes in Class A models appear to be suppressed, and consequently M_Z' bounds cannot be extracted for this class of models.
Class B: Of the twenty-four models in this class, eighteen have been analysed in detail; the models B4, B5, B8, B13, B15 and B16 predict unrealistic mass hierarchies and as a result have been excluded from further analysis. For the remaining realistic models, the dominant constraints descend from the kaon oscillation system. Approximately, the Z' contribution to the K0−K̄0 mass split is:

Class C: Due to the flux integers which characterise this class of models (see Table 13), all the matter fields descending from the SU(5) ten-plets have the same U(1)' charges, and as a result the corresponding flavour violation processes (such as semi-leptonic meson decays and meson mixing effects) are suppressed. However, in the lepton sector the U(1)' charges are non-universal, leading to lepton flavour violation phenomena at low energies. The dominant constraint descends from the three-body decay µ⁻ → e⁻e⁻e⁺. Approximately, for all the C-models we find that the Z' contribution to the branching ratio of the decay is

$$\mathrm{Br}(\mu^- \to e^- e^- e^+) \simeq 7.2 \times 10^{-6} \left( \frac{g'\,\mathrm{TeV}}{M_{Z'}} \right)^{4},$$

which, compared to the current experimental bound, implies that M_Z' ≳ (51.8 × g') TeV, where g' is the U(1)' gauge coupling. In the absence of any signal in future µ⁻ → e⁻e⁻e⁺ searches, this bound is expected to increase by one order of magnitude: M_Z' ≳ (518 × g') TeV.

Class D: In this class of models the dominant constraints descend from the kaon system. In some cases, strong bounds will be placed by future µ⁻ → e⁻e⁻e⁺ searches. In particular, for the models D1, D2, D5, D6, D8 and D10, the constraint from the Z' contribution to the K0−K̄0 mass split is:

Table 19: Dominant flavour violation process for each model, along with the corresponding bounds on the mass of the flavour-mixing Z' boson.
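The Class C numbers quoted above can be checked by inverting the branching-ratio formula against a limit Br_max, which gives M_Z' > g' × (7.2 × 10⁻⁶/Br_max)^{1/4} TeV. The snippet below is a simple numerical check of that inversion, not code from the paper.

```python
# Numerical check: invert Br ~ 7.2e-6 * (g' TeV / M_Z')^4 against a
# branching-ratio limit Br_max to get the lower bound on M_Z'.
def mzp_bound_tev(gp, br_max, prefactor=7.2e-6):
    return gp * (prefactor / br_max) ** 0.25

print(mzp_bound_tev(1.0, 1e-12))  # ~51.8 TeV, matching the current bound
print(mzp_bound_tev(1.0, 1e-16))  # ~518 TeV, matching the foreseen Mu3e reach
```

Since the bound scales as Br_max^{-1/4}, a four-order-of-magnitude gain in experimental sensitivity strengthens the M_Z' bound by exactly one order of magnitude, as stated in the text.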
Participatory development of MIDY (Mobile Intervention for Drinking in Young people)

Background: There are few effective strategies that respond to the widespread practice of risky single-occasion drinking in young people. Brief interventions, which involve screening of alcohol consumption and personalised feedback, have shown some efficacy in reducing alcohol consumption, but are typically delivered in clinical settings. Mobile phones can be used to reach large populations instantaneously, both for data collection and intervention, but this has not been studied in combination during risky drinking events.

Methods: Our study investigated the feasibility and acceptability of a mobile-phone delivered Ecological Momentary Assessment (EMA) and brief intervention for young people during drinking events. Our participatory design involved development workshops, intervention testing and evaluation with 40 young people in Melbourne, Australia. The final intervention included text message prompts to fill in mobile-based questionnaires, which measured drinks consumed, spending, location and mood, with additional questions in the initial and final questionnaires relating to plans, priorities, and adverse events. Participants received a tailored feedback SMS related to their drinking after each hourly questionnaire. The intervention was tested on a single drinking occasion. Prompts were sent between 6 pm and 2 am during a drinking event, with one follow-up at 12 pm the following day.

Results: Participants reported being comfortable with hourly mobile data collection and intervention during social occasions, and found the level of intrusion acceptable; we achieved an 89 % response rate on the single occasion of testing. Participants were proactive in suggesting additional questions that would assist in the tailoring of feedback content, despite the added time burden. While we did not test the effectiveness of the intervention, participants reported value in the tracking and feedback process, with many stating that they would normally not be aware of how much alcohol they consumed in a night.

Conclusions: Findings suggest that the intervention was considered acceptable, feasible and novel by our participants; it now requires comprehensive testing and evaluation.

Background

In Australia, alcohol consumption is a significant public health issue; Risky Single Occasion Drinking (RSOD) (also known as binge drinking) is widespread and particularly concerning. More than one in seven deaths and one in five hospitalisations among young people are attributed to alcohol consumption, largely related to RSOD rather than long-term heavy consumption [1]. RSOD is associated with a plethora of harms including physical and sexual violence, suicide, risky sexual behaviour, as well as both short- and long-term brain impairment and cognitive deficits [1][2][3][4]. RSOD is common in Australia and persists beyond adolescence, with more than 66 % of 18- to 24-year-olds and 64 % of 25- to 29-year-old Australians reporting such drinking within the past year [5]. Thus far, researchers have identified few strategies that effectively reduce harmful drinking [6]. Education and information provision are historically popular for their visibility, reach and low cost, but strategies such as mass-media campaigns, health warnings and school-based programs have limited effect [6][7][8].
Clinical interventions, including brief screening and tailored feedback delivered as 'brief interventions', have strong and growing evidence of efficacy for reducing drinking [9,10]. Brief interventions are based on 'Motivational Interviewing' techniques, which approach the behaviour change process with empathy, a focus on understanding a patient's motivations for change, and a goal of empowerment [11]. A brief intervention for reducing alcohol consumption would involve an assessment of drinking patterns that is then used to inform tailored advice and feedback on the behavioural and physiological effects of alcohol, risk of harm, and financial costs of alcohol consumption [12]. Brief interventions have traditionally been conducted in individual sessions in clinical settings such as hospitals [13], primary health care [14] and within substance disorder treatment contexts [15]. More recently, brief interventions involving face-to-face contact have been shown to reduce alcohol consumption in college and university students [16][17][18]; however, this mode of delivery is resource intensive and has limited reach. Alternative delivery methods are therefore needed to apply brief interventions to broader populations in the community. Mobile phones offer a new method for reaching populations with health interventions. In Australia, 89 % of the adult population owns a smartphone and uses it regularly for SMS and internet access [19]. These phones are a viable option for intervention delivery, with previous researchers reporting success in positively influencing sexual health, tobacco cessation, physical activity and healthy eating [20,21]. Much of the available literature has utilised simple, one-way, untailored message dissemination, while brief interventions to reduce alcohol consumption require an assessment of current drinking behaviours. Therefore a suitable method of data collection is required if brief interventions are to be applied on a larger scale via mobile phone during drinking events. Ecological Momentary Assessment (EMA) involves repeated observation of self-reported behaviour, in a participant's natural environment, and permits collection of data during alcohol consumption events [22][23][24]. Real-time assessments of drinking impose low cognitive demand and reduce the recall bias seen in retrospective reporting of alcohol consumption [24]. Several researchers have successfully implemented EMA using a mobile phone platform. Kuntsche and Labhart [23] used EMA in a study-specific smartphone application to record the alcohol consumption of young Swiss people, with SMS reminders sent throughout the drinking event. Riordan, Scarf and Conner [25] used SMS to collect alcohol consumption data nightly throughout orientation week for university students in New Zealand. Suffoletto et al. used SMS to collect weekly drinking intention (prior to drinking) and recalled drinking data from young people, with tailored feedback sent in response [26][27][28]. However, we could find no studies that have examined whether EMA during a drinking event could underpin the delivery of an immediate brief intervention. The combination of EMA and brief interventions has further potential benefits in the timing of intervention delivery. Brief interventions are generally undertaken outside of drinking events, targeting overall alcohol consumption.
However, drinking is a contextually bound behaviour [24], and few studies have attempted to intervene during risk events, where the personalised feedback that characterises many brief interventions could be relevant and timely. While this combination shows promise in concept, it is not known whether it is feasible and acceptable to combine data collection and brief interventions during drinking events.

Aim of study

To investigate the feasibility and acceptability of mobile phone-delivered data collection and intervention for young people during drinking events.

Methods

The study was approved by the Monash University Human Research Ethics Committee. The consolidated criteria for reporting qualitative research [29] guided the research to improve rigour and transparency in reporting.

Study design

We employed a mixed-methods participatory design involving three stages of data collection. Firstly, focus group-style development workshops were held to explore an initial intervention design and inform the creation of intervention content. The proposed intervention was then redesigned and refined on the basis of these workshops. Secondly, these same participants tested the intervention during a regular night out on which they planned to drink. Finally, we evaluated the intervention using a mobile survey and in-depth interviews to canvass participants' opinions of the intervention. In this paper we focus on design factors related to feasibility and acceptability. Data pertaining to the development of message content within this study will be the subject of a future publication. The research was conducted in metropolitan Melbourne, Australia, with all interviews and focus groups/workshops occurring at the authors' institution. Recruitment and workshops were completed in June 2014. Pilot testing of the intervention occurred between November and December 2014, with follow-ups occurring approximately one week after testing. The research team comprised qualified experts in health promotion, interventions using new technology, and alcohol consumption, including specific expertise in the young adult population group. All team members were involved in the development and refinement of data collection instruments throughout the study. The researcher responsible for conducting interviews and focus groups has training and experience in qualitative methodology. A senior team member with extensive experience in qualitative research and participatory methods reviewed transcripts to verify findings in the analysis stage.

Study population and recruitment

Participants were aged between 18 and 25 years, owners of smartphones, and self-reported 'regular' consumers of alcohol (drinking at least once per week on average). No further inclusion/exclusion criteria applied. Two methods of recruitment were utilised to generate a sample: firstly, 64 young people who had completed a questionnaire at a music festival [30] and indicated an interest in participating in other studies were sent a text message with brief details and an invitation to contact the primary researcher for more information; 11 participants were recruited through this method. A further 37 participants were recruited through advertising placed at universities and through other community organisations working primarily with young people, as well as on social media. Six interested participants withdrew prior to the research commencing, primarily due to being unavailable at the four scheduled sessions.
The final sample of 42 young people, all of whom attended the development workshops, included 21 men and 21 women. Contact with participants outside of the workshops/interviews was exclusively electronic, with the majority occurring via text message, in addition to an electronic poll used to indicate availability for the workshop, intervention testing, and follow-up. Participants could also contact the researcher via phone call or email, but none did so. Participants received a cash reimbursement of AUD$150 for their participation; this compensated for the use of their phone data in the trial as well as their time. Reimbursement was not dependent on completing SMS assessments, and participants were free to withdraw at any point. Written informed consent was obtained from all participants. Of the 42 young people, 40 were retained throughout all stages of the study, with two participants attending a workshop but not completing the intervention testing or follow-up. One was lost to follow-up and another moved overseas, resulting in a retention rate of 95 % across the three stages of the study. Participants were predominantly Caucasian (82.5 %). Most (81 %) participants were students, of whom 79 % were undergraduate university students, with the remainder postgraduates (18 %) or vocational/technical college students (3 %). In terms of highest level of completed education, around two-thirds (63 %) had completed high school, 2 % had completed a vocational/technical course and the remainder (34 %) had completed an undergraduate degree.

Development workshops

Participants were split into four workshop sessions scheduled according to their availability. Between seven and 12 participants attended each session, each facilitated by one researcher. Each session ran for approximately three hours and was structured to include a focus group-style discussion of the proposed research design; a media analysis component in which participants discussed and evaluated various styles of alcohol messages used in previous anti-alcohol campaigns and interventions; and a design session in which participants were broken into groups of three or four and given printed materials to help them design their optimal versions of the research and message content. Participants were informed that the study aimed to design and test an intervention to reduce alcohol consumption in young people through the repeated collection of alcohol consumption and contextual information (with the example given of location) via mobile phone during their night out, followed by tailored SMS feedback in response to each round of data collection. Participants were asked to express ideas and opinions on acceptability, feasibility, preferred data collection methods (e.g. sending data directly via text, SMS with web-survey link, or smartphone application), question content and wording, foreseeable barriers, optimal timing and frequency of data collection, and alcohol-related health messaging. We asked participants to generate ideas for question content to inform the research team as to how best to tailor the feedback to reduce alcohol consumption for themselves and others of their age. In addition to the focus group-style discussion, each participant was given the opportunity to write down opinions on the optimal timing, frequency, platform and content of the intervention.
All sessions (including the design sessions in smaller groups) were recorded digitally and transcribed verbatim; thematic analysis of transcripts and documents (see below) began after the first workshop and was used to inform probing questions in subsequent sessions.

Intervention refinement and content development

Following the completion of the workshops and thematic analysis, the full research team decided how to implement design changes and develop message content using theoretical frameworks. The process of redesigning the data collection and intervention included negotiation of practical and logistical considerations and the incorporation of new ideas generated through the workshops to ensure feasibility and acceptability. The message content was developed into a matrix of messages classified according to appropriateness for location, gender, stage of night and variables collected from the EMA; classifications were informed by the workshops.

Intervention testing

Testing occurred approximately four months after the development workshops, on nights selected by the participants. Informed by stage one, the data collection involved a mix of text-message and mobile-compatible web questionnaires. The intervention involved participants nominating one single night within a two-week period on which they had social plans and were likely to consume alcohol. When scheduling their test night, participants pre-nominated what time they wished the surveys to begin, with most opting to complete the pre-survey at 7 pm. From their nominated start time, they were sent a link via SMS to the first mobile questionnaire, which collected contextual data on plans for the evening, goal-setting (number of drinks and spending), whether they had eaten, mood, motivations for drinking less (e.g. health/fitness, avoiding a hangover, spending too much, not waking up in time for planned activities, etc.), whether any alcohol had been consumed so far, and the option of writing a message to themselves to be sent later in the evening. Hourly SMS were then sent with links to a shorter EMA, which collected data on alcohol consumed since data were last sent, spending, location, how intoxicated (if at all) they perceived themselves to be, and current mood. Each questionnaire allowed participants to opt out for the evening. The following day, at 12 pm, participants were sent a follow-up questionnaire that collected any missed data from 2 am onwards, and asked participants to try to recall the total number of standard drinks consumed, total spend, and any adverse events due to drinking. Each time a participant responded to a questionnaire, they received a manually-tailored SMS message according to one or more of the following: gender, goals and plans set, amount of alcohol consumed so far, amount spent, location, priorities (as determined by what they reported might motivate them to drink less) or a message that they had written to themselves. Using the pre-developed matrix, the researcher identified an appropriate feedback message based on the participant's reporting, and sent the message using the online SMS tool Qmani (www.qmani.com). While this manual process has obvious limitations for scalability, the researchers felt it allowed them to better investigate the tailoring process, and will use the findings to build an automated system in future research. Participants were not aware that the messages were tailored manually.
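As a concrete illustration of the matrix lookup just described, and of the automated selector the authors say they plan to build, the following Python sketch is a hypothetical reconstruction: the rule structure, dictionary keys and most message texts are illustrative assumptions rather than the study's actual matrix, although the "lift home" message mirrors the worked example quoted in the next paragraph.

```python
# Hypothetical sketch of an automated version of the manual message-matrix
# lookup (the study sent feedback by hand; an automated system is planned).
def pick_feedback(report, sent_tonight):
    """report: one EMA round, e.g. location, drinks so far, drinks goal and
    top priority to avoid; sent_tonight: messages already sent, since the
    study never sent the same message twice."""
    over_goal = report["drinks"] > report["planned_drinks"]
    candidates = []
    if (over_goal and report["location"] == "nightclub/bar"
            and report["top_priority"] == "getting home"):
        candidates.append("You've already had more than you planned to drink "
                          "tonight. Have you got a lift home planned?")
    if over_goal:
        candidates.append("You're past the drinks goal you set earlier tonight.")
    else:
        candidates.append("Still within the goal you set - nice pacing.")
    for msg in candidates:  # first unused candidate wins
        if msg not in sent_tonight:
            sent_tonight.add(msg)
            return msg
    return None

sent = set()
print(pick_feedback({"location": "nightclub/bar", "drinks": 6,
                     "planned_drinks": 4, "top_priority": "getting home"}, sent))
```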
A feedback SMS was intended to be sent immediately after completion of every EMA; in practice, however, the manually tailored response took 1–2 min. For example, a participant completed a survey stating that they were still out at a nightclub/bar, had ranked 'not getting home' as high on their priorities of events to avoid, and had exceeded the number of drinks that they planned to have. Based on these responses, the researcher used filters to identify the following message in a matrix cell: "You've already had more than you planned to drink tonight. Have you got a lift home planned?". Figure 1 illustrates the variables collected in each survey, with further examples of messages. No participant received the same message twice. The feedback for the next-day survey included tips for their next night out, or a summary of total spend or alcohol consumed compared with their goals or recalled amounts. Self-reported alcohol consumption data collected during the event were not analysed in depth due to the small sample. Data pertinent to feasibility and acceptability were collected in the form of response rates, complemented by rich qualitative data collected in the evaluation stage. As mentioned, the development and evaluation of the message content will be described in a future publication.

Evaluation

Participants were followed up approximately one week after completing their trial of the intervention. They were asked to attend a one-on-one interview or small focus group; either took approximately an hour. Each participant completed a questionnaire on their mobile phone with question items capturing demographic data, preferred time points and frequency, the invasiveness of the trial, and 5-point Likert rating scales (1 = very poor, 5 = very good) for user friendliness, questionnaire design, visual appeal, ease of use, phone compatibility, questionnaire length and question clarity. The interviewer then used a semi-structured approach to gain further feedback and allow the participant to elaborate on questionnaire responses, propose new ideas and discuss their experience of trialling the intervention. Finally, participants were asked to evaluate the tailored messages received during the pilot testing, including message suitability, usefulness and language. Experiences of responses to different messages were then discussed in depth, and the opportunity to modify content was offered. Member checking was completed in interviews and focus groups to ensure that the researchers' interpretations matched the intended meaning of the participants' feedback. Field notes were also taken during these sessions. We used these qualitative and quantitative data to investigate feasibility, acceptability, optimal timing and frequency of data collection and feedback, scalability and experiences of the trial.

Data analysis

Transcripts, design material produced by participants and field notes were analysed thematically in an iterative process of coding, using NVivo V10 (QSR International, Melbourne, Australia). Due to the intended practical application of the findings, specific codes were generated in advance (such as that relating to optimal timing), while others emerged from the data during the data collection and analysis process. A second researcher cross-coded a sample of transcripts to verify the analysis framework.
While the aim of the current study was not to test the effectiveness of the intervention, we analysed descriptive process data from the testing phase, including response rates, in SPSS Version 22 (IBM, Armonk, USA). We also examined quantitative data from the evaluation survey on design features such as timing and frequency, complementing the in-depth qualitative evaluation of the intervention and research; this is known in mixed-methods research as data triangulation and improves the rigour of research by providing multiple data sources [31].

Results

Results are grouped below according to three domains: acceptability factors, feasibility considerations and participants' experiences of the intervention.

Acceptability of the intervention

Development workshop

The young people described feeling comfortable with reporting on alcohol consumption throughout drinking events. Participants also reported that it would be acceptable to regularly report on location and the occurrence of drinking-related adverse events, including vomiting, violence, accident/injury and sex, among others. In each of the four groups, at least one participant suggested adding to the list of options adverse events such as illegal behaviours, including drug-taking and drink-driving. When questioned about privacy with respect to these sensitive behaviours, most were unconcerned, with one male replying "I guess you just, like, know that you're not asking to rat us out. So you'd just say it". Participants were asked to suggest other acceptable and important question items, either to help us to understand the nature of their nights out or to allow us to provide tailored feedback intended to reduce harmful drinking. Questions on spending, mood, plans and priorities were added as a result, but in the newly developed format of a separate and longer first questionnaire, a regular EMA to be completed hourly with fewer questions, and a post-intervention questionnaire to be completed the following day. Most participants said they did not mind answering slightly longer questionnaires at the beginning of the night or the following day, as long as the hourly EMAs were brief enough to not detract from their social enjoyment. Participants agreed that mobile phones were very suitable for both data collection and message dissemination, as young people are rarely far from their phones. Many participants stated "I'm on my phone anyway". However, as indicated above, acceptability hinged on design factors that determined convenience.

Testing and evaluation

Following the night of trialling the intervention, most participants retained their views relating to its acceptability. Many participants noted the non-judgmental approach of the broad project, which they believed encouraged participation. Many recognised that this type of research had the potential to feel burdensome if not designed and framed carefully. One participant stated in the follow-up interview that he had been sceptical that the SMS feedback might "feel like someone was nagging me when I just want to have fun", but instead found the experience more positive: "I felt like someone was just checking up on me. It was sort of nice (laugh)." The casual language used in question wording was also identified as important. Wording in the evaluation questionnaire was regarded positively, rated a mean of 4 out of 5, with participants describing the language as appropriate, clear and relatable. Of the 40 participants, 31 opted to enter a "message to self" to be sent back to them at a later time.
This option was described positively by participants in the evaluation, as it allowed them to enter an entirely personal note or motivation. Almost all (98 %, n = 39) evaluation survey participants indicated that they were comfortable responding to all questions included in the pilot, which was confirmed in follow-up interviews. Furthermore, the research was described as being socially acceptable to friends and others around them on the night, with only 5 % (n = 2) stating that they wouldn't want their peers to know that they were tracking and reporting their drinking. When questioned further on this, some participants indicated that they had told friends as they saw it as novel, whereas others had warned friends in advance that they might be slightly more distracted than usual. In the evaluation survey, participants were asked to measure invasiveness in terms of whether they felt that completing the trial interrupted their night, with 75 % (n = 30) disagreeing or strongly disagreeing on a four-point Likert scale with the statement "Doing the trial interrupted my night too much". Further, when asked if doing the trial interrupted their night "a little", "a lot" or "not at all", only 2 % selected the option "a lot", with 83 % selecting "a little" and 15 % selecting "not at all". In evaluation interviews, participants mostly reported that during testing they were interested to see what feedback message would be generated based on their submission, although some suggested that this would be improved if it were available within seconds of data entry. Feedback SMS were generally sent within two minutes during the trial, but any reduction in the delay was seen as beneficial. In addition, almost all participants reported that they re-read the feedback SMS the following day, while three-quarters reported sharing messages with one or more friends when they received them.

Co-designing feasible research

Development workshop

Designing minimally-invasive data collection tools was pivotal to the logistical feasibility of repeatedly collecting data through a drinking event. Creating a purpose-specific smartphone application for the intervention appealed to approximately half of the young adults, but many participants expressed concerns over compatibility between phones and mentioned that they would probably ignore an application notification, so we chose to use SMS and links to web-based surveys. Almost all participants agreed that they were more likely to read an SMS with urgency than other contact methods, with one explaining "You don't really get spammed by text. So it's probably a friend and so you kind of feel like you have to read it and reply. I reckon I'd just do it straight away because that's how you think". However, participants cautioned against submitting data via text or having to type out responses, as this was more labour intensive, prone to errors and especially difficult while drinking. Compatibility, visual appeal and ease of response were still seen as barriers to completing questionnaires in web browsers. As a result, we tested several online survey tools before settling on SurveyGizmo due to its mobile compatibility. Several iterations of the questionnaire were pre-tested on various models of smartphones by over 20 researchers and 12 young adults who did not participate in the main study. Determining the most appropriate frequency and timing of questionnaires required participants to weigh invasiveness against what was most likely to capture alcohol intake through the evening.
Hourly surveys were seen as preferable; most participants agreed that spacing questionnaires further apart than one hour would result in difficulties recalling drinks and spending, while more frequent questionnaires were expected to be too invasive. Other suggestions were also made, including allowing participants to determine the frequency in the pre-survey, based on their own pace, and the option of participants sending back information each time they bought a drink. The consensus across groups was that the first questionnaire should be sent at 6 pm and the last at 2 am, with the option to set a later start time and to request an earlier opt-out. This timeframe was expected to cover the majority of drinking events. The feasibility concern most frequently discussed in the workshops related to the measurement of alcohol consumption. Most participants reported lacking confidence in calculating or reporting units of standard drinks, and many agreed that they would not be able to recall the number of standard drinks consumed on a typical night, presenting a clear challenge to the design of the research.

Testing and evaluation

Technical difficulties during the pilot were few and minor, with all SMS successfully delivered, and only one glitch (related to SurveyGizmo updating their system) that prevented some participants from moving to the second page of the first questionnaire; this was resolved reasonably quickly. We sent 295 SMS prompts, resulting in 262 completed questionnaires (89 %). In evaluation interviews, explanations for missed rounds of data collection included phones being on silent, finishing work later than expected, phones running out of battery, forgetting, being in an inappropriate social situation, and the technical glitch. Table 1 describes the response rates across the hourly intervals. In terms of surveys completed per individual, 21 of the 40 participants completed all surveys sent to them, 10 missed only one survey, five missed two surveys, and four missed more than two surveys. Questionnaire design was rated highly, with 90 % (n = 36) of survey respondents agreeing that completing the questionnaires was easy. Qualitative evaluation indicated that the questionnaire displayed well across all but one phone type (a very old smartphone model). Despite most participants opting to start surveys from 7 pm on their testing night, the evaluation showed a preference for 6 pm commencement, so as to complete goal-setting prior to any alcohol consumption and before they might be out for dinner. Following testing, most participants (68 %) still advocated hourly questionnaire frequency; the remaining participants suggested half-hourly (13 %), every hour and a half (7 %) or every two hours (12 %). In interviews, participants more strongly recommended the option of user-determined frequency, or diary-style data entry. Both survey and interview data supported the timing chosen for data collection. As informed by the development workshops, alcohol consumption was measured through a series of questions asking what types of drinks were consumed (e.g. beer, cider, wine, spirits), and then the quantity of each in different units (e.g. pot, pint, bottle, longneck, shot); these responses were converted to standard drinks based on the average alcohol concentration of the different drink types (a worked sketch of this conversion is given below). While participants explained in interviews that this was simple enough in terms of data entry, concerns were expressed relating to the accuracy of the data.
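As flagged above, the conversion from reported serves to standard drinks can be sketched as follows. This is a hedged illustration, not the study's actual conversion table: the serve volumes and alcohol-by-volume figures below are typical Australian averages assumed for the example, with the only fixed element being the Australian definition of a standard drink as 10 g of pure ethanol.

```python
# Illustrative drink-to-standard-drink conversion (assumed average values,
# not the study's table). An Australian standard drink = 10 g of ethanol;
# ethanol density is ~0.789 g/ml.
SERVE_ML = {"pot": 285, "pint": 570, "bottle": 375, "longneck": 750,
            "wine_glass": 150, "shot": 30}
TYPICAL_ABV = {"beer": 0.048, "cider": 0.05, "wine": 0.13, "spirits": 0.40}

def standard_drinks(drink_type, serve, count=1):
    grams_ethanol = SERVE_ML[serve] * TYPICAL_ABV[drink_type] * 0.789 * count
    return grams_ethanol / 10.0  # 10 g ethanol per standard drink

print(round(standard_drinks("beer", "pint"), 2))     # ~2.16 standard drinks
print(round(standard_drinks("spirits", "shot"), 2))  # ~0.95 standard drinks
```

As the comments note, accuracy depends entirely on participants identifying the serve and strength correctly, which is exactly where the concerns described next arise.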
Some apprehension was based on difficulties in recalling what they had consumed in the last hour (e.g. "I couldn't remember if I'd already reported it in the last round or not"), while other concerns related to consuming higher-strength drinks, or using non-standard glass sizes. When asked to recall the total standard drinks consumed during the night, participants reported an average of three fewer drinks (mean = 3.16) than had been recorded throughout the drinking event in EMAs. Tracking of spending also proved challenging, with 61 % of survey participants agreeing that calculating hourly spending was difficult. Interview participants explained that they experienced most difficulty when drinking in a home-based setting or drinking pre-purchased drinks.

Expectations of effectiveness for reducing drinking

Development workshop

Focus group discussion indicated that most participants had recent and regular experiences of drinking heavily and, initially, none reported being interested in reducing alcohol consumption. However, despite the acceptance of binge drinking as a regular practice, there was notable curiosity and interest in attempting to track drinking, as many admitted that they took little notice of their consumption in most drinking events. Several participants across groups reflected similar sentiments, stating that they "probably don't know where to stop" and that "… It can get out of hand sometimes", and telling the common story of the night going well until the final part of the evening, when poor decision-making occurred. For our young participants, motivations for reducing alcohol consumption centred on minimising this poor decision-making rather than any concerns for health or safety. One participant wished she "had a sober version of myself, keeping check", while others recalled needing a sober friend to assist them in making responsible decisions. In this sense, tracking alcohol consumption and receiving positive and practical tailored messages were seen to be potentially very useful on 'bigger nights'. A small number of male participants in one group were apprehensive about how they might react, and discussed the risk of responding to messages defiantly by drinking more. However, this was agreed to be more likely if messages were written in a didactic tone and explicitly instructed recipients not to drink. The participants were very interested in reducing spending on alcohol; without prompting, members of each group hypothesised that an intervention focused on tracking spending would be as effective, if not more effective, than standalone alcohol tracking. One participant stated that messages should "Hit me where it hurts, in the hip pocket", with a large majority of others agreeing that this approach had good potential to reduce drinking.

Testing and evaluation

While this pilot study was not designed to test the intervention's efficacy in reducing alcohol consumption, feedback from evaluation interviews showed moderate to strong support for the intervention. When asked to recount experiences of trialling the intervention, several encouraging themes emerged. Firstly, recording their own alcohol consumption necessitated an attention to drinking that most participants had never previously attempted. In the evaluation survey, 84 % of participants agreed or strongly agreed on a four-point Likert scale that completing the intervention "helped me keep track of my drinking and spending".
This was described by one young woman as "a bit of an eye-opener", while several others reflected that on subsequent drinking occasions they had been noticing their intake more closely. Setting a goal at the start of the night for a maximum number of drinks was also something new to many participants, which some reported as useful. Secondly, while spending was regarded as difficult to track, it was still seen as a motivation for reducing drinking. Setting goals for spending was similarly regarded as new and potentially useful, and some participants reported that receiving reminders when they had gone over this limit encouraged them to slow or stop drinking altogether: "I got the message saying I'd spent all my money, and then I don't know what happened but I was just like 'I'm done'". Likewise, reminders informing participants that they had reported having plans the next day were described as a potentially important tool, with some promising anecdotal effects: "I totally forgot I had to work the next day and the message said I had to start in six hours so that was good."

Discussion

Our study demonstrated that young adults are both willing and able to engage in mobile-delivered research and interventions targeted to them during drinking events. Although further refinement is required to enhance the validity of data collected through a drinking event, our sample of young people assessed the process of collecting these data and providing relevant feedback as useful for reducing drinking and associated harms.

Acceptability

Young people described themselves as comfortable reporting drinking data and were unconcerned about privacy, even when reporting more personal information including goals and priorities, location, spending and the occurrence of adverse events and behaviours such as drink-driving or drug use. While the willingness to report on such a wide variety of factors using mobile phones was surprising, it doubtless reflects the amount of time young people typically spend on their phones and the comfort and familiarity that young adults have with sharing personal information over technology. This is encouraging for future studies intending to intervene during risky drinking events. The wide range of questions suggested for inclusion in the EMA reflected the complexity of participants' drinking events, shaped by social context and varying motivations and priorities for reducing drinking. Participants recognised the need for the researcher to have better insight into their context, and to be able to 'speak their language', to make an intervention truly relevant. The combination of data collection and intervention during drinking events therefore presents a promising avenue of intervention that requires further testing and evaluation. Participants initially anticipated that our data collection and intervention could intrude too much during social events to be successfully implemented. However, we demonstrated that if data collection procedures are co-designed by young people and tested intensively, intrusion can be minimised to an acceptable level. Crucial to this was a design that allowed easy and rapid data collection and allowed feedback messages to be sent almost immediately. Most participants valued receiving a feedback message after data collection, seeing it as little added burden.
Feasibility

We initially expected challenges relating to phones running out of battery, poor reception, participant non-response, and technological errors such as SMS not being received or the questionnaire not displaying correctly. However, in line with previous studies, our response rate of 89 % suggests that it is feasible to collect data during drinking events. Irvine et al.'s [32] intervention for reducing alcohol consumption in disadvantaged men had a response rate of 88 % to question-based text messages. In a six-month study with weekly reporting of alcohol consumption, 82.1 % of participants completed all EMAs [33]. In another study requiring daily completion of EMAs, Kuntsche and Labhart [23] reported an 80.4 % completion rate. This suggests that the addition of a brief intervention to EMA did not impact participation in the study, although further research is required to confirm this. It is expected that collecting data over multiple nights will result in higher attrition, although participants reported in follow-up interviews that they were willing to participate in repeated nights of data collection if requested, as long as they were able to choose the nights involved. SMS was regarded as the best notification system due to its perceived urgency, and few problems were experienced in using a web-based mobile survey. Some participants suggested the intervention be moved to a smartphone application platform, and this is worth exploring in future research. A recent review of drinking-related smartphone applications showed that some have similar functions of tracking alcohol consumption and providing feedback [34]. However, most current apps appear designed to encourage increased alcohol consumption rather than to promote harm reduction. Our combination of EMA and brief intervention would provide an alternative to these if further developed for an app platform. In order to best capture alcohol consumption over the night without intruding too heavily, hourly EMAs between 7 pm and 2 am were preferred. The greatest challenge to feasibly conducting the research lay in the reporting of alcohol consumption, due to non-standard units of alcohol and difficulty in recall; this is not an issue that has been discussed in previous research using EMAs. It is not known whether the mean difference of three drinks between standard drinks reported the following day and drinks reported during the night was due to inaccurate reporting during the night or loss of memory. However, previous research has shown that EMAs can reduce recall bias and improve reliability and validity. Other research has shown similar discrepancies, with higher reporting of alcohol consumption in EMAs and lower retrospective recall [24]. Our study and Monk et al.'s [24] also reported qualitative data indicating that many participants relied on guessing when retrospectively recalling alcohol consumption, due to memory impairment or confusion. We also found that reporting of spending was not straightforward, with particular challenges related to pre-purchased or shared drinks. Nevertheless, despite potential inaccuracies, participants still reported value in the tracking process. Future research needs to determine and test the best ways to capture data relevant to alcohol events.

Expectations for the intervention to reduce drinking

Participants who did not report a desire to reduce alcohol consumption still expressed a desire for assistance in retaining control over drinking and decision-making in the later part of drinking events.
This finding highlights potential areas for intervention targeting, although messages must be designed in a way that engages participants. While qualitative reports from the pilot intervention are not evidence of effectiveness, participants did describe experiences that suggest different pathways for intervention; these include raising awareness of an individual's own consumption and spending, reminding people of their sober selves, and providing decision-making support based on their pre-reported personal motivations and priorities. These pathways provide multiple mechanisms through which the combination of EMA with brief intervention could influence RSOD behaviour, and further exploration of these is warranted.

Limitations

This study is constrained by its relatively small purposive sample, meaning that results are not necessarily generalisable to a broader population. Further, all data are self-reported and thus subject to responder bias; social desirability bias and dominant responder bias are particularly pertinent to the development workshops, although measures were taken in facilitation to ensure that participants had equal opportunity to contribute. Moreover, data collected while participants were under the influence of alcohol may be prone to additional recall bias; however, these data were collected for the purposes of producing tailored feedback as opposed to generalising results. The high level of engagement shown by the young people involved was also likely to have had a positive influence on response rates, and it is not known whether this would be replicated in other study populations, for instance less well-educated young people. Finally, the intervention was only tested on one night, and a higher rate of attrition may occur over multiple nights of testing. The study has several strengths, including its participatory design to inform and refine the intervention. Further, the study adds to the evidence base by providing transparent detail regarding the rigorous development and design process, a gap noted in recent reviews of text message-based behaviour change interventions [35]. The mixed-methods design of the study allowed for a comprehensive intervention development process. Rigour was aided by the use of data triangulation, member-checking and cross-coding by researchers.

Conclusion

The study illustrates the use of a participatory design for developing an intervention to reduce alcohol consumption in young people. Recommendations from participants led to the inclusion of broader contextual information within the questionnaires delivered through EMA, which improved the personalised feel of the intervention. The young people informed the frequency and timing of the EMAs, as well as question content, language and other design features. Data from follow-up interviews and questionnaires will be used to further refine the intervention for future research. The intervention was largely perceived to be acceptable and feasible to upscale, with ease of use minimising invasiveness and underpinning high response rates. The promising experiences described qualitatively suggest that the combination of EMA and brief intervention may have the potential to positively influence drinking events. The study provides detail on the development process of an intervention delivered on mobile platforms, which the literature lacks. Further work is needed to test the efficacy of this type of intervention in reducing harms related to alcohol consumption events.
Competing interests

Professor Dietze has received funding from Gilead Sciences Inc and Reckitt Benckiser for work unrelated to this study. The authors declare that they have no other competing interests.

Authors' contributions

CW led the manuscript and was involved in study design, participant recruitment and intervention development, and led all data collection and analysis. ML was the chief investigator and was involved in study design, intervention development, development of both qualitative and quantitative data collection measures, and refining of message content, and oversaw all data analysis. PMD was involved in study design, intervention development and refining of quantitative data collection tools. BC was involved in study design, development of qualitative data collection measures, and qualitative data analysis. All authors contributed to, reviewed and approved the final manuscript.
Randomized Dose-Ranging Controlled Trial of AQ-13, a Candidate Antimalarial, and Chloroquine in Healthy Volunteers

Objectives: To determine: (1) the pharmacokinetics and safety of an investigational aminoquinoline active against multidrug-resistant malaria parasites (AQ-13), including its effects on the QT interval, and (2) whether it has pharmacokinetic and safety profiles similar to chloroquine (CQ) in humans. Design: Phase I double-blind, randomized controlled trials to compare AQ-13 and CQ in healthy volunteers. Randomizations were performed at each step after completion of the previous dose. Setting: Tulane-Louisiana State University-Charity Hospital General Clinical Research Center in New Orleans. Participants: 126 healthy adults 21-45 years of age. Interventions: 10, 100, 300, 600, and 1,500 mg oral doses of CQ base in comparison with equivalent doses of AQ-13. Outcome Measures: Clinical and laboratory adverse events (AEs), pharmacokinetic parameters, and QT prolongation. Results: No hematologic, hepatic, renal, or other organ toxicity was observed with AQ-13 or CQ at any dose tested. Headache, lightheadedness/dizziness, and gastrointestinal (GI) tract-related symptoms were the most common AEs. Although symptoms were more frequent with AQ-13, the numbers of volunteers who experienced symptoms with AQ-13 and CQ were similar (for AQ-13 and CQ, respectively: headache, 17/63 and 10/63, p = 0.2; lightheadedness/dizziness, 11/63 and 8/63, p = 0.6; GI symptoms, 14/63 and 13/63; p = 0.9). Both AQ-13 and CQ exhibited linear pharmacokinetics. However, AQ-13 was cleared more rapidly than CQ (respectively, median oral clearance 14.0-14.7 l/h versus 9.5-11.3 l/h; p ≤ 0.03). QTc prolongation was greater with CQ than AQ-13 (CQ: mean increase of 28 ms; 95% confidence interval [CI], 18 to 38 ms, versus AQ-13: mean increase of 10 ms; 95% CI, 2 to 17 ms; p = 0.01). There were no arrhythmias or other cardiac AEs with either AQ-13 or CQ. Conclusions: These studies revealed minimal differences in toxicity between AQ-13 and CQ, and similar linear pharmacokinetics.

INTRODUCTION

Malaria is an overwhelmingly important public health problem with up to 3-4 billion cases and 3 million deaths each year [1,2]. In terms of malaria control and human health, chloroquine (CQ) was the most important antimalarial for more than 40 years because of its efficacy, safety, and affordability [3-5]. However, since the first reports of CQ-resistant Plasmodium falciparum in the 1960s [6,7] and the subsequent spread of CQ resistance across Southeast Asia, South America and sub-Saharan Africa [8], the single most important factor in the worldwide morbidity and mortality of malaria has been the increasing prevalence of CQ resistance in P. falciparum [9,10]. Recent studies by ourselves and others have shown that aminoquinolines (AQs) with modified side chains are active against CQ-resistant P. falciparum in vitro [11-13]. Subsequently, we have shown that these AQs are as safe as CQ in mice and monkeys (Cogswell, et al., unpublished data), and are active in two monkey models of human malaria (P. cynomolgi in rhesus monkeys [14], which is a model of P. vivax infection in humans, and CQ-resistant P. falciparum in squirrel monkeys [15], which is a model of human CQ-resistant P. falciparum infection). The next step was to conduct a Phase I randomized clinical trial (RCT) to determine the safety and the pharmacokinetic behavior of the lead compound, AQ-13, in healthy volunteers (Figure 1).
Selection of AQ-13 as the Lead Compound

Criteria for the selection of AQ-13 as the lead compound were: (1) in vitro activity against CQ-susceptible and -resistant P. falciparum, (2) activity in monkey models of human P. vivax and CQ-resistant P. falciparum infection, (3) safety, and (4) affordability. Based on these criteria, three AQs (AQ-13, AQ-21, and AQ-34) each could have been the initial lead compound. However, AQ-34, which has an isopropyl side chain, was dropped from consideration because the chiral center on its side chain resulted in two enantiomers. Because an additional (optical) purification would have been required to separate those enantiomers, the cost of pure AQ-34 would have been greater than that of AQ-13 or AQ-21. In addition, further studies would have been required to compare the activities and toxicities of the two enantiomers. Between AQ-13 and AQ-21 (which have linear propyl and ethyl side chains), AQ-13 (Figure 2) was chosen as the lead compound because it was more active in monkey models of human malaria (Cogswell, et al., unpublished data).

Preclinical Studies of AQ-13 in Comparison with CQ

After AQ-13 had been selected as the initial lead compound, preclinical studies were performed to examine its toxicology and pharmacokinetics in animals in comparison with CQ [16,17]. Because those studies revealed no differences in toxicity between AQ-13 and CQ and similar pharmacokinetics, an Investigational New Drug Application was filed with the US Food and Drug Administration (IND 55,670) [18]. The rationale of that application was that an AQ active against CQ-resistant P. falciparum that was as safe and economical as CQ would be a major advance: because the few drugs effective against CQ-resistant P. falciparum are too expensive for use by the impoverished residents of malaria-endemic countries [19-21], because malaria parasites are already developing resistance to the expensive antimalarials now in use [22,23], and because there are unresolved concerns about the safety of the antimalarials now used to treat CQ-resistant P. falciparum [24,25].

Based on this information, the Phase I clinical trial reported here was performed as a series of RCTs to determine whether there were significant differences in toxicity (safety) or pharmacokinetics between AQ-13 and CQ in human volunteers.
Editorial Commentary

Background: Chloroquine (CQ) is a drug that has been widely used for over 40 years for the treatment and prevention of malaria. It is cheap, safe, and, except in areas where resistant malaria parasites exist, effective. However, the spread of resistant malaria parasites in most malarial regions of the world has meant that this drug, and many others, can no longer be relied upon to control disease. New drug candidates are therefore needed, and ideally should be cheap to produce as well as safe and effective. Some research groups are working on potential drug candidates from the aminoquinoline family of compounds, which includes chloroquine. One candidate, AQ-13 (aminoquinoline-13), has already been studied in animal and in vitro experiments, and seemed to be a good candidate for further testing in humans. Therefore, as the first stage in evaluating AQ-13 further, this group of researchers carried out a Phase I trial in healthy humans. The researchers specifically wanted to compare how often people given AQ-13, as compared to those given CQ, had side effects, and to find out how AQ-13 is handled in the body (i.e., how quickly the compound is taken into the bloodstream, gets broken down, and how it affects normal body functions). These sorts of studies do not tell researchers anything about the efficacy of the drug in treating malaria, but the results are absolutely essential before trials can be done that do test efficacy in people with malaria. 126 healthy volunteers were recruited into the study, and each received capsules containing a different dosage of either AQ-13 or CQ. Side effects data were collected for four weeks after the drugs were given.

What this trial shows: The most common side effects experienced by volunteers in the trial were headache, light-headedness, dizziness, and gastrointestinal symptoms such as nausea, vomiting, and diarrhea. Overall, the frequencies of such events were roughly similar among people receiving AQ-13 and those receiving CQ, but due to the small numbers of participants in the trial, it is not possible to say whether any observed differences in frequency of side effects between the two groups are meaningful or not. The data collected in this trial also showed that both AQ-13 and CQ were absorbed into the bloodstream in a similar way, but AQ-13 was absorbed more slowly than CQ. On ECG testing, both compounds increased the QT interval (part of the heart's electrical cycle, and used as a measure of heart function), particularly at high dosage levels, and volunteers given CQ experienced a greater increase in QT interval than those receiving AQ-13. No volunteers experienced any symptoms related to heart function. The researchers concluded that on the basis of these data, AQ-13 could proceed to further trials to evaluate its efficacy.

Participants

Healthy volunteers from 21 to 45 years of age were invited to participate in these studies. Exclusion criteria included pregnancy, breast-feeding, abnormal liver or kidney function tests, anemia (hemoglobin < 12 g/dl), chronic medications other than birth control pills, and an abnormal electrocardiogram (ECG) or Holter recording. Inpatient and outpatient studies were performed at the Tulane-Louisiana State University (LSU)-Charity Hospital General Clinical Research Center (GCRC) in New Orleans, Louisiana, United States.
There were two rationales for performing the Phase I studies of AQ-13 in the United States rather than in a malaria-endemic area: (1) ethical concerns of developing country colleagues and potential participants about drugs developed in the US are resolved most effectively by data indicating that the agent to be studied has been tested and shown to be safe in American volunteers, and (2) FDA regulatory staff required safety data from the US before considering studies of an investigational antimalarial in sub-Saharan Africa.

Informed consent was obtained from each volunteer before screening, based on a consent form approved by the Tulane Institutional Review Board. In addition, an independent Data Safety and Monitoring Board approved by the National Institutes of Health, FDA, and the US Centers for Disease Control and Prevention reviewed the results for each dose with the principal investigator (DJK) and his colleagues before providing their permission to proceed to the next dose. The members of the Data Safety and Monitoring Board and their affiliations are listed below in the Acknowledgments section. Enrolment of volunteers began in August 1999 and follow-ups were completed in August 2005. Data entry was concluded in September 2005.

Interventions

Participants were allocated randomly to receive the new candidate drug, AQ-13, or CQ.

[Figure 1 caption: Designed as a series of double-blind RCTs at incremental oral doses of 10, 100, 300, 600, and 1,500 mg, with a 700 mg adjustment dose after 600 mg to ensure similar bioavailability for AQ-13 and CQ at the 1,500/1,750 mg dose (based on the area under the curve, AUC_s, in h·µM, as in Figure 3). *AQ-13 dosages: 700 + 700 + 350 mg on days 1, 2, and 3, respectively; CQ dosages: 600 + 600 + 300 mg on days 1, 2, and 3, respectively.]

Sixteen volunteers were randomized to receive AQ-13 or CQ (eight each) at doses of 10, 100, or 300 mg base. At the 600 mg dose, 36 volunteers were randomized (12 each) to AQ-13 capsules, CQ capsules, or Sanofi-Winthrop CQ tablets (Aralen). AQ-13 was produced as the dihydrochloride, trihydrate salt under Good Manufacturing Practice (GMP) conditions by Starks Associates (Buffalo, New York, United States) and CQ as the phosphate salt by Sanofi-Winthrop (New York, New York, United States). Using this GMP material, University Pharmaceuticals (Baltimore, Maryland, United States) and SRI International (Menlo Park, California, United States) produced color-coded capsules containing equal molar doses of AQ-13 and CQ. Quality assurance and dissolution tests were performed by University Pharmaceuticals, SRI International and RTI (Research Triangle, North Carolina, United States) [18]. The third arm (commercially available CQ tablets) was included at the request of the FDA to determine whether there were differences between the CQ capsules prepared from GMP CQ phosphate (Sanofi-Winthrop) and commercially available CQ phosphate tablets (Aralen). Before the 1,500 mg therapeutic dose, 13 volunteers received a 700 mg adjustment dose of AQ-13 to compensate for the more rapid clearance of AQ-13. At the next stage of the Phase I study, 29 volunteers were randomized to receive either the standard therapeutic dose of 1,500 mg CQ base over 3 d or an equivalent 1,750 mg dose of AQ-13 (based on the adjustment dose).
Outpatient screening and inpatient admission. To determine their eligibility, all volunteers had a complete physical exam, including an eye examination (visual acuity, visual fields, indirect ophthalmoscopy), were screened for hematologic and chemical abnormalities (complete blood count, chemistry panel including aspartate aminotransferase [AST], alanine aminotransferase [ALT], alkaline phosphatase, gamma-glutamyl transpeptidase [Gamma-GT], lactate dehydrogenase [LDH], bilirubin, creatinine, blood urea nitrogen [BUN], and fasting glucose), and for arrhythmias and other evidence of cardiac disease (physical exam, ECG, 24-hour Holter recording). Weight was measured by an electronic scale and height with a wall-mounted meter stick (Seca 216 Stadiometer, HealthCheck Systems, Brooklyn, New York, United States). Body mass index (BMI) was calculated using the formula: BMI = weight (kg) / height² (m²).

Eligible volunteers were admitted as inpatients to the GCRC. Urine pregnancy testing was performed at the time of screening and again the evening before drug administration. Creatine kinase testing was also performed twice: at the time of screening and again on the evening of admission. Volunteers remained in the GCRC for 2.5-3.5 d depending on the AQ dose: 2.5 d for the 10, 100, 300, 600 mg and adjustment doses; 3.5 d for the 1,500/1,750 mg therapeutic dose.

AQ administration, and blood and urine samples for drug and metabolite levels. Study drugs were administered in the GCRC on an empty stomach between 8 and 9 AM the morning after admission (after fasting for 10 h). For the first three doses, volunteers received single capsules containing 10, 100, or 300 mg CQ base or an equivalent molar amount of AQ-13 (9.1, 91.3, or 273.8 mg AQ-13 base, Figure 1). For the 600 mg dose and the 700 mg adjustment dose, volunteers received two 300 or 350 mg AQ-13 capsules (547.5 or 638.8 mg AQ-13 base) or two 300 mg CQ capsules, as a single morning dose. For the 1,500/1,750 mg therapeutic dose, volunteers received two 350 mg AQ-13 capsules or two 300 mg CQ capsules on days 1 and 2, and a single 350 mg AQ-13 or 300 mg CQ capsule on day 3, for total doses of 1,750 mg AQ-13 (1,596.9 mg AQ-13 base) or 1,500 mg CQ.

Follow-up urine and blood samples. In addition to the blood samples, twenty-four hour urine collections were obtained for 3 d after the 1,500/1,750 mg therapeutic dose to evaluate the urinary excretion of AQ-13, CQ, and their metabolites. Concentrations of AQ-13, CQ, and their metabolites were measured in whole blood and 24-hour urines with a fluorescence high-performance liquid chromatography (HPLC) assay using an Xterra RP18 analytical column with an elution buffer containing 60% borate (20 mM, pH 9.0) and 40% acetonitrile. Quantitation was based on the peak area ratios for AQ-13, CQ, and their metabolites in relation to the internal standard [26].
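Because the capsules contained equal molar doses, the AQ-13 base amounts above follow directly from the molecular weights of the two free bases (292 Da for AQ-13 and 320 Da for CQ, per the Figure 2 caption below). A minimal sketch of the conversion, for illustration only:

```python
# Molecular weights of the free bases (per the Figure 2 caption): CQ 320 Da, AQ-13 292 Da.
MW_CQ, MW_AQ13 = 320.0, 292.0

def equimolar_aq13_mg(cq_base_mg):
    """AQ-13 base dose (mg) equimolar to a given CQ base dose (mg)."""
    return cq_base_mg * MW_AQ13 / MW_CQ

for cq in (10, 100, 300, 600):
    print(f"{cq} mg CQ base -> {equimolar_aq13_mg(cq):.2f} mg AQ-13 base")
# 9.12, 91.25, 273.75 and 547.50 mg, i.e. the 9.1, 91.3, 273.8 and 547.5 mg doses quoted above
```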
Measurement of effects of AQ-13 and CQ on the QT interval. After the 600 mg AQ-13 and CQ doses and the 700 mg (adjustment) AQ-13 dose, the QT interval was measured electronically from ECG recordings. The effects of the study drugs on the QT interval were assessed by comparing QT intervals before dosing with QT intervals 4 h after dosing, and at the 2 wk follow-up. After the 1,750 and 1,500 mg doses of AQ-13 and CQ, continuous 5 d Holter recordings were used to compare the effects of AQ-13 and CQ on the QT interval adjusted for a heart rate different from 60 beats per minute (QTc). Three 1-min recordings were examined from before dosing (baseline), from 4 and 5 h after each dose, and from 24 h after the last dose. QT intervals were measured manually and electronically (Rozinn Electronics, Glendale, New York, United States). Recordings obtained 48 h after the last dose, on the fifth day of Holter monitoring, were not used for analysis because they were of poor quality. Correction of the QT interval for heart rate (i.e., QTc) was performed using Bazett's formula [27].

[Figure 2 caption: Two-dimensional structures are presented. Note that the AQ rings of AQ-13 and CQ are identical; the structural differences between AQ-13 and CQ are in their side chains: linear propyl side chain for AQ-13, branched isopentyl side chain for CQ. Therefore, the molecular weight (MW) of AQ-13 (292 Da) is 28 Da less than CQ (320 Da). Metabolism by N-dealkylation converts an ethyl group to a hydrogen (proton) at each step, resulting in stepwise MW differences of 28 Da.]

Recording and reporting of adverse events. Adverse events (AEs) were recorded in weekly diaries provided to each volunteer. The relatedness of these AEs to the study drugs was assessed by two physicians (FM, CH) based on temporal association and biological plausibility using five categories: definitely not, unlikely, possibly, probably, and definitely related. The AEs reported in this manuscript include all AEs assessed as possibly, probably, or definitely related. The one disagreement between these physicians was resolved by the principal investigator (DJK).

Objectives

The basic and preclinical studies of AQ-13 and CQ [11-13,16-18] generated two hypotheses for the Phase I human studies: AQs structurally similar to CQ were likely to: (1) be safe in human volunteers, and (2) have side effects (AEs) and pharmacokinetics (blood levels and bioavailability) similar to those of CQ. Because AQ-13 was cleared more rapidly than CQ in the preclinical studies [17,18], the protocol for the Phase I human studies included a dose adjustment step after the 600 mg dose (Figure 1). Because the information available about the effects of CQ on the QT interval was limited [28,29], these studies used Holter recordings to compare the effects of CQ and AQ-13 on the QT interval.

Thus, the objectives of this Phase I trial were to determine: (1) the pharmacokinetics and safety of an investigational AQ active against resistant malaria parasites (AQ-13) [11-13], including its effects on the QTc interval, and (2) whether AQ-13 is likely to have pharmacokinetic and safety profiles similar to chloroquine (CQ) in humans. To address these questions, we performed a series of double-blind RCTs with incremental oral doses of AQ-13 and CQ equivalent to 10, 100, 300, 600, and 1,500 mg CQ base.
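Bazett's correction divides the measured QT interval by the square root of the RR interval in seconds, so that QTc equals QT at a heart rate of 60 beats per minute. A minimal sketch with assumed illustrative values (not measurements from this trial):

```python
from math import sqrt

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Bazett's correction: QTc = QT / sqrt(RR), with RR in seconds."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / sqrt(rr_s)

print(round(qtc_bazett(380, 75)))  # ~425 ms at 75 bpm
print(round(qtc_bazett(380, 60)))  # 380 ms: at 60 bpm, QTc equals QT
```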
Outcomes

Primary outcomes (endpoints) for the RCTs comparing incremental oral doses of AQ-13 and CQ included their pharmacokinetics, clinical and laboratory adverse events (AEs), and their effects on the QT interval. Pharmacokinetic parameters were calculated using the WinNonlin software (Pharsight, Mountain View, California, United States); they included: maximal drug concentration in the blood (Cmax), time from oral administration to Cmax (Tmax), total area under the curve (AUC_s), terminal elimination half-life (t1/2), mean residence time (MRT), apparent oral clearance (Cl/F) and apparent volume of distribution (Vd/F). Clinical AEs were symptoms assessed as possibly, probably or definitely drug-related by the blinded physician reviewers that occurred within four weeks of drug administration. Laboratory AEs were abnormal hematologic or chemical test results identified within 4 d of drug administration or at the 2 or 4 wk follow-up. The effects of AQ-13 and CQ on the QT interval were defined in relation to the baseline QT interval, before AQ-13 or CQ administration.

Sample Sizes

Sample sizes chosen for the lower doses (10, 100, and 300 mg) were eight in each group (AQ-13 and CQ) in order to detect one or more severe AEs in each dose-drug group with probabilities of 94% and 83%, assuming AE rates of 30% and 20%, respectively. Sample sizes chosen for the higher doses (600, 700, 1,500, and 1,750 mg) were 12 or 13 in each group in order to obtain a minimum of ten evaluable participants for pharmacokinetic studies within each dose-drug group and thus to detect one or more severe AEs in each dose-drug group with probabilities of 99%, 93%, and 72% based on AE rates of 30%, 20%, and 10%.

Randomization

Volunteers who agreed to participate in the study, satisfied the inclusion and exclusion criteria, and completed their baseline studies were randomized to one of two or three treatments. Assignments of individuals to two treatments, A and B, were prepared by the study statistician by permuting blocks of four (A,A,B,B) and six (A,A,A,B,B,B) with a random number generator in a stepwise fashion: envelopes were prepared for each dose after the previous dose had been completed. The blocks of four and six were randomized so that block size was unknown to the investigators. For the comparison of three treatments, a similar procedure was performed for blocks of six (A,A,B,B,C,C) and nine (A,A,A,B,B,B,C,C,C). There was no stratification in this study. Assignments were then hand-delivered to the study pharmacist in opaque, sealed, numbered envelopes. On the morning(s) of drug administration, the study pharmacist opened those envelopes and dispensed the indicated drug (AQ-13 or CQ).
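The detection probabilities quoted in the Sample Sizes paragraph follow from the binomial complement 1 − (1 − p)^n, the chance that at least one of n independent volunteers with per-person severe-AE rate p shows such an AE. A quick sketch reproducing the quoted figures (the 12-per-group case reproduces the higher-dose values):

```python
# Probability of observing at least one severe AE among n volunteers,
# each with independent per-person probability p of a severe AE.
def p_at_least_one(n, p):
    return 1.0 - (1.0 - p) ** n

for n, rates in ((8, (0.30, 0.20)), (12, (0.30, 0.20, 0.10))):
    for p in rates:
        print(f"n={n}, AE rate={p:.0%}: detection probability {p_at_least_one(n, p):.0%}")
# n=8 gives 94% and 83%; n=12 gives 99%, 93% and 72%, matching the text
```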
Blinding

Neither the volunteers, the clinical or laboratory staff, nor the investigators knew which drugs the participants had received. Allocation codes and study drugs were controlled by the study pharmacist in the hospital pharmacy, which was outside the GCRC. Study drugs were dispensed the morning after admission after a phone call from the charge nurse indicating that a new volunteer had been admitted and was ready for drug administration. Interim data were reported to the Data Safety and Monitoring Board without breaking the code. Results and comparisons were reported for volunteers in two groups at the 10, 100, 300, and 1,500/1,750 mg doses (groups 1 and 2), and in three groups at the 600 mg dose (groups 1, 2, and 3). The staff, nurses, and investigators caring for volunteers in the GCRC and evaluating the relatedness of AEs to the study drugs were blinded; i.e., they did not know which drugs the participants had received.

Statistical Methods

Drug concentration data for each participant were fitted to a noncompartmental pharmacokinetic (PK) model in order to estimate PK parameters using the WinNonlin 4.1 software (Pharsight). A noncompartmental model with extravascular input was chosen because it required fewer assumptions and because it better described the blood-concentration data [35]. Partial areas under the curve (partial AUCs) were calculated using the linear trapezoidal method up to the last blood concentration; total AUCs were then estimated by adding the extrapolated AUC from the last measurement to infinity [35]. Because the near-horizontal terminal slopes of the concentration-time data made the estimates of the extrapolated part of the area under the curve less reliable, oral clearance (Cl/F) was calculated from the formula Cl/F = dose/AUC_obs, where AUC_obs is the partial AUC based on the empirically observed data (for 4 wk). The multiple-dose model for the 1,500/1,750 mg therapeutic doses was derived using the nonparametric superposition method [35]. MRT was estimated using the statistical moments approach, MRT = AUMC/AUC, where AUMC is the area under the first-moment concentration-time curve. Renal clearance (Clr) was estimated from the means of the renal clearances for the three 24-h urine samples collected on days 1-3 after dosing, using the formula Clr = X/pAUC, where X is the amount of the compound excreted in the urine, and pAUC is the partial blood AUC for the day of the urine sample. Because the terminal portions of the concentration-time curves for the metabolites were virtually flat in some cases, data for curves in which the extrapolated AUC exceeded 65% of the total AUC were not included in the analysis.
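The noncompartmental quantities above reduce to simple integrals of the concentration-time curve. A minimal sketch with made-up sampling times and concentrations (illustrative only, not data from this trial), using the same linear trapezoidal rule:

```python
# Hypothetical sampling times (h) and whole-blood concentrations (µM)
times = [0, 1, 2, 4, 8, 24, 72, 168]
conc = [0.0, 0.9, 1.3, 1.4, 1.1, 0.8, 0.5, 0.3]

def trapz(y, x):
    """Linear trapezoidal integral of y over x."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2 for i in range(len(x) - 1))

auc = trapz(conc, times)                                    # partial AUC, µM·h
aumc = trapz([c * t for c, t in zip(conc, times)], times)   # first-moment area, µM·h²

dose_umol = 600e-3 / 320 * 1e6   # 600 mg CQ base -> µmol (MW 320 g/mol)
cl_f = dose_umol / auc           # Cl/F = dose/AUC_obs; µmol/(µM·h) gives L/h
mrt = aumc / auc                 # MRT = AUMC/AUC, in h

print(f"AUC = {auc:.0f} µM·h, Cl/F = {cl_f:.1f} L/h, MRT = {mrt:.1f} h")
```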
Quantitative data are presented as the mean ± standard deviation or as median and range, as appropriate. Fisher exact test or Pearson chi-square was used to compare the frequencies of the AEs reported for AQ-13 and CQ at each dose, and between African Americans and persons of European descent. Due to a lack of normality, the nonparametric Mann-Whitney U test was used to compare independent samples; the Wilcoxon or Friedman test (whichever was appropriate) was used to compare repeated measures of the QTc interval. All statistical tests were two-sided with an α (significance level) of 0.05. Analyses were performed using the SPSS 11.0 statistical software (SPSS, Chicago, Illinois, United States). All analyses were based on allocation by intent-to-treat. The small differences in the number of participants (n) across the tables in "Results" are due to the different numbers of missing data points for different outcomes.

This study had 70% power to detect a 50% difference in the frequency of AEs, assuming a 40% frequency of AEs in the control group (CQ). This power is based on combined dose groups, excluding the 10 mg dose. At the 600/700 mg dose level, this study had 80% power to detect a difference of 12 ms or greater in the mean change of the QTc interval from baseline. At the 1,750/1,500 mg doses, this study had 80% power to detect a difference of 15 ms or greater in the mean change of the QTc interval from baseline. All power calculations were performed using variances estimated from the study data.

Recruitment of Volunteers and Participant Flow

A total of 215 volunteers were screened to obtain 175 eligible participants (Figure 1). The remaining 40 volunteers were ineligible because they had abnormal chemistry or hematology lab results, abnormal ECGs or Holter recordings, or other health problems. Of the 175 eligible volunteers, 49 decided not to enroll or were lost because of scheduling conflicts or delays between screening and enrollment. Three volunteers withdrew after enrolment at the 1,500/1,750 mg dose; the participation of one volunteer was terminated by the supervising physician because of otitis media, and two volunteers dropped out for reasons unrelated to AEs after two doses (Figure 1). Of the 123 volunteers who received the planned doses of AQ-13 or CQ, 26 missed one or more of the eight follow-up visits, and 97 completed each of the follow-up visits. Available AE and Holter data for the three participants who withdrew were included in the analyses.

Baseline Data and the Results of Randomization

Based on age, sex, race, weight, BMI, and the baseline QTc interval, there were no significant differences between volunteers randomized to AQ-13 versus CQ (Table 1). When baseline characteristics were compared at the different dose levels, there was a significant difference between the AQ-13 and CQ groups in mean weight (but not BMI) only at the 100 mg dose (unpublished data).

Numbers Analyzed

All 63 participants who received AQ-13 and the 63 who received CQ were analyzed for AEs, including those who withdrew before completing the intended dose. Holter data were available on 14 of the 15 participants who received 1,500 mg CQ and on 13 of 14 participants who received 1,750 mg AQ-13, and all were included in the analysis.
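The reported 70% power for the AE comparison can be approximated with a standard two-proportion calculation. The sketch below assumes about 55 volunteers per arm (the combined dose groups minus the 10 mg dose) and a two-sided α of 0.05; both the per-arm size and the normal-approximation test are assumptions, so only rough agreement with the reported figure should be expected:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# 40% AE frequency on CQ versus a 50% relative reduction (to 20%) on AQ-13
effect = proportion_effectsize(0.40, 0.20)
power = NormalIndPower().power(effect_size=effect, nobs1=55, alpha=0.05,
                               ratio=1.0, alternative='two-sided')
print(f"approximate power: {power:.0%}")  # in the general region of the reported 70%
```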
Outcomes and Estimation

Frequency of AEs. The AEs reported most frequently were headache and lightheadedness/dizziness, which were distributed similarly among volunteers randomized to AQ-13 and CQ (Table 2). Headache was reported by 31 of 126 volunteers; 27 of those 31 reports were assessed as drug-related by the blinded physician reviewers. Of the 27 drug-related reports of headache, 17/63 (27%) were in volunteers who received AQ-13 and 10/63 (16%) were in volunteers who received CQ (p = 0.2). Lightheadedness/dizziness first appeared at the 300 mg dose level and was reported by 24/126 volunteers. Nineteen of those 24 reports were assessed as drug-related (11/63 for AQ-13 and 8/63 for CQ; p = 0.6). Gastrointestinal (GI) tract symptoms were the next most common AEs (nausea, diarrhea, vomiting, abdominal pain, loss of appetite), and first appeared at the 600 mg dose. The numbers of volunteers reporting one or more GI symptoms were similar in the two groups (AQ-13, 14/63; CQ, 13/63; p = 0.9). However, drug-related GI symptoms were reported more frequently by volunteers who received AQ-13 (28 reports) than volunteers who received CQ (20 reports), because GI symptoms were more clustered in volunteers treated with AQ-13. Other symptoms, such as mild, transient eye (blurred vision, difficulty focusing, floating objects) or ear symptoms (changes in hearing, ringing in the ears), mild skin rash (one volunteer had a sparse maculopapular erythematous eruption on the lower torso), and fatigue were infrequent and occurred at similar frequencies in both groups. CQ pruritus was not reported by any of the volunteers. Because of the small numbers of volunteers studied at each dose, no significant conclusions can be drawn from comparisons of AEs between drugs at the individual dose levels. There were no differences in the incidence of AEs between African American volunteers and those of European descent (p = 0.63).

Post-dose clinical and laboratory follow-up. There was no clinical evidence for end-organ toxicity after either AQ-13 or CQ during the daily inpatient examinations, or at the 2 or 4 wk outpatient follow-up. Specifically, there was no evidence for cardiac, dermatologic, hepatic, hematologic, ocular, or other organ toxicity.

Repeat laboratory testing 4 d after drug administration revealed no hematologic or chemical toxicities with either AQ-13 or CQ. At the 2 wk follow-up, two volunteers who received AQ-13 and two who received CQ had mildly abnormal liver function tests (AQ-13: one participant with a bilirubin of 1.5 mg/dl, another participant with an AST of 135 U/l, an ALT of 149, and an alkaline phosphatase of 146; CQ: one participant with a bilirubin of 1.7 and another with an ALT of 50). The two volunteers who received AQ-13 had received the 100 and 300 mg doses; the two volunteers who received CQ had both received the 1,500 mg dose. Follow-up test results were normal for all participants at the times of the 3 and 4 wk outpatient visits.
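The group comparisons above are straightforward 2×2 tests. As a check, the headache counts can be run through a Fisher exact test (scipy), which gives a two-sided p of about 0.2, matching the reported value:

```python
from scipy.stats import fisher_exact

# Headache: 17/63 on AQ-13 versus 10/63 on CQ (counts from the text above)
table = [[17, 63 - 17],
         [10, 63 - 10]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio {odds_ratio:.2f}, p = {p:.2f}")  # p ≈ 0.2, as reported
```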
Pharmacokinetics of AQ-13 and CQ. The 600 mg doses of AQ-13 and CQ were absorbed rapidly after oral administration (Figure 3). Blood levels of AQ-13 and CQ peaked at similar times (Tmax 4.0 h [1.0-8.0 h] and 3.0 h [1.0-8.0 h] for AQ-13 and CQ), but had different maximal concentrations (Cmax 1.4 µM [0.9-2.4 µM] and 1.8 µM [1.3-5.2 µM] for AQ-13 and CQ; p < 0.01), and the absorption of CQ was slightly more rapid than AQ-13 (Table 3). One hour after dosing, the CQ blood level was 72% of the CQ Cmax versus 52% for AQ-13. AQ-13 had a shorter terminal elimination t1/2 (Table 3). However, no PK differences were observed between the results obtained with the GMP CQ capsules and the standard CQ tablets available commercially (Aralen, p > 0.15 for all PK parameters).

With the 700 mg adjustment dose of AQ-13, the lower Cmax and the smaller AUC_s of AQ-13 at the 600 mg dose (Figure 3A; Table 3) indicate that AQ-13 is less bioavailable than CQ, cleared more rapidly than CQ, or both. To compensate for the apparent lower bioavailability of AQ-13 and achieve similar systemic exposure (based on the AUC), in order to compare the safety of AQ-13 and CQ, the AQ-13 dose was increased (adjusted) to 700 mg and compared with the 600 mg dose of CQ (Figure 3B and 3C). Because the major metabolite of AQ-13 (mono-N-dealkylated AQ-13) is not active against CQ-resistant parasites, this adjustment was based on the AUC_s for the parent compound (AQ-13), and did not consider either of its metabolites. The 700 mg dose of AQ-13 was administered to 13 healthy volunteers using the same protocol. The 700 mg dose of AQ-13 produced a larger AUC_s than 600 mg CQ, but a similar first-week partial AUC (AUC_w1) and a similar mean Cmax (Table 3). Based on these results, the 1,500 mg therapeutic dose of CQ was compared with 1,750 mg of AQ-13 in the last part of the study.

In the comparison of the 1,500 mg therapeutic dose of CQ with 1,750 mg AQ-13, the 1,750 mg AQ-13 dose produced a smaller AUC_s than 1,500 mg CQ, although this difference was of borderline significance (p = 0.09; Table 4). The more clinically relevant AUC_w1 and mean Cmax tended to be lower in volunteers who received AQ-13 than in volunteers who received CQ, although these differences were not significant (p = 0.3). These results are consistent with the 600/700 mg dose because AQ-13 was cleared more rapidly than CQ (Cl/F = 14.0 l/h [6.8-20.3 l/h] and 9.5 l/h [5.4-20.6 l/h]; p = 0.03). However, the terminal elimination t1/2 and MRT were similar (Table 4). With both AQ-13 and CQ, peak blood concentrations were achieved 3-4 h after the second dose (27-28 h after the first dose).

Pharmacokinetics of AQ-13 and CQ metabolites. Mono-N-dealkylated AQ-13 and CQ (AQ-72 and MDCQ) are the major metabolites of AQ-13 and CQ [33]. Both AQ-72 and MDCQ appeared in the blood within 1 h after the oral administration of 600 or 700 mg of AQ-13 or 600 mg of CQ (Table 5), and were identified in all but two of 60 participants (one each with AQ-13 and CQ). Although the di-dealkylated metabolites of AQ-13 and CQ (AQ-73, BDCQ) were not detected in the blood, they were identified in urine collections from days 1-3 after dosing.
The pharmacokinetics of AQ-72, the initial metabolite of AQ-13 (600 mg AQ-13), were similar to those of the parent drug (median MRT of 16. In contrast, the pharmacokinetics of MDCQ, the initial metabolite of CQ (600 mg CQ), were different from those of CQ; MDCQ had a longer MRT and terminal t1/2 than CQ (median MRT of 44.8 d [20.8- ; p < 0.01). Similar results for both AQ-72 and MDCQ were obtained at the 1,750/1,500 mg doses, except MRT was shorter with MDCQ at the 1,500 mg dose than at the 600 mg dose (Table 5).

The amounts of unchanged drug recovered from 24-hour urine collections on days 1-3 were 8.4% and 18.0% of the total oral doses of AQ-13 and CQ, respectively (443 µmol [304-645 µmol] and 829 µmol [530-1,202 µmol]). Although the AQ-72/AQ-13 and MDCQ/CQ ratios in urine were similar (23.6% and 22.7%), when comparing the ratios of the second metabolite to the parent drugs, the AQ-73/AQ-13 ratio was twice as large as the BDCQ/CQ ratio (5.2% and 2.6% for AQ-73 and BDCQ), consistent with more effective conversion of AQ-13 to its mono- and di-dealkylated metabolites, more rapid Clr of AQ-73 than BDCQ, or both. The Clr of AQ-13 was less than that of CQ (p = 0.01; Table 4). Similarly, the renal clearance of AQ-72 was less than that of MDCQ (p < 0.01; Table 5).

Effects of AQ-13 and CQ on the QTc interval. Both AQ-13 and CQ prolonged the QTc interval at doses of 600/700 and 1,500/1,750 mg. CQ produced greater prolongation of the QTc interval than AQ-13 (Table 6). Four hours after drug administration, volunteers who received 600 mg CQ had a mean 16 ms (95% confidence interval [CI], 9 to 23 ms) increase in the QTc interval from baseline, in comparison to an 11 ms (95% CI, 4 to 18 ms) increase after 600 or 700 mg AQ-13. When the data were analyzed by gender, significant increases in the QTc interval were observed only for females with both drugs (AQ-13, 18 ms increase [95% CI, 10 to 27 ms]; CQ, 22 ms increase [95% CI, 14 to 31 ms]). In contrast, mean QTc interval changes were not significant for males with either AQ-13 or CQ (AQ-13, 1 ms; CQ, 7 ms; p > 0.3 for both). Among the 49 male and female volunteers who received 600/700 mg AQ-13 or 600 mg CQ, two volunteers developed QTc intervals greater than 450 ms (467 ms and 457 ms). Both were female, both had received CQ; neither had any cardiac AEs.

On the other hand, for the 1,750 mg AQ-13 and 1,500 mg CQ therapeutic doses, the effects of AQ-13 and CQ on the QTc interval were parallel to their blood levels; that is, QTc prolongation was greatest 4 h after the second dose on day 2, which was the time of the peak blood levels for both drugs (Figures 4-6). With AQ-13, the mean ± standard deviation QTc interval increased from 397 ± 16 ms at baseline to 407 ± 11 ms 4 h after the second dose (p = 0.025). With CQ, the mean QTc interval increased from 396 ± 21 ms to 424 ± 19 ms (p < 0.01). The mean increase in the QTc interval was greater after CQ than AQ-13: 28 ms (95% CI, 18 to 38 ms) versus 10 ms (95% CI, 2 to 17 ms). Figure 4 demonstrates the time course of the effects of the study drugs on the QTc interval, which then decreased gradually after day 2 as the AQ-13 and CQ blood levels fell. Despite prolongation of the QTc interval by both CQ and AQ-13, there were no cardiac AEs (Table 6).
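The quoted urinary recoveries can be cross-checked against the doses using the molecular weights given earlier. A minimal sketch of that consistency check (all inputs are values quoted in the text; small differences reflect rounding and the medians used):

```python
# Checking the quoted urinary recoveries against the administered base doses.
MW = {"AQ-13": 292.0, "CQ": 320.0}          # g/mol, free base
dose_mg = {"AQ-13": 1596.9, "CQ": 1500.0}   # base doses at the therapeutic level
recovered_umol = {"AQ-13": 443.0, "CQ": 829.0}  # median 3-day urinary recovery

for drug in ("AQ-13", "CQ"):
    dose_umol = dose_mg[drug] / MW[drug] * 1000.0
    frac = recovered_umol[drug] / dose_umol
    print(f"{drug}: {frac:.1%} of dose recovered in urine")
# roughly 8% for AQ-13 and 18% for CQ, consistent with the 8.4% and 18.0% quoted above
```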
When the data were analyzed by gender, the mean QTc prolongation tended to be greater with CQ than AQ-13 in both males and females (males: 16 ms for AQ-13, 95% CI 9 to 23 ms; 31 ms for CQ, 95% CI 16 to 46 ms; females: 12 ms for AQ-13, 95% CI 4 to 20 ms; 28 ms for CQ, 95% CI 17 to 39 ms). However, the small number of volunteers in each category did not permit statistical comparisons between males and females within drug groups or between drugs. As with the 600/700 mg dose, two volunteers who received the 1,500/1,750 mg dose developed QTc intervals > 450 ms 4 h after dosing on day 2 (453 ms for both). Both were female, both had received CQ; neither had any cardiac AEs.

Analysis of QTc interval changes at the individual level showed that the maximal prolongations of the QTc interval from baseline at the 600/700 mg dose were 54 ms for CQ and 42 ms for AQ-13. At the 1,500/1,750 mg dose, the maximal prolongation from baseline for CQ was 63 ms versus 35 ms for AQ-13. All four volunteers were female, and none experienced any cardiac AEs. QTc intervals returned to baseline in all participants by the time of the 2 wk follow-up.

DISCUSSION

Study Design and Interpretation

RCTs. Because these studies were conducted as RCTs (Figure 1), they are different from Phase I clinical trials without controls. The rationale for this study design was twofold. First, the safety of CQ is sufficiently established that CQ is a standard against which other drugs are compared. Second, current [36,37]. Thus, there was a need to re-examine the safety and pharmacokinetics of CQ using strategies such as Holter monitoring that were not available 60 years ago.

As demonstrated in Table 1, the randomization strategy was effective: volunteers randomized to AQ-13 and CQ had similar age and sex distributions, similar weights and BMIs, and similar baseline QTc intervals. In terms of the upcoming Phase II studies of AQ-13 in Mali (West Africa), it is helpful that 37% (46/126) of the participants in Phase I were African or African American. The participation of these Africans and African Americans makes it less likely that the Phase II studies in Africa will identify new frequent AEs with AQ-13.

Adverse Events

AEs during the GCRC inpatient stay. Headache, lightheadedness, and GI tract AEs were reported most frequently; they occurred at similar frequencies with AQ-13 and CQ, and are known side effects of CQ [3,4,30-34]. Although AQ-13 may produce GI side effects more frequently (nausea, diarrhea), the number of volunteers studied does not permit one to conclude that AQ-13 has more GI toxicity than CQ. Other less common AEs, such as fatigue, blurred vision, ringing in ears, and rash were mild, transient, and had similar frequencies in both groups.
AEs identified during the follow-up visits. At the 2 and 4 wk follow-up visits, there was no evidence for cardiac, ocular, hepatic, hematologic, renal, dermatologic, or other end-organ AEs. Although AEs involving these and other organs have been reported with AQs previously [31-34], they have typically been reported in persons treated for prolonged periods of time (5-10 y or more) at doses of 200-400 mg base or higher per day [34,38]. The absence of clinically detectable AEs and the normal laboratory tests in 119 volunteers at the 2 and 4 wk follow-up are consistent with previous reports on the safety of short-term CQ treatment [3,4,30-33]. The abnormal liver function test results (ALT, AST, and alkaline phosphatase) in one volunteer at the 300 mg dose may be related to AQ-13. However, all the tests were normal one week later and no similar hepatic AEs were observed in any volunteer with higher doses of AQ-13. The AEs observed are consistent with the hypothesis that AQ-13 is as safe as CQ in humans.

Pharmacokinetics

Results obtained after the 600/700 and 1,500/1,750 mg oral doses of CQ and AQ-13 are consistent with previous studies; they demonstrated rapid oral absorption, a multiexponential decline in blood concentrations after the Cmax, a long terminal elimination t1/2, and a large Vd/F [39-41]. The estimated CQ clearance is also in agreement with previous reports [29,42]. However, accurate assessment of the terminal elimination t1/2 and the Vd/F is difficult because of tissue sequestration with CQ [29,40,42-44] and AQ-13. For example, the 14-24 d estimate of the terminal elimination t1/2 for CQ agrees with some reports [40,41], but is shorter than in others [29,42].

With the 700 mg dose, clearance of AQ-13 was less than with 600 mg (medians, 11.8 l/h versus 14.7 l/h; p = 0.01). One potential explanation is that participants who received the 700 mg dose were heavier than participants who received 600 mg (mean weights ± standard deviation of 83.3 ± 17.2 versus 72.0 ± 14.1 kg; p < 0.01). As a result, AQ-13 may have distributed more extensively in participants who received 700 mg because of extra body fat, which made the drug less available for elimination, and thus may have affected its clearance [43].
After 1,500 mg CQ, MDCQ was eliminated more slowly than CQ (MDCQ: terminal t1/2 of 21.8 d, MRT of 29.0 d; CQ: 13.2 and 16.0 d). In contrast, the terminal t1/2 and MRT of AQ-72 were similar to those of AQ-13 (Tables 4 and 5). The longer t1/2 and MRT of MDCQ (in comparison to CQ) are consistent with its lower renal clearance (3.8 l/h versus 6.0 l/h; p = 0.03), and with the findings of other investigators [41,42,45]. As with MDCQ and CQ, the renal clearance of AQ-72 was less than that of its parent compound, AQ-13 (2.0 l/h versus 3.3 l/h; p < 0.01). However, the similar t1/2 values and MRTs of AQ-72 and AQ-13 are inconsistent with the lower Clr of AQ-72; these findings suggest that another pathway, such as metabolism of AQ-72 to AQ-73 by the CYP450 system, may account for this difference. The greater urinary excretion of AQ-13 and CQ than their more water-soluble metabolites (Tables 4 and 5) [26] is consistent with the active transport of CQ, and possibly AQ-13, by organic cation transporters such as organic cation transporter-like 2 (ORCTL2) [46]. The paradoxical observation that AQ-72 has both a shorter MRT in the blood and a lower Clr than MDCQ (Table 5) may be explained by a greater role for CYP450 metabolism (N-dealkylation) with AQ-13 than CQ [47]; this hypothesis is also consistent with the observation that the urinary ratio for AQ-73/AQ-13 was twice the urinary ratio for BDCQ/CQ, consistent with greater conversion of AQ-72 to AQ-73 than of MDCQ to BDCQ.

Effects of AQ-13 and CQ on the QTc Interval

Previous animal [48,49] and human studies [28,50,51] have shown that CQ prolongs the QT interval. The results reported here confirm those observations, and establish the dose (blood-level)-related nature of QTc prolongation by CQ. At the 600 mg dose, CQ prolonged the mean QTc interval by 15 ms (Table 6). The same effect (16 ms QTc prolongation) was seen 4 h after the first 600 mg CQ dose (on day 1) with the 1,500 mg therapeutic dose of CQ (Figure 4). The QTc interval increased by an additional 12 ms after the second 600 mg CQ dose on day 2 (mean increase of 27 ms relative to baseline), and then decreased gradually as CQ blood levels fell after the third (300 mg) dose on day 3, and thereafter, thus demonstrating a dose (blood level)-response relationship between the CQ blood level and QTc prolongation. These results are consistent with a previous study that suggested a dose-dependent effect of CQ on the QT interval after oral administration [28].

Although a similar pattern was observed with AQ-13, the effects of AQ-13 on the QTc interval were less than those of CQ. For example, the first 700 mg dose at the 1,750 mg level prolonged the mean QTc interval by 4 ms, and the second by an additional 6 ms. The QTc interval then decreased gradually thereafter as the AQ-13 blood levels fell (Figure 4; Table 6).
When the effects of AQ-13 and CQ were analyzed by gender, QTc prolongation was significant only for females after the 600 and 700 mg doses. In contrast, significant QTc prolongation was observed in both males and females after the 1,500/1,750 mg dose (Table 6). This discrepancy could be due to the known increased vulnerability of women to drug-induced QTc interval prolongation [52,53], which caused this effect to appear in them at doses lower than in men; alternatively, this could be a chance finding because of the small sample sizes involved. These results establish that AQ-13, like CQ, prolongs the QTc interval in humans and that CQ produces greater QTc prolongation than AQ-13. However, the significance of these observations is unclear because no arrhythmias or other cardiac AEs were observed in any participants.

Generalizability

The results reported here suggest that the AEs of AQ-13 may be no different from those of CQ, that higher doses of AQ-13 than CQ may be necessary to produce similar blood levels and AUCs, and that AQ-13 may produce less QT prolongation than CQ in humans. However, given the small numbers and nonrepresentative selection of study participants, the extent to which these results are generalizable is unclear.

Overall Evidence

The results reported here are consistent with the hypotheses underlying the objectives of these studies. First, the similar AEs observed with AQ-13 and CQ are consistent with the hypothesis that AQs with structures similar to CQ should be similarly safe in humans. Second, they demonstrate that AQ-13, an AQ analogous to CQ, has similar linear pharmacokinetics in human volunteers, despite the fact that it requires a larger dose to achieve equivalent drug exposure because of a more rapid clearance. These results are also consistent with the preclinical studies, which suggested that the AEs of AQ-13 and CQ would be similar and that a dose adjustment would be necessary for AQ-13 because of its more rapid clearance [17,18]. Because this Phase I trial has demonstrated the safety of AQ-13 doses up to 1,750 mg, the next logical study (after examining the effects of a fatty meal on the absorption of AQ-13) is a dose-finding efficacy (Phase 2) study in humans with uncomplicated P. falciparum malaria.

Figure 1. Phase 1 Randomized Clinical Trial of AQ-13 in Comparison with CQ.

Figure 4. Changes in the QTc Interval after 1,750 mg AQ-13 or 1,500 mg CQ. Changes in the QTc interval from baseline were determined using the Rozinn Electronics system software to evaluate the Holter recordings.
Simulation model of the electrical complex of auxiliary equipment of an oil and gas production enterprise

In this article we define the problem, formulate the research topic and justify the relevance of the tasks to be solved, which depend on the object under study. The object under study was defined, and its structure and basic elements were analyzed. It was found that an auxiliary transformer and from eight to ten outgoing lines are connected to one busbar section of a power transformer of the field substation. Mathematical models of the electrical systems of a field substation and of a pumping station transporting oil flows, with shop transformers and individual, node and centralized compensating installations, have been developed. The mathematical models, which take into account the change in the volume of oil transported into a common pipeline, consider asynchronous electric motors of pumping units with capacities of 75, 132 and 160 kW equipped with low-voltage frequency converters. Using the MATLAB Simulink software package, we developed mathematical models for the 75 and 132 kW (132 and 160 kW) electric drives of the pumping units. These models take into account emergency operation, i.e. when two pumping units operate from the same power source. The study of the operating modes of the electrotechnical complex of a booster pump station during direct start-up from the electrical network was carried out using a simulation model developed in the PSCAD software package. The article presents the simulation results in the form of graphs showing the build-up of frequency, the electromagnetic and dynamic moments, and the stator current of the asynchronous motor. The scientific novelty of the present work lies in the developed mathematical and simulation models, which take into account parametric perturbations in external and internal distribution electrical networks and allow one to determine the energy parameters of operating modes in all sections and elements of the outgoing line, taking into account reactive power compensation in transient and steady-state conditions.

Introduction

This article aims to develop mathematical and simulation models of the electrical engineering complex of the auxiliary equipment of an oil and gas producing enterprise in order to optimize its operating modes and increase its energy efficiency. The object of research is the electrical engineering complex of the auxiliary equipment of an oil and gas producing enterprise: a pumping station for transporting super-viscous oil (SVO) and high-sulfur oil (HSO) flows. Figure 1 shows the scheme of the electrical complex of a field substation (ECFS) [1,2,7,15], from which the object under study receives power. This complex consists of the electrical complexes of the enterprise (ECE) and of the auxiliary equipment (AE), which are powered from two sections of 10 kV busbars. The AE is a pumping station for the transportation of SVO and HSO flows through a single pipeline over a distance of 15 km. The pumping station is equipped with four pumping units with low-voltage frequency converters [4,8-11], which are connected to two step-down power transformers TM-630/10/0.4 kV. The pumping units operate in pairs from different transformers, T3 and T4 (Table 1).

Figure 1. Single-line schematic diagram of electrical systems of an oil and gas production enterprise and its auxiliary equipment.

Optimization of the voltage mode and the power consumption of the electric drives of the pumping units requires the development of mathematical and simulation models of these units.
Figure 2 shows the MATLAB Simulink mathematical model of the electric drives of the pumping units when operating in emergency mode, i.e. when two pumping units operate from the same power source. With the help of this mathematical model, the quality of AE operation is investigated; the model includes the electric drive of a pumping unit with a low-voltage frequency converter under varying supply voltage and a changing technological process. The mathematical model takes into account the process of changing the volume of SVO transportation and its temperature, as well as voltage deviations (drops) in the distribution electric network of up to 40% with a duration of 3-10 seconds [1,2,3]. Similar studies were carried out with mathematical models of the electric drives of pumping units with motor capacities of 75 and 160 kW. As a result of mathematical modeling, we obtained graphs showing the build-up of frequency, the electromagnetic and dynamic moments, and the stator current (Fig. 3) of the induction motor.

Analysis of the time dependences of the stator current of the 75, 132 and 160 kW asynchronous motors showed that these motors are loaded on average at approximately 61% of their rated currents. The starting current ratio of the asynchronous motors is in the range from 5.17 to 6.35, which fully agrees with the nominal values. From the obtained stator current dependences of the 75, 132 and 160 kW asynchronous motors, it was established that all asynchronous motors started within a time interval of 26-35 seconds.

Figure 4 shows the simulation model of the AE, developed using the PSCAD software package. This model was used to study the AE operating modes in stationary conditions, when the low-voltage frequency converter is not actively involved in the work or has failed, and the electric drives of the pump units are started directly from the electric network. This AE simulation model allows one to take into account the parameters of all compensating units (CU) and to determine the energy parameters in all parts of the complex electrical connections. It also allows one to explore the process of starting the electric drives of the pumping units (Figures 5 and 6) under voltage drops in the distribution electrical network. Similar studies were carried out for simulation models of the electric drives of pumping units with motors of 75 and 160 kW. Simulation modeling was carried out for a voltage drop of 30% in the distribution electrical network with a 3 s duration [1,5-7,12]. As a result of simulation modeling, we obtained graphs showing the build-up of frequency, the electromagnetic and dynamic moments (Figure 5) and the stator current (Figures 5 and 6) of the AM [13,14].

Analysis of the supply line current of the 132 kW asynchronous motor (Figure 6) showed that this motor, in the absence of an individual CU, is loaded on average at 79% of the rated current, and at 71% when the CU is taken into account. When an individual CU is used, the supply line current is reduced by 8%. The electromagnetic moment and the moment of resistance in the steady state are 0.7 (Figure 5). From the plots of the 132 kW AM stator current, it is seen that without an individual CU the motor starts in 9.5 seconds, and with the CU it starts in 8.5 seconds (Figure 6).
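The 8% reduction in line current with an individual compensating unit is consistent with simple power-factor arithmetic: for a fixed active power, line current scales as 1/cos φ. A minimal sketch with assumed power factors (0.85 without and 0.92 with compensation; these values are illustrative and not taken from the paper):

```python
from math import sqrt

def line_current(p_kw, v_kv, cos_phi):
    """Three-phase line current (A) for active power P at line voltage V and power factor cos(phi)."""
    return p_kw * 1e3 / (sqrt(3) * v_kv * 1e3 * cos_phi)

# Assumed values for a 132 kW drive on a 0.4 kV network
i_no_cu = line_current(132, 0.4, 0.85)   # no individual compensating unit
i_with_cu = line_current(132, 0.4, 0.92) # power factor raised by the CU
print(f"current reduction: {1 - i_with_cu / i_no_cu:.1%}")  # ~7.6%, close to the 8% above
```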
Conclusions

The scientific novelty of the proposed work lies in the developed mathematical and simulation models, which take into account disturbances in external and internal distribution electrical networks and allow one to determine the energy parameters of the operating mode in all areas and elements of the outgoing line, taking into account individual, node and centralized compensating installations in transient and steady-state modes.
Preliminary Phytochemical Screening and Biological Activities of Adina cordifolia

Adina cordifolia leaf was investigated for its phytochemical constituents and antioxidant activity. The plant extracts were screened for the presence of flavonoids, carbohydrates, alkaloids, saponins, phenols, tannins, phlobatannins, terpenoids and cardiac glycosides. Total flavonoid and phenol contents were estimated. Antioxidant activity was determined using the nitric oxide scavenging assay, the DPPH assay, and hydrogen peroxide scavenging and ferric reducing methods; the MIC was also determined against a set of bacteria (S. aureus, B. subtilis, E. coli, K. pneumoniae).

Introduction

Medicinal plants contain organic compounds which provide definite physiological action on the human body; these bioactive substances include tannins, alkaloids, carbohydrates, terpenoids, steroids and flavonoids [1]. Awareness of medicinal plant usage is a result of many years of struggle against illness, through which man learned to pursue drugs in barks, seeds, fruit bodies, and other parts of plants [2]. The knowledge of the development of ideas related to the usage of medicinal plants, as well as the evolution of awareness, has increased the ability of pharmacists and physicians to respond to the challenges that have emerged with the spreading of professional services in facilitation of man's life [3]. About 7,000 species of medicinal plants are recognized all over the world. The medicinal value of plants lies in chemical substances that produce a definite physiologic action on the human body [4]. The most important of these bioactive compounds of plants are alkaloids, flavonoids, tannins and phenolic compounds. Phytochemical research based on ethno-pharmacological information is generally considered an effective approach in the discovery of new anti-infective agents from higher plants [5]. Scientists estimate that there may be as many as 10,000 different phytochemicals having the potential to affect diseases such as cancer, stroke and metabolic syndrome [6]. Plants are rich in a wide variety of secondary metabolites such as tannins, terpenoids, alkaloids, and flavonoids, which have been proved in vitro to have antimicrobial properties. The use of plant extracts and phytochemicals, both with known antimicrobial properties, can be of great significance in therapeutic treatments [7]. Adina cordifolia is a moderate-sized deciduous tree that grows up to 35 m in height. Leaves are large, cordate and abruptly acuminate. Flowers are yellow, in globose pedunculate heads; fruits are capsules splitting into two dehiscent cocci; seeds are many, narrow, small, and tailed.

Test for flavonoids

Test solution, when treated with a few drops of ferric chloride solution, would result in the formation of a blackish-red color indicating the presence of flavonoids [9].

Test for carbohydrates

Benedict's test: Test solution was mixed with a few drops of Benedict's reagent (an alkaline solution containing cupric citrate complex) and boiled in a water bath, then observed for the formation of a reddish-brown precipitate as a positive result for the presence of carbohydrates [10].

Test for Alkaloids

One milliliter of aqueous extract was stirred and placed in 1% aqueous hydrochloric acid on a steam bath. Then, 1 mL of the filtrate was treated with Dragendorff's reagent. Turbidity or precipitation with this reagent was considered as evidence for the presence of alkaloids [11].
Test for saponins
About 2 g of the powdered sample was boiled in 20 ml of distilled water in a water bath and filtered. 10 ml of the filtrate was mixed with 5 ml of distilled water and shaken vigorously to obtain a stable persistent froth. The froth was mixed with 3 drops of olive oil, shaken vigorously, and then observed for the formation of an emulsion [12].

Test for phenols
5 ml of Folin-Ciocalteu reagent and 4 ml of aqueous sodium carbonate were added to 0.5 ml of extract. The appearance of a blue color indicates the presence of phenols.

Test for tannins
About 0.5 g of the extract was boiled in 10 ml of water in a test tube and then filtered. A few drops of 0.1% ferric chloride were added, and the solution was observed for a brownish-green or blue-black coloration [13].

Test for phlobatannins
The extracts were boiled with 1% aqueous hydrochloric acid. The formation of a red precipitate indicates the presence of phlobatannins.

Test for terpenoids
5 ml of each extract was mixed with 2 ml of chloroform, and 3 ml of concentrated sulphuric acid was carefully added to form a layer. A reddish-brown color at the interface indicates the presence of terpenoids [14].

Test for cardiac glycosides
5 ml of each extract was treated with 2 ml of glacial acetic acid containing one drop of ferric chloride solution. This was layered with 1 ml of concentrated sulphuric acid. A brown ring at the interface indicates the presence of a deoxysugar, which is characteristic of cardenolides. A violet ring may appear below the brown ring, while a greenish ring may form in the acetic acid layer, spreading gradually throughout the layer [14].

Determination of total flavonoid content
About 10 g of the plant sample was extracted repeatedly with 100 ml of solvent (water, methanol, ethanol, petroleum ether, chloroform, or benzene; 80% for all except water) at room temperature. The whole solution was filtered through Whatman filter paper. The filtrate was then transferred into a crucible and evaporated to dryness over a water bath, and the dry content was weighed to a constant weight [15].

Determination of total phenol content
Total phenols were determined with the Folin-Ciocalteu reagent. 5 ml of Folin-Ciocalteu reagent and 4 ml of aqueous sodium carbonate were added to 0.5 ml of extract. After 15 min of incubation at room temperature, the absorbance was read at 765 nm. The standard curve was prepared using gallic acid, and total phenols were expressed in terms of gallic acid equivalents (mg GAE/100 g FW) [8].

Determination of nitric oxide scavenging activity
Nitric oxide scavenging activity was measured by the spectrophotometric method [16]. Sodium nitroprusside (5 mM) in phosphate-buffered saline was mixed with test solutions at different concentrations (7.8-1000 µg/ml) dissolved in methanol, as well as with a control without the test compound, and incubated at 25 °C for 30 min. After 30 min, 1.5 ml of the incubated solution was removed and diluted with 1.5 ml of Griess reagent (1% sulphanilamide, 2% phosphoric acid, and 0.1% naphthylethylenediamine dihydrochloride). The absorbance of the chromophore formed during the coupling with naphthylethylenediamine dihydrochloride was measured at 546 nm (Oyaizu M., 1986). Inhibition of the nitric oxide free radical, in percentage, was calculated by the formula:

% inhibition = [(A_control - A_test) / A_control] × 100

where A_control is the absorbance of the control (solution without extract) and A_test is the absorbance of the samples (extract and ascorbic acid).
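Because the same percentage-inhibition expression is reused below for the DPPH and hydrogen peroxide assays, it can be captured in a small helper. The following Python snippet is purely illustrative (it is not part of the original study), and the absorbance values in the example are hypothetical:

```python
def percent_inhibition(a_control: float, a_test: float) -> float:
    """Scavenging activity as percentage inhibition.

    a_control: absorbance of the control (reaction mixture without extract)
    a_test:    absorbance with extract (or with the ascorbic acid standard)
    """
    return (a_control - a_test) / a_control * 100.0

# Hypothetical absorbances, for demonstration only:
print(f"{percent_inhibition(0.820, 0.350):.2f}% inhibition")  # 57.32% inhibition
```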
Determination of 1,1-diphenyl-2-picrylhydrazyl (DPPH) scavenging activity
The DPPH assay was carried out as described by Hsu et al. [17], with some modifications. 1.5 ml of 0.1 mM DPPH solution was mixed with 1.5 ml of leaf extract at various concentrations (10 to 500 µg/ml). The mixture was shaken vigorously and incubated at room temperature for 30 min in the dark. The reduction of the DPPH free radical was measured by reading the absorbance at 517 nm with a spectrophotometer. A solution with DPPH and methanol but without any extract was used as the control, and ascorbic acid was used as the positive control. The experiment was replicated in three independent assays. Inhibition of the DPPH free radical, in percentage, was calculated by the same formula as above, where A_control is the absorbance of the control (solution without extract) and A_test is the absorbance of the samples (extract and ascorbic acid).

Determination of hydrogen peroxide scavenging activity
The ability of the Adina cordifolia extracts to scavenge hydrogen peroxide was determined according to a reported method [18]. A solution of hydrogen peroxide (40 mM) was prepared in phosphate buffer (pH 7.4). Extracts (100 μg/mL) in distilled water were added to the hydrogen peroxide solution (0.6 mL, 40 mM). The absorbance of hydrogen peroxide at 230 nm was determined 10 minutes later against a blank solution containing the phosphate buffer without hydrogen peroxide [18]. The percentage of hydrogen peroxide scavenging by both the Adina cordifolia extracts and the ascorbic acid standard was calculated with the same formula, where A_control is the absorbance of the control (solution without extract) and A_test is the absorbance of the samples (extract and ascorbic acid).

Determination of ferric reducing antioxidant power
For the reducing power assay, the absorbance was measured at 700 nm against a blank using a UV-Vis spectrophotometer. Increased absorbance of the reaction mixture indicates an increase in reducing power.

Determination of Minimum Inhibitory Concentration (MIC)
The minimum inhibitory concentration was determined for the active extracts using the broth dilution method. Five test tubes were used for each organism: four for the dilution series and one as a control. 9 ml of nutrient broth was dispensed into each of the dilution test tubes, which were autoclaved at 121 °C for 15 minutes, allowed to cool, and properly labeled. One milliliter of reconstituted extract, prepared by dissolving 1 ml of extract in 10 ml of solution, was dispensed into the first test tube and shaken. From the first test tube, 1 ml of the mixture was taken and dispensed into the second and shaken; this was repeated for the rest of the test tubes, and 1 ml was discarded from the last one. This dilution was carried out for all viable extracts, giving rise to 10,000, 1,000, 100, and 10 mg/ml of extract, respectively, in the test tubes. The test tubes were then inoculated with the test organisms using a sterile wire loop; the fifth test tube was not inoculated and served as a control. After inoculation, the test tubes were properly sealed and incubated for 24 hours, then observed for turbidity (growth of the organism). The test tube with the least concentration showing no turbidity indicates the MIC [20].
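As a sketch of the arithmetic behind the broth-dilution series just described (1 ml carried over into 9 ml of broth gives a ten-fold dilution per tube), the following illustrative Python snippet reproduces the tube concentrations quoted in the text and the MIC reading rule; it is not code from the study, and the turbidity pattern in the example is hypothetical:

```python
def dilution_series(first_tube_mg_ml: float, n_tubes: int) -> list[float]:
    """Tube concentrations for a ten-fold broth dilution series:
    1 ml of the previous tube transferred into 9 ml of fresh broth."""
    return [first_tube_mg_ml / 10 ** i for i in range(n_tubes)]

def mic(concentrations: list[float], turbid: list[bool]) -> float:
    """MIC = lowest concentration whose tube shows no turbidity (no growth)."""
    return min(c for c, t in zip(concentrations, turbid) if not t)

tubes = dilution_series(10_000, 4)
print(tubes)                                   # [10000.0, 1000.0, 100.0, 10.0] mg/ml
print(mic(tubes, [False, False, True, True]))  # hypothetical pattern -> MIC = 1000.0
```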
Results
The extraction of the leaf powder was done in six different solvents, viz. ethanol, methanol, petroleum ether, benzene, chloroform, and water. The different extracts of the plant were investigated for their antioxidant, phytochemical, and antimicrobial activity.

Qualitative phytochemical analysis
The phytochemical characteristics of the tested plant are summarized in tabular form.

Determination of total flavonoid content
The maximum total flavonoid content of Adina cordifolia was found to be 0.22 mg/g, in the methanol extract, while the benzene extract had the lowest content (0.132 mg/g). These phytochemicals protect cells from the oxidative damage caused by free radicals.

Determination of nitric oxide scavenging activity
The scavenging of NO was found to increase in a dose-dependent manner. The extracts showed weak nitric oxide scavenging activity at low concentrations (100 and 200 μg/ml). Maximum inhibition of NO was observed in the chloroform extract at the highest concentration (500 µg/ml). At this concentration, inhibition was 70.66% for ascorbic acid, which served as the standard, while the highest inhibition for the dry leaf extract was 57.35%. At 500 µg/ml, the other solvent extracts showed 56.15% (methanol), 49.45% (ethanol), 52.1% (petroleum ether), 51.32% (water), and 50.15% (benzene), respectively, as shown in Graph 2.

Determination of DPPH scavenging activity
The DPPH radical scavenging activity of the dry leaf extract of Adina cordifolia was compared with ascorbic acid. At a concentration of 500 µg/ml, the scavenging activity of the ethanol extract reached 59.33%, while at the same concentration the standard reached 72.56%. At 500 µg/ml, the other solvent extracts showed 51.23% (methanol), 49.36% (water), 43.25% (petroleum ether), 44.25% (chloroform), and 38.59% (benzene), respectively, as shown in Graph 3.

Determination of hydrogen peroxide scavenging activity
The Adina cordifolia leaf extract scavenged hydrogen peroxide, which may be attributed to the presence of phenolic groups. The scavenging of H2O2 was found to increase in a dose-dependent manner. Maximum inhibition of H2O2 was observed in the ethanol extract at the highest concentration (600 µg/ml): inhibition reached 61.47% for the ethanolic extract of the dry leaf, whereas ascorbic acid showed higher inhibition. At 600 µg/ml, the other solvent extracts showed 50.36% (methanol), 60.9% (water), 56.02% (petroleum ether), 48.65% (chloroform), and 43.9% (benzene), respectively, as shown in Graphs 4-7.

Determination of ferric reducing antioxidant power
The data show that all samples increased their reducing ability as the concentration of the extracts increased. At 200 ppm, the ethanol extract had the highest ability to reduce Fe(III), with no significant difference from the other solvent extracts. At 1200 ppm, the ethanol and methanol extracts had the greatest ability to reduce Fe(III) to Fe(II), with the methanol extract showing the highest absorbance, followed by ethanol, water, petroleum ether, benzene, and chloroform, respectively.
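For a quick side-by-side view, the solvent-wise scavenging percentages reported above at the common 500 µg/ml dose can be collected and ranked programmatically. The values below are copied from the text; the snippet itself is only an illustration:

```python
# % scavenging at 500 µg/ml, as reported in the text
no_scavenging = {"chloroform": 57.35, "methanol": 56.15, "ethanol": 49.45,
                 "petroleum ether": 52.1, "water": 51.32, "benzene": 50.15}
dpph_scavenging = {"ethanol": 59.33, "methanol": 51.23, "water": 49.36,
                   "petroleum ether": 43.25, "chloroform": 44.25, "benzene": 38.59}

for assay, data in [("NO", no_scavenging), ("DPPH", dpph_scavenging)]:
    best = max(data, key=data.get)
    print(f"{assay}: most active extract = {best} ({data[best]}%)")
```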
Determination of Minimum Inhibitory Concentration (MIC)
The MIC was evaluated for the plant extracts that showed antimicrobial activity. The MIC values obtained against the bacterial strains varied from one extract to another for at least three of the microorganisms tested. The ethanol extracts of Adina cordifolia were the most active against the microorganisms studied. This possibly means that the compound responsible for the antimicrobial activity was present in each extract at a different concentration. The ethanol extracts exhibited a higher degree of antimicrobial activity compared with the water and other extract fractions. This finding is consistent with the medicinal preparations that use rum and liquor to extract the active plant components. For instance, MIC values of 0.76, 0.78, 0.88, and 0.91 mg/ml were obtained for the ethanolic extracts of A. cordifolia. The strongest activity (MIC 0.76 mg/mL) was observed in the ethanolic leaf extract of A. cordifolia, while the benzene leaf extract was the least active, showing an MIC of 3.30 mg/mL.

Conclusion
The results revealed the presence of medicinally important constituents in the plant studied. Much evidence gathered in earlier studies confirms the identified phytochemicals to be bioactive, and several studies have confirmed that these phytochemicals contribute medicinal as well as physiological properties to the plants employed in the treatment of different ailments. Therefore, extracts of the dry leaf powder of Adina cordifolia could be seen as a good source of useful drugs. This study was designed to investigate the phytochemical characterization and to evaluate the in vitro antioxidant activity of Adina cordifolia, which is used as an antiulcer and anti-inflammatory agent in Ayurvedic remedies. To conclude, the above experiments clearly indicate that the different extracts of the dry leaf powder of Adina cordifolia showed effective free radical scavenging activity, which can be attributed to the presence of flavonoids, tannins, and phenolics along with other compounds. Phenolic compounds, flavonoids, and other secondary metabolites were detected in the six different extracts. The ethanolic, aqueous, and methanolic extracts gave the strongest results for phenolics, flavonoids, and scavenging activity, and the same extracts of Adina cordifolia also showed effective antimicrobial activity against the four bacterial strains. The traditional medicinal use of this plant is strongly supported, and it is suggested that further work be carried out to isolate, purify, and characterize the active constituents responsible for its activity. Additional work is also encouraged to elucidate the possible mechanisms of action of these extracts. Further investigations on the different extracts of the dry leaf powder of Adina cordifolia, on the purification of compounds, and on the molecular mechanisms of their protective actions will be performed.
Terrestrial and Marine Landforms along the Cilento Coastland (Southern Italy): A Framework for Landslide Hazard Assessment and Environmental Conservation

This study shows the terrestrial and marine landforms present along the Cilento coast in the southern part of the Campania region (Italy). This coast is characterized by the alternation of bays, small beaches, and rocky headlands. In the adjacent submerged areas, there is a slightly inclined platform that has a maximum width of 30 km to the north, while it narrows in the south to approximately 6 km. A wide variety of landforms are preserved in this area, despite the high erodibility of the rocks emerging from the sea and the effects of human activities (construction of structures and infrastructures, fires, etc.). Of these landforms, we focused on those that enabled us to determine Quaternary sea-level variations; more specifically, we focused on the correlation between coastal and sea-floor topography in order to trace the geomorphological evolution of this coastal area. For this purpose, the Licosa Cape and the promontory of Ripe Rosse, located in northern Cilento, were used as reference areas. Methods were used that enabled us to obtain a detailed digital cartography of each area and consequently to apply physically based coastal evolution models. We believe that this approach would provide better management of coastal risk mitigation, which is likely to become increasingly important in the perspective of climate change.

Introduction
In relatively recent coastal landscapes, such as those of the central-western Mediterranean Sea, the events responsible for landform evolution and the controls they underwent must be sought within the last hundred thousand years. Furthermore, morphogenetic events continue to exert their effect and shape the landscape today, which is complicated by the actions of human beings, who built facilities and infrastructures along the coasts to promote tourism or facilitate mobility [1]. These actions are often performed without analyzing landforms and processes carefully, thus causing instability or increasing environmental vulnerability and degradation [2]. This study aims to highlight the main emerged and submerged landforms present along the spectacular coastscape of the National Park of Cilento, in southern Italy. This coastal landscape, with lovely inland areas, has received several international awards. In 1997, the entire region was recognized as a UNESCO (United Nations Educational, Scientific and Cultural Organization) Biosphere Reserve, with the aim of maintaining a long-term equilibrium between man and his environment by conserving biological diversity, promoting economic development, and preserving cultural values (MAB, Man and Biosphere program), while, in 1998, three sites in the Cilento area (Paestum, Velia, and Padula) were included in the UNESCO World Heritage List.
The internal units also represent the lithotypes of the submerged coastal area; in this case, they are partially covered by veils of more recent sediments [16-18]. The external units are mainly composed of Mesozoic-Tertiary carbonates (Bulgheria unit, Middle Liassic to Lower Miocene [13,19]), representative of sedimentary environments ranging from shallow-water carbonates (often back-reef facies) to deep-water carbonates. The outcrops of these units are located on the high and rocky coastline of southern Cilento [18] (Figure 2). On the coastal bottoms, even where partially emerged, these rocks are often covered with calcareous algae and animals with calcareous skeletons (sponges, corals, serpulids, bryozoans, mollusks) [17,18,20,21].

In disconformity on the previous units, Middle and Upper Miocene syn-orogenic units are present, whose successions are made up mainly of fine to extremely coarse pelitic and calcareous-marly arenaceous turbidites deposited in deep submarine fans (thrust-top basins) [22]. Of these sequences, those of the Cilento group (Upper Burdigalian-Upper Tortonian) [23] are the most common along the coast and are generally found on the internal units. In submerged areas, these units are frequently covered by recent sands colonized by burrowing organisms and sometimes by seagrass meadows [17,18]. The sand cover usually passes to muds away from the coastal bottoms.

The Quaternary post-orogenic units include all the continental, transitional, and marine clastic sediments deposited after the final emergence of the Apennine Chain, probably beginning in the Lower Pliocene [12,13]. In Cilento, they are represented by aeolian, fluvial, slope, lake, and travertine deposits exposed along the river valleys and on the plains near the coast, as well as by the marine transitional deposits stacked on the emerged and submerged coastal areas. These units may show intercalations of the products of Campanian volcanic activity [24-27] (Figure 3).

The geological and tectonic setting mentioned above led to a prevalent morpho-structural control of the rocky coastal landscape of the Cilento area, sometimes resulting from the retreat and replacement of previous fault-line scarps, alternating with small, elongated coastal plains (e.g., the Alento River plain) [27,28]. These plains were formed by the deepening of the rivers, favored by the correspondence with tectonic lineaments and the easy erodibility of the outcropping lithotypes.
The filling of these flared coastal valleys occurred due to river flooding and marine ingressions. The traces of marine sediments uplifted to different altitudes on terraces, together with the transitional sediments on the continental shelf, show how sea-level variations are superimposed on tectonic events. This is most easily seen along the southern coasts of Cilento, composed of the external carbonate units. In particular, the coastal profile of Mount Bulgheria shows ancient levelled surfaces (up to 400 m) bearing marine sediments from the Lower Pleistocene onward [29,30]. Calcareous cliff faces at sea level are often vertical [3,31]. The rest of the coast is composed of the terrigenous deposits of the internal and syn-orogenic units, which gradually descend toward the sea through escarpments that are stratified or covered by debris, locally terraced, with generally concave profiles that are sometimes composite with different gradients [3,31]. This diversity is due to the presence of marly-clayey levels or pelitic interlayers, which facilitate the occurrence of landslides in continuous evolution. To complete this brief geomorphological analysis, it is essential to mention the coastal slopes composed of clastic sediments, such as those represented by the steep Pleistocene dune-beach systems. The oldest marine abrasion surfaces preserved in soft rock date back to the Upper Pleistocene [30] and are mainly found in the northern coastal section. Lastly, accumulations of debris and sand tongues occur at the base of the cliffs on the shoreline and close to micro-crags formed by terrigenous and clastic rocks. The former come from the dismantling of the adjacent slope, whereas the latter come from the coastal morphodynamics, which transport sand and deposit it in the inlets [3,31].

The analysis of the submerged portion mainly concerns the continental shelf [16,17] (Figure 4), which has a maximum width of 30 km in the north and a minimum width of 6 km in the south, with an edge generally located at a depth of 200 m, except for the northern stretch of coast, where it is situated at a depth of approximately 230 m (Licosa Cape offshore), and the southeastern stretch of the coastline in the Gulf of Policastro, where it is located in shallower water (<100 m). The average slope varies from 0.3° in the northern sector to 0.8° in the southern sector, in correspondence with the narrowest portion. In this submerged portion, several marine abrasion terraces were identified, which were formed by the action of sea waves during Pleistocene paleo-standings of sea level, with edges located at various depths [16,17]. Furthermore, depressed areas filled with sediments of varying grain size were identified near the emerged valleys (e.g., the Alento River valley), confirming that the major structural elements of the emerged part continue beneath the sea surface. Geophysical sub-bottom profiles show a series of normal and listric faults, oriented northwest to southeast [32], caused by the collapse of the Tyrrhenian margin during the Pliocene and the Lower Pleistocene [12,16,17]. This type of fault probably defined the current coastal profile of the Cilento promontory, which has the same orientation.
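As a rough plausibility check on the shelf geometry quoted above, the depth reached at the shelf edge can be estimated as width × tan(average slope), idealizing the shelf as a uniform ramp. The short Python sketch below uses only the figures given in the text:

```python
import math

# (sector, shelf width in km, average slope in degrees), from the text
sectors = [("northern sector", 30.0, 0.3),
           ("southern sector", 6.0, 0.8)]

for name, width_km, slope_deg in sectors:
    depth_m = width_km * 1000.0 * math.tan(math.radians(slope_deg))
    print(f"{name}: ~{depth_m:.0f} m at the shelf edge")

# northern sector: ~157 m; southern sector: ~84 m -- the same order of
# magnitude as the reported edge depths (~200-230 m in the north, <100 m
# in the Gulf of Policastro), given that a real shelf is not a uniform ramp.
```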
At lower depths, sandy plains generally prevail in continuity with the emerged beaches and degrade toward the mudflats offshore. Locally, sands can also be found at greater depths; in this case, they represent ancient relict shorelines, which were formed when the sea level was lower than it is today [16-18].

The climate on the Cilento coast is temperate, with average annual temperatures of approximately 17 °C (12.6-20.8 °C) and an average annual rainfall that varies from 730 mm in the northern sector to 790 mm in the southern sector. Rainfall is concentrated in spring and late autumn, while, during the summer, there are long periods of drought. This climate is favorable for the development of evergreen forests and Mediterranean scrub along the coast. Of particular interest are the native spontaneous species that grow in the coastal areas, approximately 10% of which are of considerable phytogeographic importance, as they are endemic and/or rare [33,34]. On the beaches, among the sand communities, the increasingly rare sea lily (Pancratium maritimum) is still present; on the reefs, in direct contact with sea spray, phytocoenoses with highly specialized halophytes live, and the endemic Salerno statice (Limonium remotispiculum) thrives (Figure 5a).

(Figure 4 legend [16]: HN = terrestrial hydrography; LC = low coast; HC = high coast; CH = channels incised in the sea bottom; AS = acoustic substrate rising from the sea bottom; L = depressed areas; AC = stacks as ancient mouth complexes; SB = sandy bodies rising from the sea bottom; TB = edges of abrasion terraces; T = morpho-structural terrace; TR = trench; I = isobath.)

On the coastal cliffs, the Mediterranean rupicolous species are dotted with precious endemics such as the Palinuro primrose (Primula palinuri), the cliff carnation (Dianthus rupicola), the centaurea (Centaurea cineraria), the iberide florida (Iberis semperflorens), the Neapolitan campanula (Campanula fragilis), and many other flowering plant species that compose a coastal landscape of rare beauty (Figure 5b). In the sunniest and driest areas, we find the Cilento broom (Genista cilentina), the carob (Ceratonia siliqua), the red or Phoenician juniper (Juniperus phoenicea), and holm oak and pine woods (Pinus halepensis), which seem to be expanding again as they are being reforested.
On these stretches of coastline, frequent fires and the widening of roads to reach the homes built on the slopes have increased land degradation and reduced slope stability. However, there are still coastal stretches that preserve their original natural condition, which are monitored closely by the Cilento, Vallo di Diano and Alburni National Park with the aim of mitigating damage and preventing deterioration [3,31]. More recently, the municipalities of Santa Maria di Castellabate in the north and the Costa degli Infreschi and Masseta in the south developed marine conservation and monitoring strategies. The reason for protecting and monitoring these marine areas is the richness of their seabeds, which contain biocenoses of great interest, such as pre-coralligenous and coralligenous species, as well as large Posidonia seagrass beds (Posidonia oceanica) [17,18].

Materials and Methods
Firstly, a review of the existing literature on the geology and geomorphology of Cilento was carried out. Most of these studies were focused on short stretches of coastline that offered particular cues, as they were highly didactic and representative for the development of research (i.e., References [20,25,27]). Previous coastal geomorphological studies did not integrate information on the dynamics and geomorphological evolution of the submerged sectors. This study attempts to fill these research gaps by trying to correlate the emerged and submerged landforms of the northern Cilento sites near Punta Licosa, using an integrated approach (Figure 1). Integrated geomorphological surveys and analyses were carried out, starting from the current terrestrial and submerged landforms. The underwater surveys were carried out using boat-based underwater viewing systems and scuba dives. The results of these surveys were supported by consulting topographic maps of the area. The oldest topographic maps used were the 1956 1:25,000-scale sheets supplied by the IGMI (Istituto Geografico Militare Italiano), while the most recent was the 2004 1:5000-scale map supplied by the Campania Region. The information on these maps was completed by examining various aerial photogrammetric images produced by the IGMI from 1943 onward, up to those taken in 2012 by the Campania Region. Images found on the web were also analyzed, particularly those taken from Google Earth in 2015 [35], as well as those available on the National Cartographic Portal of the Italian Ministry of Environment in 2012 [36]. From these images, the LIDAR-derived DEM was extracted for some specific areas.
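Where a LIDAR-derived DEM is available, first-order geomorphometric layers such as slope can be computed directly from the elevation grid. The sketch below is a generic illustration in plain NumPy, not the workflow actually used in the study; the tiny grid and the 1-m cell size are invented for the example:

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope of an elevation grid, in degrees, from finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Toy 4x4 elevation grid (m); a real DEM would be read from a GeoTIFF
dem = np.array([[12.0, 11.5, 11.0, 10.2],
                [11.8, 11.2, 10.5,  9.8],
                [11.5, 10.8, 10.0,  9.1],
                [11.0, 10.2,  9.3,  8.4]])
print(slope_degrees(dem, cell_size=1.0).round(1))
```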
The new geological cartography created for this area enabled us to highlight the emerged and submerged landforms of the Cilento coast; more specifically, sheets 502 "Agropoli", 519 "Capo Palinuro", and 520 "Sapri" [37-39] represented the basis for defining the nature and genesis of the coastal landforms. Subsequently, the availability of a map realized by ISPRA (National Institute for Environmental Protection) for the inclusion of the Cilento, Vallo di Diano and Alburni National Park in the UNESCO Global Geoparks Network provided us with a broader view [18]. In fact, this map not only adds to the information obtained from the sheets mentioned above, but also focuses on some specific aspects, such as the characteristics of protected marine habitats.

The set of information gathered enabled us to highlight the emerged and submerged coastal landforms of Cilento in more detail than the existing literature and to qualitatively reconstruct the short- and medium-term geomorphological evolution of various coastal landscapes, such as high cliffs. However, the need to make this information available to planners and administrators for future reference led the authors to develop innovative approaches. Therefore, a cartography was created using the Salerno University geomorphological mapping system (GmIS_UniSa) [40], which is based on a GIS procedure that includes traditional symbol-based cartography as well as full-coverage, object-based polygonal structures with multiple dataset themes and rule sets. This approach captures the physical features of simple landforms or composite physical surfaces by defining elementary polygons, or several adjacent polygons, and then determining the processes that generated them. Moreover, it enables us to establish the geomorphological model by defining the relationships (geometric, temporal, physical, geological, lithological, and hierarchical) between the different landforms represented [40,41]. Unfortunately, due to our limited knowledge of the seabed, it is not yet possible to create a similar digital map for the submerged area. However, the better representation of the emerged landforms obtained with the "object-oriented" cartography, and of their relationships with the submerged ones, has improved our knowledge of this coast in space and time.

Subsequently, for particular stretches such as Licosa Cape and Ripe Rosse, based on a quantitative restitution of the landforms and of the role of the coastal processes that generated them in the past, particularly since the late Pleistocene, we tested a physically based numerical model of evolution using the SCAPE software with its open-source components and tools [42]. With SCAPE, it was possible to trace the evolution of the basal part of a particularly high coast, where wave action is almost exclusively responsible, projected over the next 500 years. The shape of the coast used for this software was described by a series of large-scale profiles (1:2000), collected from the same reference line, while, for the basal part of the representative profile modeled by SCAPE, a 1:500 scale was chosen. The execution of the model generated a series of output files with data on the profiles of the rocky cliffs and the beaches below them, on the annual flow and transport rate of sediments, and on the accumulated annual volume. This information, processed using programs such as Excel and Matlab, enabled us to obtain a graphic representation of the data acquired with the SCAPE program. The results obtained help us to understand the coastal processes that occur on a particular stretch of coast, and they allow intervention measures to be planned, preventing or reducing damage and risks to the environment. A toy sketch of how such a profile model operates is given below.
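SCAPE itself is open source; purely to give a flavor of how such physically based profile models operate, the Python sketch below retreats each elevation band of a soft-rock profile at a rate proportional to a Walkden-Hall-type wave forcing concentrated near the waterline. Every parameter value here is an invented placeholder, not a calibration for the Cilento coast:

```python
import numpy as np

H_SIG, T_PEAK = 1.5, 7.0   # assumed wave height (m) and period (s)
STRENGTH = 2.0e3           # assumed material resistance (arbitrary units)
YEARS = 500                # horizon used in the text for the basal cliff part

z = np.linspace(-4.0, 4.0, 81)   # elevation bands relative to mean sea level (m)
retreat = np.zeros_like(z)       # cumulative landward retreat of each band (m)

def erosion_shape(z: np.ndarray, hb: float = 1.5) -> np.ndarray:
    """Assumed vertical distribution of erosive work: peaked at the waterline."""
    w = np.exp(-(z / hb) ** 2)
    return w / w.max()

# Walkden & Hall (2005)-type bulk forcing ~ H^(13/4) * T^(3/2) / strength
annual_rate = (H_SIG ** 3.25) * (T_PEAK ** 1.5) / STRENGTH
for _ in range(YEARS):
    retreat += annual_rate * erosion_shape(z)

waterline = np.argmin(np.abs(z))
print(f"Waterline retreat after {YEARS} yr: {retreat[waterline]:.1f} m")
```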
Terrestrial and Marine Landforms of Cilento
A detailed description of all the terrestrial and marine landforms of the Cilento coast would go beyond the purpose of this study. Instead, we emphasize the landforms which are relevant to a better understanding of vulnerable landscapes and to promoting the conservation of the emerged and submerged geomorphological features of the study area [3,18,43].

Of the 100-km-long Cilento coastline, 70% is rocky, while 30% consists of sand or pebble beaches. Approximately 14 km of coast [44,45] was not considered in these percentages, as it is mainly occupied by anthropic structures [46], concentrated in the port areas (e.g., Agropoli in the north, Casalvelino in the Alento River plain, Marina di Camerota in the south), even if a few were built to protect eroded sections of the coastline. The direct survey, assisted by aerial photographs as well as by digital observation systems (LIDAR) on particular stretches, allowed the correlation between the various coastal stretches characterized by rocky outcrops composed both of the calcareous sequences of the external units and of the turbidite successions of the internal and syn-orogenic units. Each sequence shows a different morphological configuration for geological reasons (lithology and tectonics) and because of the erosive-depositional phenomena that affect it [47]. In some cases, these phenomena can be attributed to the sea-level changes that occurred during the last hundred thousand years [48]. A further differentiation concerns how these high, rocky coasts are connected to the current submerged portion, as detected by the geophysical surveys carried out on the seabed in recent decades [16,17]; the transition may be sharp or gradual, due to the presence of debris stacks. The combination of these conditions implies a particular morphological evolution correlated with each type of rocky coastline [31,47].

Along the Cilento coast, rocks with low erosion resistance (soft rocks) prevail, represented by sequences in which sandstones and/or calcarenites are intercalated with clay levels. These successions are attributable to the internal units, to the syn-orogenic units, and to the post-orogenic deposits. In many cases, the emerged portion is connected to the submerged portion by a broad coastal platform (>200 m), with a sea-bottom slope that only locally exceeds 10% (Type A in Reference [49]; Type A1 in Reference [47]). The profile is generally convex with an almost uniform gradient (on average 45°), although there may occasionally be concavities in the upper portion of the cliff or differences in gradient (Figure 6a). The evolution of this morphotype takes place through the parallel retreat of the cliffs, induced by wave motion that progressively erodes the base of the cliffs, thus causing the collapse of the unstable material of the slopes. Moreover, meteoric degradation occurs on these slopes, which can be decisive when the turbiditic succession presents a high argillaceous fraction; in this case, shortening accompanies the cliff retreat [47]. Therefore, few landforms created by coastal processes are conserved on these cliffs; however, where wave motion is less forceful (e.g., on a broad, sub-horizontal coastal platform) and there is less degradation (e.g., fewer pelitic intercalations, less extension of the exposed surface), relatively recent landforms can still be observed today [50-53].
More specifically, the latest interglacial sediments and landforms (OIS5) are still preserved on coastal stretches with these lithotypes [54]. For example, the age of the sites of Ogliastro Marina and Acciaroli in northern Cilento was determined by analyzing the extent of isoleucine epimerization in protein preserved in molluscan fossils embedded in raised marine deposits outcropping at 4 m a.s.l. [55]. These are sandy-matrix conglomerates or fossiliferous biocalcarenites containing the fossilized remains of numerous marine species (Glycimeris glycimeris, Astralium (Bolma) rugosum, Natica sp., Venus sp., Cardium sp., Tapes sp., Pecten sp., Spondylus gaederopus, Cladocora coespitosa, etc.), without a precise stratigraphic meaning but certainly indicative of a warm-temperate environment. Moreover, there are rather wide marine abrasion platforms at 4-5 m in the northern sectors, with slightly cemented sand dunes that also extend below sea level. These platforms are covered with red or sometimes brown colluvium that may contain the pyroclastic deposits attributed to the Campanian Ignimbrite (39 ka before present (B.P.) [56]). In addition, at approximately +2 m, a "beach rock" can still be seen in easily erodible soft rocks, which could be evidence of one of the last marine transgressions in Late Pleistocene times. This "2-m bench" reaches a maximum width of approximately 35 m in a few stretches of coastline. It remains uncovered by the sea, yet it is overwashed by storm waves at high tide. It is an almost horizontal platform, similar to that described in front of a cliff by Sunamura for Type B [49] (Figure 6b). Its position on the coast north of the promontory of Cape Palinuro means that it was less exposed to the most intense storm surges coming from the southeast, as suggested by Reference [2] in similar contexts. However, its presence in other areas (the cliffs north of the Alento River alluvial plain), even if narrower, shows that such benches can also occur where they are exposed to strong storms. Pools and channels on the platform surface become enlarged and integrated as their protruding edges recede. Cliff recession occurs due to shore platform lowering and flattening, weathering processes, and the removal of weathered material by wave action [50]. On coastal slopes modeled in sequences with lithotype alternations (e.g., turbidite successions), landslide phenomena are widespread and the related landforms are clearly detectable [57-60]. More than 220 different landslides were surveyed by various authorities [61]. Some landslides were caused directly or indirectly by wave action, while others were caused by lithological conditions (e.g., fractured rocks, layering, poorly consolidated sediments) or meteoric degradation (rainfall). The results of the survey show that rotational slides are the most common type of landslide, even though many of these are inactive; falls and complex landslides, such as slide-flows, are also very widespread. The presence of debris at the base of the cliffs can modify their evolution or accelerate the formation of beaches in coves or bays in the direction of the longshore current.
In the southern coastal area of Mount Bulgheria, the cliffs are composed of highly erosion-resistant limestone (hard rock) (Figure 7a). In many cases, these rocks continue below sea level, as they correspond to structural slopes. The profile is generally vertical or sub-vertical; thus, the action of the waves is drastically reduced, because the depth of the sea at the base of the cliff is greater than the breaker depth [47]. Therefore, on these cliffs, defined by Reference [49] as plunging cliffs, subaerial processes can prevail. The most common of these processes is rock fall, generally in correspondence with structural weaknesses [62-65].
Locally, erosional remnants are left on the seabed following cliff retreat, so that the seabed appears articulated, with small terraces, arches, and rocks emerging from the sea, as observed on this coastal stretch. However, the retreat rate is lower than for the previous morphotype, which allows for the conservation of a great variety of coastal landforms. In particular, at the base of the limestone cliffs, there are tidal notches or fossil biocorrosion grooves, often associated with holes bored by lithophagous species. Caves and hypogean karst cavities formed during the neotectonic period, which developed along the main fracture lines or occasionally along interstratal discontinuities, are almost always remodeled by wave erosion or marine biocorrosion and partially or totally filled with marine and continental sediments [3,29,66] (Figure 7b).

The marine sediments are generally conglomerates with a coarse, medium-cemented sandy matrix, known as "Panchina", mixed with bioclasts of gastropods and mollusks or coral fragments. They are usually associated with restricted and slightly sloping marine abrasion platforms. In other cases, they are represented by cemented biocalcarenites, such as "beach-rock", or by bioconstructions such as "trottoir" and "reefs", which are often composed of Cladocora coespitosa [20,21]. The continental sediments are almost always associated with low sea-level stands; they are essentially accumulations of pseudo-stratified breccias, mainly composed of calcareous elements with sharp or blunt edges, in an abundant reddish, colluvial, or pyroclastic matrix. Pre-Tyrrhenian breccias are often well cemented, with little or no reddish matrix. In other cases, the continental deposits are composed of reddish sands of colluvial or aeolian origin.
There are occasionally karst speleothems and concretionary accumulations in situ. The presence of pyroclastites (fine ash) is of particular importance, as they are excellent chronostratigraphic markers [20,29]. Brown and immature soils settle on both the breccias and the colluvial deposits [20,29].

Unlike the soft rocks, on the limestone and dolomite of southern Cilento it is quite common to observe a series of landforms created during the oldest paleo-sea-level stands. In fact, five marine terraces are located in this sector between 170/180 m and 40/50 m, and at lower altitudes such as +8/8.5 m, 3.5/5 m, and 2 m [20,21] (Figure 8). The highest of these, which occasionally preserve marine deposits, are attributable to the Middle Pleistocene on the basis of physical continuity with similar landforms [30]. On the other hand, the lower wave-cut terraces, represented by sea notches and bioconstructions, are correlated to OIS5 [29,30]. The differences in position derive from the tectonic uplift this relief underwent during the Pleistocene [29,54]. According to the estimates carried out on the Middle Pleistocene marine terraces, the uplift rate should have reached 0.2 mm/year during the last 700 ka B.P. [20,54], although the rate may have been significantly lower considering the traces of the Upper Pleistocene sections.

In the submerged portion, evidence of several paleo-sea-level stands was found, mainly represented by wave-cut terraces and sea notches outcropping along the underwater cliff, and occasionally by marine conglomerates with Lithophaga burrows; these can be divided into four main groups located at depths of −44/46, −18/24, −12/14, and −7/8 m below sea level (Figure 8).
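The uplift figures quoted above lend themselves to a quick cross-check, since 1 mm/yr equals 1 m/ka: 0.2 mm/yr sustained over 700 ka gives 140 m, the order of magnitude of the highest terraces, whereas the same rate since OIS5e (taken here as roughly 125 ka, a standard assumption not stated in the text) would predict about 25 m, well above the +2 to +8.5 m notches, consistent with the remark that the Late Pleistocene rate may have been significantly lower. A minimal illustrative sketch:

```python
def total_uplift_m(rate_mm_per_yr: float, duration_ka: float) -> float:
    """Cumulative uplift for a (simplistically) constant rate: 1 mm/yr == 1 m/ka."""
    return rate_mm_per_yr * duration_ka

print(total_uplift_m(0.2, 700))  # 140.0 m since the Middle Pleistocene
print(total_uplift_m(0.2, 125))  # 25.0 m since OIS5e (~125 ka, assumed age)
```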
Particular morphotypes observed along the Cilento coastline are known as "slope-over-wall cliffs", which are generally composed of soft rocks [2] and have vegetated slopes (typically with a gradient of 20°-30°, but locally up to 45°) that descend down a sub-vertical rocky cliff face to the sea. The upper part of the slope may have an almost uniform gradient (especially where it follows stratification dipping toward the sea, cleavage, joint, or fault planes) but, more often, it is convex in shape like a hog's back; occasionally, it can be concave where the lower portions of the deposit that covers it are preserved. Their genesis is generally attributable to the rise in sea level during the Holocene, after the last glaciation. Because of glacio-eustatic sea-level changes, this morphotype underwent alterations due to wave action during interglacial periods and subaerial (therefore, not marine) modifications during glacial periods (Figure 9).
It can, therefore, be deduced that climate change played a fundamental role in their evolution, determining a climate-induced alternation of erosive conditions, sometimes attributable to marine processes and sometimes to subaerial processes, which acted in various ways [48,53,67]. This is particularly evident on the stretch of coastline called Ripe Rosse in northern Cilento, and on the coastal stretch called Marina di Pisciotta in the south, adjacent to Palinuro Cape (Figure 1).

Even if the south of Cilento is not well known, due to its morphological conditions there are some lovely beaches, which are popular seaside destinations. They are mainly situated at the mouths of valley incisions or in small bays. Long beaches can only be found at Santa Maria di Castellabate, between Casalvelino and Marina di Ascea, and to the north and south of Palinuro [44,45] (Figure 10a). Only the "central" stretch develops in the small coastal alluvial plain crossed by the Alento River (Figure 10b).
The plain is dominated by a large sub-horizontal terrace surface composed of fluvial sediments of various grain sizes; it contains fragments of building bricks, which partly cover the ancient Magna Graecia remains of the port of the city of Elea. This city was the seat of the famous philosophical school where Zenone and Parmenides settled; it was later seized by the Romans and given the name of Velia. The archaeological excavations carried out there are today an important tourist and cultural attraction. The aforementioned ancient marine terrace is responsible for the retreat of the coast over 500 m to the west. The area became marshy following the silting of the Greek port in the first century A.D. and was definitively abandoned in the ninth century. Today, the Alento River and its tributaries are incised 1-2 m into the terrace. It is difficult to link this coastal variation to historical variations in sea level; the alluvial progradation appears to be related to climate changes that may have generated a greater sedimentary supply during the High Middle Ages and caused greater slope degradation [68]. In fact, the slopes that dominate the terrace are covered with a thick eluvio-colluvial cover composed of reddish clays and silts, which forms part of the foundations of the ancient Greek city and contains archaeological remains. The pedogenized deposits of the dune to the west of Velia lie on the terraced deposits and on the historical colluvial sediments [69].
The submerged area nearest the emerged one is characterized by a gently sloping sandy bottom covered with ripple marks formed by waves, except for a few stretches [18]. It is generally colonized by fossorial (burrowing) organisms (e.g., Donax spp., Chamelea gallina, Callista chione) that are able to resist wave and current action. Offshore, beyond 20/25 m, the muddy fraction contains burrowing organisms such as mollusks of the family Veneridae, worms, and crustaceans. Large areas of the sandy plain are covered with meadows of phanerogams (Posidonia oceanica, Cymodocea nodosa), while the muddy coastal plain is characterized by "fields" of soft corals (Pennatulacea), particularly in the area in front of the Alento estuary [18]. On the sea bottom near the high coast characterized by a "slope-over-wall" profile, small banks of gravel and coarse organogenic sand can be seen. These are low-relief seabeds with almost horizontal surfaces, due to erosion during the low sea-level stands of the Upper Pleistocene [18]. They lie adjacent to the emerged part along the northernmost stretch of the study area, in front of Mount Tresino and between Acciaroli and Pioppi, while they are more detached, at a depth of 5 m, in front of Ripe Rosse and Marina di Pisciotta (close to Palinuro Cape). The submerged landscape that lies in front of Licosa Cape is of particular interest. Along this stretch, the continental shelf reaches a maximum width of approximately 23 km, with an edge that slopes gradually down to the basin floor. In the profile, various edges of sub-horizontal surfaces modeled by wave action were recognized down to 150 m. According to Reference [16], the progradation of this platform toward the sea occurred until the last glacial expansion (18 ka B.P.), while the sub-flat surfaces were formed during the last sea-level rise. In fact, the acoustic profiles surveyed in this area show a truncation of the prograding bodies near an erosion surface, covered by a thin drape of Holocene sediments. In order to confirm the sedimentary characteristics of these prograding bodies, a core sample was collected from the deepest part of the shelf, at −149 m. At approximately 73 cm from the bottom of the core, there are coarse sands containing numerous whole or fragmented mollusk shells, including Arctica islandica, a cold-water species of the Pleistocene [16] which survived in the Mediterranean until the end of the Würm. The channels identified on the continental shelf by the geophysical analysis were probably formed during the same period, near rivers and streams [17]. They represent the relict forms of a hydrographic network of subaerial origin, active when the sea retreated to the 110-120 m isobath, while the sediments that cover them date back to the subsequent sea-level rise [70]. Therefore, these channels would have formed when the continental shelf emerged from the sea during the last glaciation (18 ka B.P.) [16]. Some of these channels also show sedimentary bodies at their termini, located at approximately −90 m, which can be interpreted as mouth-bar complexes. Finally, it should be noted that, according to References [16,17], the continental shelf has three terraces, located at depths of 54 m, 86 m, and 107 m, modeled on the rocky bottom (acoustic substrate). Such a bottom has a limited extension and cannot be easily followed.
Other terraces with irregular surface morphology were recognized by Reference [17], and depressed areas filled with sediments of different grain sizes were identified during the last study (e.g., north of Licosa Cape and in front of the Alento River mouth), which may be due to distensive tectonic lineaments activated during the Pliocene and Pleistocene. Among these, the lineaments that border the Alento River Plain continue on the seabed in front of it [17].

Future Scenarios

Understanding the evolution of a coastline is important for its conservation and enhancement. Firstly, we focused on a protected site in the Cilento with landforms recording sea-level changes both on land and on the seabed, where a series of geomorphological processes took place from the Pleistocene until today. Secondly, we evaluated the risks induced by geomorphic processes that occurred over time on a coastal cliff, and how the knowledge acquired could be used for developing mitigation strategies. Such interventions lower the degree of vulnerability and, consequently, the risk of losing the structures and infrastructures present on the coastal landscape. These considerations also take into account future climate predictions [71-73], which indicate an increase in temperatures that would significantly raise sea levels. According to the Intergovernmental Panel on Climate Change (IPCC) hypothesis, the sea level could rise by more than a meter by 2100 if there is a global temperature increase of 1.5 °C [74], which would affect coastal processes and seriously change the Cilento coastscape. This landscape attracts tourists from all over the world and helps the economy to thrive. For this reason, we try to predict what will happen in the future by reconstructing the geomorphological evolution of the coastal landforms [75,76]. The Licosa Cape promontory is a site of Cilento that needs to be protected and enhanced (Figure 11a). In fact, both the emerged and the submerged areas are recognized as priority areas for protection, being included in the National Park of Cilento, Vallo di Diano, and Alburni and in the marine protected area of Santa Maria di Castellabate. It is a morphological high, oriented northwest (NW)-southeast (SE), characterized by rounded ridges with regular, moderately steep slopes or, less frequently, with concave-convex profiles; transversely, the slope shows triangular-shaped facets. The promontory consists entirely of the basal turbiditic succession of the Cilento Group (Pollica formation: Upper Burdigalian-Langhian [23]). This arenaceous-pelitic succession, composed of thin to thick tabular layers, crops out along the outer edge of the promontory, and its height varies from 4 to 10 m (Figure 11b). On the eastern edge, the slope is joined by thick and polycyclic colluvial taluses (Late Pleistocene) and alluvial fans (Middle Pleistocene) [37,77]. The former are mainly composed of angular arenaceous clasts in a yellow to yellowish-brown and reddish-brown matrix that varies from loamy sand to sandy loam, while the latter have sub-rounded clasts and normal or inverse grading. Both taluses and fans are dissected by minor channels and incisions, generally V-shaped, which are filled with alluvial deposits.
Along the north and southwest flanks of the Monte Licosa ridge, the basal debris deposits gradually grade into the terraced surfaces of the promontory, close to several marine abrasion platforms [25]. The highest and largest terrace (20-25 m a.s.l.), with a surface area of 500 m² in the southwestern area, probably gives the promontory of Licosa its almost quadrangular shape. In addition to this terrace, there are three other orders of marine terraces suspended at different heights above sea level, with related organogenic and pyroclastic deposits (Table 1). These characteristics indicate that the terraces were formed between the Middle and Late Pleistocene and, therefore, record the sequence of eustatic events and tectonic movements. More precisely, the chronological reconstruction of the terraces was based on (i) isoleucine epimerization and U/Th dating methods applied to biogenic samples [78,79]; (ii) the presence of Paleolithic pre-Mousterian industries [25,80]; (iii) the presence of a pyroclastic marker layer, widespread along the southern Tyrrhenian coastal areas, which dates back to the OIS 3-2 transition [26,81]; and (iv) stratigraphic correlations on a regional scale [78,82].
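Since terrace ages and uplift rates recur throughout this reconstruction, it may help to make the underlying arithmetic explicit. The sketch below (Python) is only a back-of-the-envelope check using the figures quoted in the text; the paleo-sea-level input is an illustrative placeholder, not a datum from the study.

```python
# Back-of-the-envelope link between uplift rate, terrace age, and present elevation:
#   elevation_now ~ paleo_sea_level_at_formation + uplift_rate * age
# The 0.2 mm/yr rate is the Middle Pleistocene estimate quoted in the text [20,54];
# the paleo-sea-level value is an illustrative placeholder.

UPLIFT_RATE_MM_PER_YR = 0.2

def present_elevation_m(paleo_sea_level_m, age_yr, rate_mm_yr=UPLIFT_RATE_MM_PER_YR):
    """Expected present elevation (m a.s.l.) of an uplifted shoreline marker."""
    return paleo_sea_level_m + rate_mm_yr * 1e-3 * age_yr

# An OIS7 shoreline (~200 ka) formed near present sea level would now sit ~40 m a.s.l.
# at 0.2 mm/yr; the observed 20-25 m terrace therefore hints at a lower mean rate,
# in line with the remark above that Upper Pleistocene markers suggest slower uplift.
print(present_elevation_m(paleo_sea_level_m=0.0, age_yr=200_000))  # -> 40.0
```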
A full-coverage, object-oriented mapping was performed in order to provide a complete representation of the Licosa Cape promontory (its features and evolutionary processes) at different scales. All the surface features identified by field surveys and aero-photogrammetric analysis were automatically identified, hierarchically organized, and mapped using the GmIS_UniSa procedure [40] (Figure 12). Special attention was given to the objective recognition, classification, and mapping of present-day land-forming processes (incised channels, rock cliffs, and shallow landslides) superimposed on stadial Pleistocene landforms (terraced alluvial fans, marine terraces, talus slopes, and colluvial hollows in headwaters) (Figure 12).
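The GmIS_UniSa procedure itself is described in Reference [40] and is not reproduced here. Purely as an illustration of what attribute-driven, object-based landform bookkeeping looks like in practice, the following hypothetical sketch groups mapped polygons by process domain; the file name and attribute fields are invented.

```python
# Hypothetical sketch of object-based landform bookkeeping (NOT the GmIS_UniSa
# procedure): each mapped polygon carries a 'landform' attribute from field survey
# and photo interpretation, and is assigned to a process domain for summary stats.
import geopandas as gpd

landforms = gpd.read_file("licosa_landforms.gpkg")  # invented file; needs a 'landform' column

PROCESS_DOMAIN = {
    "incised channel": "present-day",
    "rock cliff": "present-day",
    "shallow landslide": "present-day",
    "terraced alluvial fan": "stadial Pleistocene",
    "marine terrace": "stadial Pleistocene",
    "talus slope": "stadial Pleistocene",
    "colluvial hollow": "stadial Pleistocene",
}

landforms["domain"] = landforms["landform"].map(PROCESS_DOMAIN)
landforms["area_m2"] = landforms.geometry.area  # assumes a projected CRS in metres

# How much of the promontory is shaped by present-day vs. inherited processes?
print(landforms.groupby("domain")["area_m2"].sum())
```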
This was not the case for the submerged area in front of Licosa Cape, for which the submerged landscape map was developed by Reference [18] for the National Park of Cilento, Vallo di Diano, and Alburni (Figure 13). As previously mentioned, the submerged landscape is extremely interesting for the topographic features that are visible on the sea floor and for those that can be highlighted by the acoustic profiles acquired in the area. In particular, close to the shoreline, the rocky bottom corresponds with the sea floor, except for a light veil of silty/sandy sediments covered by hydroids and stooling silicones. This rocky bottom emerges a short distance away, forming a little island with an almost flat-topped summit. Offshore, at a depth of about 25 m, there is another sub-horizontal surface that slopes gently seaward, composed of organogenic sands and gravels produced by the fragmentation of coralligenous bioconstructions. According to the survey carried out on this terrace, the surface is characterized by sediment waves (megaripples). Also in this case, its shape reveals phenomena that occurred when the sea level was lower, during the upper Pleistocene-Holocene, or during the sea-level rise after the last glaciation, as suggested by various authors [16,17,77].

Figure 13. Submerged landscape map of Licosa Cape (extracted from Reference [17]). Legend: d-spur with coralligenous bioconstruction; e-rocky bedding planes covered by bioturbated mud; f-bank with coarse organogenic cover; h-depositional terrace composed of organogenic sand and gravel; i-wave-cut terrace with mixed organogenic cover; m-slope with mixed organogenic sediments; n-deep terrace with muddy bioclastic cover; o-shelf muddy plain; p-rock; the dotted pattern indicates phanerogam meadows.

Other depositional bodies are found at greater depths and run parallel to the edge of the platform. They are characterized by a type of echo with an indistinct background, without reflections in the substrate [16], which indicates the presence of sandy deposits, more reflective than pelitic ones. The upper part is sometimes covered with a thin layer of Holocene sediments. According to Reference [16], the characteristics of the sandy deposits are attributable to a beach environment formed when the sea was at its lowest level (18 ka B.P.); such deposits are useful for carrying out beach nourishment interventions along the coasts. The emerged and submerged landforms detected close to the Licosa promontory suggest that the polyphasic and polycyclic evolution that occurred during the Quaternary was affected by climatic variations. In the emerged portion, the debris deposits at the foot of Mount Licosa could be due to the cold phases of the Upper Pleistocene, when there was little or no forest cover and the land was covered with semiarid vegetation such as grasses and shrubs [83,84]. These phases favored the fragmentation of the rocks (cryoclastic processes due to freezing/thawing cycles), when large amounts of debris were produced on the slopes. At that time, sea levels were low and the emerged area reached its maximum extension, as confirmed by the acoustic records and by the beach sediments found in the previously mentioned core sample. Moreover, during the interglacial or interstadial-stadial periods, the relatively finer material on the upper and steeper parts of the slopes was removed by different transport processes (gravity and/or water) [82,85].
This material, which was distributed on the wide coastal plains during the coldest periods, accumulated close to the coast in the warmest periods. On the basis of these characteristics, at least two generations of debris deposits were identified, including the oldest glacial/interglacial stages (OIS 9(?)-7-6) and the last interglacial/glacial cycles (OIS5 to …). Under these conditions, with the rising sea level, semi-submerged terraced surfaces with related deposits were modeled, similar to those identified at this site. The highest terrace may have reached its present position, between 20 and 25 m, due to a tectonic uplift, which probably occurred in the Upper Pleistocene. According to Reference [77], it was modeled in the Middle Pleistocene (OIS7), corresponding to a time range between 245,000 and 190,000 years before present. However, the overlapping of fossil-rich sandstone deposits associated with this terrace on dark-red deposits belonging to continental dunes could push the attribution of the older traces back to a previous colder stage (OIS8). The terraces at lower altitudes are not easily recognizable, except for the one that can be observed at approximately 4 m along the whole promontory. This may be due to the worsening of erosion phenomena along their escarpments during warm periods. With regard to the best represented terrace, organogenic deposits approximately 50 cm thick are associated with a pyroclastic level.
Using the data obtained from these elements, it was possible to trace the formation of these terraces back to the Upper Pleistocene (OIS5c [79]). The Licosa Cape promontory is currently covered by typical Mediterranean woodland, even if it appears to be quite degraded [34], owing to the repeated deforestation carried out until the middle of the 20th century for agricultural purposes. More recently, the innermost area was used for grazing animals, while the area nearest the coastline was placed under protection; these areas were thus left to a slow and spontaneous re-naturalization. The man-induced degradation of the landscape probably increased the geomorphic instability of some escarpments, especially in the piedmont area. Moreover, a hypothetical sea-level rise could accelerate erosion and reshape this landscape, as occurred in the past. In a future scenario, the emerged and submerged landforms described here will be less visible. However, the documentation produced for the valorization of the site may prove useful for promoting knowledge of the geomorphological evolution of this well-known stretch of coast. The other stretch of coastline analyzed in detail was Ripe Rosse, which lies to the south of Licosa Cape (Figure 14). It shows how a better understanding of coastal geomorphological evolution can be useful for mitigation and protection. On this high rocky stretch of coastline, the risk of landslides increased significantly, which may be due to the anthropogenic modification of the upper slope caused by the construction of an important road for tourism facilities and commercial activities in the Cilento, and to a particular geomorphological evolution, as occurred on other coastal stretches of Cilento.
Figure 14. An example of a "slope-over-wall" profile at Ripe Rosse, in northern Cilento; note the plants on the detrital cover of the slope and the gravel/pebble beach at the foot of the cliff; along the cliff, thin, fine-grained turbidite beds crop out.

Ripe Rosse is reachable from the beaches that surround it. There is a rather narrow strip (2 to 4 m wide) where debris of all sizes accumulates, which testifies to the numerous rockfalls from the cliff. The outcrop exposes a large turbidite succession, greater than 150 m thick, belonging to the basal formation of the Cilento Group (Pollica formation: Upper Burdigalian-Langhian [23]). In particular, this succession is composed of coarse- and medium-grained sandstone layers, generally with sharp bases, which pass upward to finer sand, silt, and mud. The sandstone layers are sometimes replaced by conglomerates with erosive, concave bases. Laterally adjacent to these coarser deposits, there are finer-grained, thinner turbidite layers, and chaotic intervals interlayered with these turbidites, interpreted as submarine landslides, in a basin-floor fan [86,87]. The plants (e.g., Genista cilentina, Ceratonia siliqua, Juniperus phoenicea, Pinus halepensis) that cover the slope belong to the Mediterranean scrub, whereas, in the adjacent submerged areas, seagrass meadows (Posidonia oceanica, Cymodocea nodosa) are widely diffused. This coast represents a key biodiversity asset, as it performs important ecological functions, and it has been recognized by the UNESCO Man and the Biosphere program since 1997 [4]. As previously mentioned, Ripe Rosse has a slope-over-wall profile, composed of a convex upper part and a vertical lower part, which was formed by the sea-level rise following the last glaciation [31].
This is confirmed by the presence of a small wave-cut platform, covered with coarse organogenic sands and gravels, at depths of 5 m to 25 m on the sea bottom in front of the cliff, which was formed during the last sea-level lowstand [17,18]. Moreover, as revealed by Reference [88], the gradient of the shallowest part of the coastal shelf is very low, and it has an irregular topography with small scarps and other positive relief down to the terraces at −43 m, which does not allow a precise sea-level estimation. Therefore, it is believed that the climatic oscillations that occurred in the late Pleistocene and Holocene, and the processes they controlled, drove the geomorphological evolution of this coast and led to the current condition of the cliff. Moreover, this evolution could also be decisive in the future, when a sea-level rise is expected. To this aim, the SCAPE numerical model was used, which gave promising results on coastal risks and mitigation methods. The modeling was preceded by a geomorphological analysis including field surveys and the elaboration of maps (1956 and 2004) and aerial photos (1943, 2012, and 2015). The multi-temporal processing was completed using the available multi-temporal Google Earth (GE) images from 2015 [35] and the images published on the National Cartographic Portal by the Italian Ministry of the Environment [36]; the LIDAR-derived DEM was also extracted from this website. This detailed analysis enabled us to determine the geomorphological features of Ripe Rosse and to reconstruct its short- and medium-term geomorphological evolution, providing information on the processes that occurred in the past, especially since the late Pleistocene. The spatio-temporal information for the area was digitalized on a geomorphological map using the GmIS_UniSa procedure [40], which proved useful for the numerical model but is not reported in this paper. The model, which was run on several profiles of the coastline, includes their geometrical features and input files describing wave conditions, tidal levels, average sea level, annual sediment flow, sediment transport, and accumulated annual volume. The profile of Ripe Rosse mainly consists of an upper portion with a moderate slope (mean 40°) that descends toward a vertical basal cliff (mean 80°) plunging into the sea, with a slightly inclined coastal platform up to 200 m in width. More specifically, Ripe Rosse has a convex, colluvial, debris-flow slope resting on the remnants of a buried, uplifted marine platform, covered with rounded, gravelly marine deposits, hanging onto the cliffed bedrock slope. The original, longer convex-concave profile was connected to a lower sea level during the last glacial age. The cliffed slope was progressively modeled by a slope-retreat mechanism driven by the post-glacial sea-level rise, which continues to the present day. A threshold behavior of the entire coastal slope profile, with a general gravitational collapse, was identified after the complete disruption of the buried marine platform [67]. Such peculiar profiles could be formed on coasts where cliffs of relatively resistant rock are degraded by periglacial freeze-and-thaw processes resulting in solifluction, forming coastal slopes that are then undercut by marine erosion.
This process is still active on high-latitude coasts, but it was more widespread during Pleistocene times, when coasts that are now temperate were subjected to the down-slope movement of frost-shattered rubble during cold phases with low sea levels. The Pleistocene cliffs became slopes covered with solifluction deposits composed of angular gravel, and this slope-apron deposit extended out onto what is now the sea floor. Late in Pleistocene times, the climate became milder and vegetation grew on these coastal slopes; the sea level rose, and the slopes were undercut by erosion. This evolution was simulated under predicted future climatic conditions, since climate tropicalization is expected to be a central issue for the next few hundred years. Starting from its current state, and bearing the sea-level rise in mind, the effect of the marine processes at the foot of the cliff was reconstructed (Figure 15).

Figure 15. Qualitative reconstruction (step by step) of the geomorphological evolution for the next 500 years. The dotted line indicates the topographic surface at −15,000 years from the present, with the sea level at −130 m from its current position. In yellow, the detrital material covering the slope and then deposited at the base of the cliff is shown; in orange, the material dismantled from the wall is shown, which determines the general collapse of the cliff once the threshold is exceeded.

The removal of the material collapsed from the slope and the formation of a large platform of coastal erosion were also considered. The formation of a vast beach at the foot of the cliff, made of sediments transported from the adjacent eroding coast or of piles of rocks fallen from the slope above, is the result of erosion processes which change the profile of the cliff.
The morphological expression of this change in the coastal platform is represented by the increase in its gradient and the decrease in its height, which accelerates the recession rate. The simulation realized with the SCAPE software was run for 500 years, starting from the current conditions and considering the hypothesis of a sea-level rise of 1 mm/year on a 10-m-high cliff. The result was a 140-m cliff retreat, represented graphically with Excel and Matlab along the modeled section and in a representative profile (Figure 16). The simulation showed clearly that the vertical basal part of the coastal slope recedes parallel to itself with uniform denudation intensity if the slope processes are constant and/or the rock resistance is uniform.
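SCAPE is a published process-response model and its internals are not reproduced here. To convey the mechanism just described, the toy loop below lets annual toe retreat grow with the water depth over the fronting platform while sea level rises at the 1 mm/year scenario rate; the erodibility and down-wear constants are invented, so the printed number is not the calibrated 140-m result.

```python
# Toy cliff-recession loop in the spirit of, but far simpler than, SCAPE.
# Retreat each year scales with water depth at the cliff toe (a crude proxy for
# the wave energy reaching the cliff). All constants are illustrative only.
YEARS = 500
SEA_LEVEL_RISE = 0.001      # m/yr, the scenario used in the text
ERODIBILITY = 0.25          # m of retreat per m of toe submergence per yr (invented)
PLATFORM_DOWNWEAR = 0.0005  # m/yr of vertical platform lowering (invented)

sea_level = 0.0       # m above the initial datum
toe_elevation = 0.0   # platform elevation at the cliff toe
retreat = 0.0         # cumulative cliff retreat, m

for _ in range(YEARS):
    sea_level += SEA_LEVEL_RISE
    toe_elevation -= PLATFORM_DOWNWEAR
    water_depth_at_toe = max(sea_level - toe_elevation, 0.0)
    retreat += ERODIBILITY * water_depth_at_toe

print(f"cumulative retreat after {YEARS} years: {retreat:.0f} m")
```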
It is important to note that the recession is facilitated by the continuous removal of debris from the base of the slope and the formation of a partially submerged accumulation. Unfortunately, it is not yet possible to simulate the entire slope above the wall; however, the progressive retreat of the wall should intercept the threshold of the slope portion covered with detrital material, which would accelerate the evolution of the entire coastal slope, as confirmed by the geomorphological reconstruction. If this were to happen under hot and humid climatic conditions or under high anthropogenic pressure (slope cuts and wildfires), the subaerial processes would extend to the entire slope, with a consequent evolution by "replacement" of the slope shape. This evolution could lead to the consumption of the top portion and shorten the coastal slope, which would increase the risks to the road above, the only road leading to the coastal resorts located to the south. The coastal slope of Marina di Pisciotta, north of Palinuro Cape, is in a similar situation. In this case, a slow-moving landslide [89] occurred on the cliff escarpment, which affected both the roads and the railway line that connects northern and southern Italy (the Salerno-Reggio Calabria line). Also in this case, it is a high coast with a "slope-over-wall" profile. Gaining knowledge of the geomorphological evolution of this type of coast would enable us to implement appropriate risk mitigation strategies, which would prevent roads from being damaged and improve mobility and the economy.

Conclusions

The coastal landscape of Cilento (southern Italy) has a great variety of terrestrial and marine landforms. Despite the continuous degradation of rocks with different degrees of erodibility and the negative effects of mankind on the territory, these forms are able to maintain their morphological characteristics. These characteristics make the landscape attractive to tourists, who choose the Cilento coastal areas for their holidays, but they also capture the interest of researchers and experts in coastal geomorphology [3,18]. For this reason, the Cilento territory and the contiguous marine areas are protected, both at the national and at the international level. However, even if it is possible to protect the environment and ensure sustainable development inland, this can be difficult in coastal areas, due to both anthropic pressure and climate change effects. As previously mentioned, the Cilento coastline has already been affected by erosion processes that led to coastal erosion and shoreline retreat [44,45], and by numerous landslides that occurred on the cliffs or on the slopes behind the beaches [61]. Seas and oceans are under considerable anthropic pressure due to structures and infrastructures built close to the coast to the detriment of the conservation of the environment, and the ports and coastal defenses are not entirely adequate for the context. On the other hand, a sea-level rise caused by an increase in temperatures would have further effects on the coastline that cannot be fully controlled. These impacts would be greater where the adjacent beaches and structures cannot be effectively protected, and greater still on soft-rock cliffs, like those found in the study area. With regard to Licosa Cape, where anthropic pressure is not so high, climate change effects should be considered for the conservation of the landforms.
Due to the presence of a wide coastal platform, the estimated rise in sea level would probably not have significant short- and medium-term effects on the area close to the terrace. However, there could be an intensification of landslide phenomena along the slopes of the Monte Licosa ridge and swamping in the terrace area, as already occurred in the past during warm periods. The case of the "slope-over-wall" profile would be completely different, as verified by the model application: once the threshold represented by the wall is exceeded, there would be a huge earth flow followed by the complete collapse of the slope and the destruction of the structures/infrastructures built on it. In Cilento, there are numerous infrastructures, such as roads, and risk mitigation aimed at conserving these landforms would entail huge economic costs. Zoning regulations could help to protect the area, as the result of a detailed knowledge of the landscape and its space-time evolution [90]. To this end, efforts should be made to adopt multidisciplinary approaches that use innovative topographic and geo-morphometric analyses, which enable us to develop a detailed digital geomorphological map and enhance our spatio-temporal knowledge. This paper provides useful information on the landforms for planners and operators working in the area. Meanwhile, for the site of Ripe Rosse and other places located in areas prone to landslides, a proposal was put forward to establish "prototypal moving geosites" within the Geopark Network, in order to emphasize their scientific, educational, and social relevance [91]. To this aim, we invite researchers and students to monitor these particular geosites in order to understand the forms and processes related to them. Mankind should implement activities that do not damage, directly or indirectly, our geological and geomorphological heritage, in order to conserve all terrestrial and marine landforms.
FOXM1-CD44 Signaling Is Critical for the Acquisition of Regorafenib Resistance in Human Liver Cancer Cells

Regorafenib is a multikinase inhibitor that was approved by the US Food and Drug Administration in 2017. Cancer stem cells (CSCs) are a small subset of cancer-initiating cells that are thought to contribute to therapeutic resistance. The forkhead box protein M1 (FOXM1) plays an important role in the regulation of the stemness of CSCs and mediates resistance to chemotherapy. However, the relationship between FOXM1 and regorafenib resistance in liver cancer cells remains unknown. We found that regorafenib-resistant HepG2 clones overexpressed FOXM1 and various markers of CSCs. Patients with hepatocellular carcinoma also exhibited an upregulation of FOXM1 and resistance to regorafenib, which were correlated with a poor survival rate. We identified a close relationship between FOXM1 expression and regorafenib resistance, which was correlated with the survival of patients with hepatocellular carcinoma. Thus, a strategy that antagonizes FOXM1-CD44 signaling would enhance the therapeutic efficacy of regorafenib in these patients.

Introduction

Cancer stem cells (CSCs) have the characteristics of extensive proliferation, self-renewal, and an increased frequency of tumor formation. Moreover, CSCs can induce the epithelial-mesenchymal transition (EMT), which is responsible for tumor metastasis [1,2]. Distinct populations of CSCs in hepatocellular carcinoma (HCC) have been defined and characterized using normal liver stem cell markers and liver progenitor cell markers, including CD44, CD133/PROM-1, CD90/THY-1, aldehyde dehydrogenase, CD13, the oval cell marker OV-6, Sal-like protein 4, CD117/c-kit, intercellular adhesion molecule 1, CD24, delta-like 1, cytokeratin 19, and epithelial cell adhesion molecule (EpCAM) [3,4]. It is reasonable to assume that transformed marker-negative cancer cells can dedifferentiate to acquire the specific features of liver CSCs [5,6]. In addition, CSCs have been shown to be more resistant to chemotherapy and radiotherapy. We previously reported that CSC-like cells derived from the HepG2 cell line using OCT4, KLF4, SOX2, and C-MYC (OSKM, the Yamanaka factors [7]), together with an shTP53 knockdown lentivirus, also exhibited drug resistance and upregulation of EMT markers, such as CD44, EpCAM, and CD133 [8]. EpCAM+ HCCs are much more sensitive to T cell factor/β-catenin binding inhibitors than EpCAM− HCCs in vitro. Dishevelled-1 is an essential component of the Wnt signaling pathway that stabilizes β-catenin and mediates the Wnt pathway. Dishevelled-1 silencing using small interfering RNA (siRNA) or other small-molecule inhibitors in 5-fluorouracil-resistant HepG2 cells could restore 5-fluorouracil responsiveness via reduced cell proliferation and migration as well as increased apoptosis [9]. Furthermore, a sorafenib-resistant cell population was enriched for cells with cancer stem-like properties and tumorigenicity, along with high levels of EpCAM, CD133, CD90, and the epithelial progenitor marker cytokeratin 19 [10]. Several liver CSC markers, including LGR5, SOX9, NANOG, and CD90, were elevated in sorafenib-resistant subclones [11]. Drug resistance in cancer is one of the most limiting factors in the clinical treatment of patients with cancer. There are as many underlying mechanisms of resistance as there are patients with cancer, because each cancer has its defining set of characteristics that dictates cancer progression and that can eventually lead to death.
Therefore, solving this resistance problem seems to be an unattainable goal [12]. Regorafenib was approved by the US Food and Drug Administration (US FDA) in 2017 for the second-line treatment of HCC in adult patients who had previously received sorafenib therapy [13-16] or experienced disease progression after cancer treatment [17]. Despite the survival benefit of regorafenib for patients with metastatic cancer, its overall clinical efficacy remains limited. Thus, the identification of the target genes involved in the drug resistance induced by regorafenib, together with mechanistic studies of these genes, might be crucial for the development of new treatments for HCC. The forkhead box protein M1 (FOXM1) is a member of the FOX transcription factor family and plays an important role in the regulation of cell-cycle progression, drug resistance, CSC renewal, and cancer differentiation [18,19]. A recent study demonstrated that the upregulation of FOXM1 was associated with a poor prognosis among patients with HCC, colorectal cancer, and lung cancer [20]. Thus, FOXM1 is regarded as a key transcription factor associated with HCC [21]. Conversely, inhibition of FOXM1 in liver cancer cells reduced cell proliferation, angiogenesis, and EMT [22]. These findings indicate the critical role of FOXM1 in tumorigenesis and cancer development. However, the mechanism underlying FOXM1 dysregulation remains to be determined. Some studies demonstrated that the oncogenic activity of FOXM1 can be regulated, at least in part, by the PI3K/AKT and/or RAS/MAPK signaling pathways [19,23]. Based on these observations, we focused our analyses on regorafenib resistance and its underlying mechanisms, examining target genes including FOXM1 and the CSC marker CD44. Thus, we examined the role of FOXM1 in the establishment of CSC characteristics and regorafenib resistance via CD44, using the HepG2 and Hep3B liver cancer cell lines.

FOXM1 and Various CSC Markers Were Significantly Overexpressed in Regorafenib-Resistant Cells

HepG2_Rego_R cells were generated by continuous cultivation in 2-6 µM regorafenib for 6 months. Over this period, the cells became loosely attached to the substratum but remained attached to one another, aggregating to form spheroid structures (Figure 1A). To examine the viabilities of HepG2, HepG2_Rego_R, Hep3B, and Hep3B_Rego_R cells, 2-10 µM regorafenib was used to determine the half-maximal inhibitory concentration (IC50; Figure S1). The IC50 values of HepG2 and HepG2_Rego_R cells were 7.3 µM and 10 µM, respectively. Thus, we determined that a suitable concentration of regorafenib for further experiments was 5 µM. The HepG2_Rego_R cells exhibited a 3.75-fold higher colony-forming ability than the parental HepG2 cells in the absence of regorafenib. In the presence of 5 µM regorafenib, the colony number of the HepG2_Rego_R cells increased by 1.45-fold, although their colonies appeared smaller than those of the dimethyl sulfoxide (DMSO)-treated control cells not exposed to 5 µM regorafenib. In the case of HepG2 cells, no colonies were obtained after regorafenib treatment (Figure 1B). Moreover, even after treatment with 5 µM regorafenib, larger spheroids were obtained from HepG2_Rego_R cells than from HepG2 cells without regorafenib treatment. Hep3B_Rego_R cells were also obtained under the same conditions, with the addition of 5 µM regorafenib.
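As an aside on the dose-response measurements above (Figure S1): IC50 values such as the 7.3 and 10 µM quoted here are conventionally extracted by fitting a four-parameter logistic (Hill) curve to viability data. The sketch below is illustrative only; the viability numbers are invented, not the study's measurements.

```python
# Illustrative IC50 extraction via a four-parameter logistic (Hill) fit.
# The viability data below are invented for demonstration purposes.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

dose = np.array([2.0, 4.0, 6.0, 8.0, 10.0])           # µM regorafenib
viability = np.array([0.95, 0.82, 0.61, 0.44, 0.31])  # fraction of control (invented)

params, _ = curve_fit(hill, dose, viability, p0=[1.0, 0.0, 6.0, 2.0])
top, bottom, ic50, slope = params
print(f"fitted IC50 ~ {ic50:.1f} uM")
```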
The colony size and number characteristics of these Hep3B_Rego_R clones were similar to those of the HepG2_Rego_R cells, i.e., smaller colony sizes and approximately twofold higher colony numbers than those of the parental cells. Spheroids of the HepG2_Rego_R cells were 2-5-fold larger than those of the HepG2 cells, both in the absence and in the presence of regorafenib (Figure 1C). Larger spheroids were obtained from the HepG2_Rego_R cells than from HepG2 cells, even on treatment with 5 µM regorafenib, and similar observations were made for the Hep3B_Rego_R cells (unpublished data). Next, we examined the expression levels of stem cell and CSC markers. The mRNA expression of stem cell markers, such as ATP-binding cassette subfamily G member 2 (ABCG2), SOX2, GATA-binding factor 6, and CD44, was significantly increased in HepG2_Rego_R cells compared with HepG2 cells (Figure 2A) [24,25]. The corresponding proteins were also confirmed to be expressed at higher levels (Figure 2B).

Figure 2D. Comparative protein expression levels of FOXM1, AURKA, and BIRC5 between HepG2 and HepG2_Rego_R cells. The relative protein expression levels were calculated based on the protein levels in HepG2 cells; statistical analysis was performed as described in Materials and Methods. Data are presented as mean ± SEM (n = 5) and were analyzed using two-way ANOVA and Bonferroni post-tests (* p < 0.05, ** p < 0.01, and *** p < 0.001).

We also examined FOXM1, a transcription factor that plays an important role in the cell cycle [18]. Recently, FOXM1 has also been reported as an oncogene in a variety of cancers, including lung cancer [26], breast cancer [27], colorectal cancer [28], and HCC [21,29]. The expression of FOXM1 was dramatically increased in HepG2_Rego_R cells compared with that in HepG2 cells. Moreover, the RNA expression of possible downstream target genes of FOXM1, including Aurora kinase A (AURKA) [30] and BIRC5/Survivin [31], was also significantly increased (3-14-fold; Figure 2C) in the HepG2_Rego_R cells. In addition, we examined the protein expression levels of FOXM1, AURKA, and BIRC5 and found that they were higher in the HepG2_Rego_R cells (approximately 2-18-fold) than in HepG2 cells (Figure 2D). Taken together, these findings indicate that the regorafenib-resistant cancer cells exhibited characteristics more similar to those of CSCs, with a higher expression of stem cell and CSC markers.

The EMT Phenomenon Occurred in Regorafenib-Resistant Cells

EMT has been reported as one of the tumor-related processes that might enable CSCs to gain the abilities of self-renewal and invasion and to evade apoptosis [32-34]. To understand the EMT phenomenon in regorafenib-resistant cells, we measured the expression of EMT marker genes using reverse transcription-quantitative polymerase chain reaction (RT-qPCR) (Figure 3A). Increased expression levels of vimentin (VIM), twist family bHLH transcription factor 1 (TWIST1), and zinc finger E-box binding homeobox 1 (ZEB1) were detected in the HepG2_Rego_R cells compared with the control HepG2 cells. Western blotting of EMT-related proteins, such as TWIST1, VIMENTIN, ZEB1, cadherin 1 (CDH1), cytokeratin 7 (CK7), and cytokeratin 18 (CK18), was also performed. In general, the three genes TWIST1, VIM, and ZEB1 have been reported as mesenchymal cell markers, whose induction results in alterations of cell morphology, motility, and adhesion ability [35-39].
Conversely, CDH1, CK18, and CK7 are considered epithelial cell markers, and downregulation of these three genes would push cells toward EMT [40-42].

[Figure 3 caption: (A) mRNA levels of EMT markers, including TWIST1, VIMENTIN, and ZEB1, determined using RT-qPCR; mean ± SEM (n = 5), two-way ANOVA with Bonferroni post-tests (* p < 0.05, *** p < 0.001). (B) EMT-related protein expression in HepG2 vs. HepG2_Rego_R and Hep3B vs. Hep3B_Rego_R cells, with parental levels set to 1.0; mean ± SEM (n = 3), Student's t-test. (C) Migration of HepG2 and HepG2_Rego_R cells, compared as described in Materials and Methods; a representative result and control HepG2_Rego_R cells before migration at 0 h are shown in the right panel; mean ± SEM (n = 3), Student's t-test (*** p < 0.001).]

Both HepG2_Rego_R and Hep3B_Rego_R cells exhibited EMT characteristics, namely elevated expression of TWIST1/2, VIMENTIN, and ZEB1 together with low expression of the epithelial marker proteins CDH1, CK18, and CK7 (Figure 3B). A transwell migration assay demonstrated that the migration ability of HepG2_Rego_R cells was at least twofold greater than that of the parental HepG2 cells (Figure 3C). These findings suggest that regorafenib-resistant liver cancer cells are more malignant than their parental counterparts.

Inhibition of FOXM1 Restored Cell Death in Regorafenib-Resistant Cells

To understand the role of FOXM1 in cancer progression and drug resistance, we used the FOXM1 inhibitor thiostrepton and assessed cell death and colony formation [43]. The relative expression levels of stem cell and CSC markers were analyzed using Western blotting. Treatment of HepG2_Rego_R cells with 1 µM thiostrepton for 72 h significantly reduced the protein and RNA expression of CSC-related markers, such as SOX2, CD44, and BIRC5, as well as of FOXM1 itself (Figure 4A,B). However, the expression trends of some EMT markers were variable: TWIST1/2 levels decreased, whereas both CK18 and CDH1 levels increased after exposure to thiostrepton (Figure 4A). These divergent patterns indicate heterogeneity in the sensitivity of TWIST1/2 versus other markers, such as CDH1 and CK18, to FOXM1 inhibition (Figure 4A,B). In addition, cell viability was reduced in a dose-dependent manner after exposure to thiostrepton for 48 h (Figure 4C) and 72 h (Figure 4D). The viabilities of HepG2_Rego_R and Hep3B_Rego_R cells treated with 1 µM thiostrepton were not significantly changed at 3 and 6 days after treatment compared with those of HepG2 and Hep3B parental cells subjected to the same treatment (Figure S2); thus, the drug-resistant clones did not show severe cytotoxicity under thiostrepton treatment. Moreover, the sphere size and sphere-forming ability of the HepG2_Rego_R and Hep3B_Rego_R cells were decreased after FOXM1 inhibition (Figure 5A,B). Furthermore, the colony-forming abilities of the HepG2_Rego_R and HepG2 cells were greatly decreased (by 20-55%) after treatment with 0.5 µM thiostrepton compared with the control DMSO-treated cells (Figure S3). The effects of 0.5 µM thiostrepton on the Hep3B_Rego_R and Hep3B cells were similar to those on the HepG2_Rego_R and HepG2 cells (unpublished data).
FOXM1 Knockdown Impaired CD44 and SOX2 Expression and the CSC Population in Regorafenib-Resistant Cells

We performed knockdown experiments using short hairpin RNA (shRNA; shFOXM1) constructs targeting FOXM1 and a control off-target construct in HepG2_Rego_R and Hep3B_Rego_R cells. The protein expression of the CSC markers SOX2 and CD44 was significantly decreased. In addition, AURKA expression in the knockdown cells was decreased by 50% compared with the control HepG2_Rego_R and Hep3B_Rego_R cells (Figure 6A). This effect might be modulated by nutrient availability, as reported previously [44,45]. We therefore examined the role of FOXM1 in CSCs under nutrient deprivation by culturing HepG2_Rego_R and Hep3B_Rego_R cells, their parental cells, and the shFOXM1-transfected cells in Dulbecco's modified Eagle's medium (DMEM) containing 1% fetal bovine serum (Figure S4A,B). Compared with their parental cells, HepG2_Rego_R and Hep3B_Rego_R cells exhibited higher survival. However, the introduction of shFOXM1 decreased the survival of HepG2_Rego_R and Hep3B_Rego_R cells. Moreover, the sphere size and sphere-forming ability of HepG2_Rego_R and Hep3B_Rego_R cells were decreased significantly after introduction of the shFOXM1 construct (Figure 6B,C). When siRNA against FOXM1 was used instead, the protein expression of SOX2 and CD44 was likewise completely knocked down, similar to the effects of shRNA-FOXM1 (data not shown). Because CD44 expression is highly correlated with FOXM1 expression [6], we predicted that FOXM1 would play a critical role in the promoter activity of the CD44 gene. We therefore performed a CD44 promoter-driven luciferase assay. The CD44 promoter activity was dramatically increased in HepG2_Rego_R cells compared with the parental HepG2 cells; the off-target control shRNA did not significantly reduce CD44 promoter activity, whereas knockdown of FOXM1 reduced CD44 promoter-luciferase activity (Figure 7). Taken together, these findings indicate that FOXM1 knockdown impaired the expression of CD44 and SOX2 as well as the CSC features of cell proliferation and sphere formation in regorafenib-resistant cells.

FOXM1 Overexpression Was Correlated with Poor Prognosis and Tumor Growth in Patients with HCC

To investigate the FOXM1 signaling pathway and its possible downstream proteins, including AURKA and BIRC5, in patients with HCC, the whole cohort of patients with HCC and the subset with both hepatitis virus infection and HCC were divided into high- and low-expression groups. Using The Cancer Genome Atlas database, we found that the patient groups with higher expression of FOXM1, AURKA, or BIRC5 exhibited shorter overall survival than the groups with lower expression of these proteins, regardless of hepatitis virus infection status (Figure 8A). In addition, we found that the expression of CSC and EMT markers, such as CD44, SOX2, ABCG2, and VIMENTIN, was closely associated with the survival of patients with HCC. As with FOXM1, patients in the total HCC cohort with higher expression of CD44, SOX2, ABCG2, and VIMENTIN exhibited a poorer prognosis than patients with lower expression of these proteins (Figure 8B,C).
However, among the patients with both hepatitis virus infection and HCC, those with higher expression of ABCG2 and VIMENTIN showed longer survival than those with lower expression of these proteins (Figure 8B,C). Finally, a xenotransplantation experiment was performed to analyze the role of FOXM1 in tumor progression (Figure 9A). HepG2 and HepG2_Rego_R cells treated for 72 h with thiostrepton or with 0.05% DMSO as control were injected subcutaneously into severe combined immunodeficiency mice. Tumor weight in the HepG2_Rego_R group was approximately 10-fold higher than that in the HepG2 group (Figure 9B). Furthermore, necrosis, giant cells, and abnormal mitoses were markedly more prominent than in the control HepG2 group (Figure 9C). Moreover, the tumor weight in the thiostrepton-treated HepG2_Rego_R group was approximately 60% lower than that in the untreated HepG2_Rego_R group. These findings suggest that FOXM1 plays an important role in the progression of HCC and affects the survival of patients with HCC.

Discussion

We previously generated CSC-like cells from the HepG2 cell line using OCT4, KLF4, SOX2, and C-MYC (OSKM) together with an shTP53 knockdown lentivirus; these cells also exhibited drug resistance and upregulation of CSC markers, such as CD44, EpCAM, and CD133 [8]. Drug resistance during cancer treatment is considered a limiting factor in curing patients with cancer [12]. Multiple signaling pathways and mechanisms are involved in the development of drug resistance during the treatment of various cancers, including HCC. Some studies indicated that resistance to tyrosine kinase inhibitors (TKIs) can enrich CSC characteristics and subsequently lead to drug resistance [46]. TKIs are designed to block their target kinases from catalyzing phosphorylation [47]. Since the US FDA approved imatinib for the treatment of chronic myeloid leukemia in 2001, multiple potent and well-tolerated TKIs, with targets including EGFR, ALK, ROS1, HER2, NTRK, VEGFR, RET, MET, MEK, FGFR, PDGFR, and KIT, have emerged and contributed to significant progress in cancer treatment. Regorafenib was approved by the US FDA to treat various cancers, including HCC [48]. However, its efficacy has been limited by the emergence of TKI resistance. In the present study, both regorafenib-resistant HepG2 and Hep3B cells exhibited overexpression of FOXM1 as well as of CSC and EMT markers. FOXM1 plays a major role in the progression of various cancers [49]. Moreover, the FOXM1 pathway is involved in TKI resistance in lung cancer [50]. However, the role of FOXM1 in TKI resistance had not been studied in HCC. Our in vitro and in vivo analyses showed that CSC and EMT markers and FOXM1 were overexpressed in regorafenib-resistant HepG2 cells. FOXM1 and downstream molecules, such as BIRC5 (survivin), may serve as predictive factors and therapeutic targets in HCC. First, efficient downregulation of FOXM1 in regorafenib-resistant HepG2 cells reduced cell viability, sphere formation, and colony formation. Second, downregulation of FOXM1 using thiostrepton (a FOXM1 inhibitor) or shRNA impaired CSC phenotypic characteristics, including SOX2 and CD44 expression, in regorafenib-resistant HepG2 cells. Third, inhibition of FOXM1 significantly decreased tumor growth in mouse xenografts.
Importantly, we found that FOXM1 overexpression was correlated with upregulation of CD44 and SOX2 and was associated with a poorer prognosis in patients with HCC. Our findings demonstrate a novel molecular mechanism of regorafenib resistance in HepG2 and Hep3B cells and provide potential predictive factors and therapeutic targets for future clinical applications in human HCC. Upregulation of FOXM1 was detected in regorafenib-resistant HepG2 and Hep3B cells and led to enhanced CSC ability and induction of EMT (Figures 1-3). Moreover, downregulation of FOXM1 activity using an inhibitor or shRNA increased cell death and decreased cell proliferation, sphere size, and sphere-forming ability in regorafenib-resistant HepG2 and Hep3B cells (Figures 4 and 5) [26]. Knockdown of FOXM1 downregulated the CSC markers CD44 and SOX2 (Figure 6), which are involved in the emergence of CSCs in various types of cancer [50]. It has been reported that FOXM1 can directly upregulate CD44 and trigger stem cell features, enhancing progression and cell survival in RAS-driven HCC [51]. Using the CD44 promoter-luciferase assay, we demonstrated that knockdown of FOXM1 decreased the expression of CD44 in regorafenib-resistant HepG2 and Hep3B cells (Figure 7). This reduction caused by shFOXM1 might be due to the binding of FOXM1 to site 4714 in the CD44 promoter region, because a chromatin immunoprecipitation (ChIP) assay showed that FOXM1 appeared to be recruited to this site (Figure S5). Further experiments will be required to conclude whether FOXM1 binds to site 4714 in the CD44 promoter to transactivate it. Taken together, our observations suggest that FOXM1 plays an important role in regorafenib resistance, mediated via the overexpression of CD44. In patients with HCC and higher expression of FOXM1, we also detected high expression of CD44, SOX2, ABCG2, and VIMENTIN, as well as a poor prognosis (Figure 8A-C). Previous studies reported that AURKA activity is mediated by FOXM1 activation in regulating mitotic-spindle assembly and chromosome segregation [46]. Moreover, BIRC5 was found to be induced upon FOXM1 activation at the G2/M checkpoint [47]. Consistent with our in vivo findings, a poor prognosis and short overall survival were observed for patients with HCC and high expression of FOXM1 and its downstream proteins, AURKA and BIRC5 (Figure 8A,B). In the in vivo model, xenograft tumors derived from thiostrepton-treated regorafenib-resistant HepG2 cells were smaller than those derived from untreated regorafenib-resistant HepG2 cells (Figure 9A,B). To fully demonstrate the role of FOXM1 in vivo, the effect of genetically induced depletion of the FOXM1 gene, or of a FOXM1 inhibitor, will need to be examined in the liver of a regorafenib-resistant mouse model. In summary, FOXM1 was overexpressed in regorafenib-resistant HepG2 and Hep3B cells, both of which also showed upregulation of CSC and EMT markers in vivo and in vitro. Inhibition of FOXM1 restored TKI sensitivity and cell death and reduced sphere formation in regorafenib-resistant HepG2 and Hep3B cells. Furthermore, downregulation of FOXM1 activity using a FOXM1 inhibitor or shRNA impaired CD44 and SOX2 expression. In the mouse xenograft study, engrafted tumor cells with low FOXM1 expression produced smaller tumors at harvest.
Our findings extend our knowledge of FOXM1 and its downstream genetic networks involved in regorafenib resistance. The present study not only identifies possible mechanisms underlying TKI resistance but also indicates that players in the FOXM1-CD44 network may become promising therapeutic targets for reversing TKI resistance in HCC.

Cells

The human HCC cell lines HepG2 and Hep3B were prepared as described previously [52]. The regorafenib-resistant HepG2_Rego_R and Hep3B_Rego_R cell lines were generated by culturing cells with regorafenib, starting at 2 µM and increasing up to 6 µM over at least 6 months. The cells were then routinely maintained at 6 µM regorafenib for each experiment.

Plasmid DNA, siRNA, and shRNA Transfections

Cells (5 × 10⁶) were transiently transfected with plasmid DNAs, siRNA, and shRNA for FOXM1 using Lipofectamine 2000 (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. After incubation for 48 h, the cells were collected and harvested for subsequent experiments. After shRNA transfection for 48 h, we selected the best clone for further analysis, as assessed by a significant reduction in FOXM1 expression evaluated using Western blotting and RT-qPCR (Table 1).

Dual-Luciferase Assay

Cells (5 × 10⁶) were transiently transfected with the CD44P pGL3 plasmid (19122; Addgene, Watertown, MA, USA) and the pRL-CMV plasmid encoding Renilla luciferase using Lipofectamine 2000 (Invitrogen, Waltham, MA, USA) according to the manufacturer's instructions. After incubation for 48 h, the cells were harvested and luciferase activities were measured using the Dual-Luciferase® Reporter Assay System (Promega, Madison, WI, USA) according to the manufacturer's instructions.

3D Sphere- and Colony-Formation Assays

Sphere- and colony-formation assays were performed as described previously [52]. Briefly, for sphere formation, cells were grown in serum-free DMEM (HyClone, Cytiva, Tokyo, Japan) supplemented with B-27 (Invitrogen), 20 ng/mL of epidermal growth factor, and 20 ng/mL of basic fibroblast growth factor (ProSpec-Tany TechnoGene Ltd., Rehovot, Israel). Cells were then plated in 6-well ultra-low-attachment plates (Corning, Glendale, AZ, USA), and sphere formation was assessed by microscopy after 6 days of growth. For the colony-formation assay, cells were plated in a gelatin-coated dish at a density of 5 × 10² cells. Two weeks later, colonies with a diameter >2 mm were counted after staining with Giemsa staining solution (Wako Chemicals, Tokyo, Japan).

Migration Assay

Cells were seeded on a transwell insert and incubated without serum. The transwell was then placed in a plate containing DMEM with 10% fetal bovine serum for 48 h. Migrated cells on the lower surface of the filters were fixed, stained, and counted using a microscope.

In Vivo Tumor Xenograft Model

HepG2 cells, HepG2_Rego_R cells, and their FOXM1 inhibitor- or control 0.05% DMSO-treated (for 72 h) counterparts (1 × 10⁶) were injected subcutaneously into severe combined immunodeficiency mice (male, 8 weeks; the National Laboratory Animal Center (NLAC), Taipei, Taiwan). Tumor size was calculated according to the following formula:

tumor volume = (length × width²)/2 (1)

Four weeks later, the mice were sacrificed, and the tumors were isolated, fixed in 4% paraformaldehyde, embedded in paraffin, and sectioned for hematoxylin and eosin staining.
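The tumor volume formula above (Equation 1) translates directly into code; the caliper measurements in this minimal sketch are hypothetical.

```python
# Minimal sketch of the ellipsoid-approximation tumor volume formula:
# volume = length x width^2 / 2; measurements are hypothetical.
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Tumor volume in mm^3 from caliper length and width in mm."""
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume(length_mm=12.0, width_mm=8.0))  # 384.0 mm^3
```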
All animal experiments were performed in accordance with the animal welfare guidelines for the care and use of laboratory animals published by the NLAC and Kaohsiung Medical University (KMU 106226) in Taiwan.

Statistical Analyses

Data are presented as mean ± SEM from triplicate experiments and additional replicates, as indicated. One-way ANOVA (p < 0.0001) followed by two-tailed Student's t-tests was used to assess statistical significance. Survival analysis was performed using the Kaplan-Meier method, and the curves were compared using the log-rank test. p < 0.05 was considered statistically significant.

Conclusions

FOXM1 directly upregulated CD44 expression and triggered stem cell features to enhance HCC progression and cell survival. According to our findings, FOXM1 and CD44 were upregulated in the HepG2_Rego_R cells. Thus, FOXM1 appears to be involved in drug resistance, such as that to regorafenib, through CD44.
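As a companion to the Statistical Analyses section above, here is a minimal sketch of a Kaplan-Meier/log-rank comparison between high- and low-expression groups. It assumes the `lifelines` package (the study's software is not specified here), and the survival times and event indicators are hypothetical, not patient data from this study.

```python
# Minimal sketch of a Kaplan-Meier survival comparison with a log-rank test.
# Requires the `lifelines` package; all values below are hypothetical.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_high = np.array([12, 20, 31, 40, 55])  # months, "FOXM1 high" group
e_high = np.array([1, 1, 1, 0, 1])       # 1 = death observed, 0 = censored
t_low = np.array([25, 44, 60, 71, 90])   # months, "FOXM1 low" group
e_low = np.array([1, 0, 1, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="FOXM1 high")  # survival curve

result = logrank_test(t_high, t_low,
                      event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {result.p_value:.3f}")
```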
Epidermal inclusion cyst of the axilla with calcifications

Epidermal inclusion cyst (EIC) is a benign mass that may occur in any area of abundant hair. It presents as a slowly growing firm nodule that is mostly asymptomatic. It may be confused with malignancy, making a definitive preoperative diagnosis difficult. Herein, we present a case of a 41-year-old patient with an EIC of the axilla containing calcifications on the mammogram.

Introduction

Epidermal inclusion cyst (EIC) is a frequent benign mass of the skin [1]. It consists of sebaceous material, keratin debris, and cholesterol, and the cyst wall consists of squamous epithelium [1,2]. EIC can present anywhere with abundant hair and tends to occur on the back, trunk, neck, and head [1]. However, it may occur more often at sites of previous surgery or injury [2]. EIC commonly presents as an asymptomatic lesion that is firm, dome-shaped, and slowly enlarging [3]. Although diagnosing EIC is easy when it is small and subcutaneous, it can be misinterpreted as another benign or malignant lesion on mammography and sonography [4]. Herein, we present a case of a 41-year-old patient presenting with an axillary EIC with calcifications on the mammogram.

Case presentation

A 41-year-old patient presented to the clinic complaining of a right axillary mass for 20 years. The mass had been slowly increasing in size for the past 1.5 years. Laboratory tests were normal. Mammography was ordered and showed an oval, circumscribed, high-density mass with a hypodense center subcutaneously in the axilla with peripheral rim calcifications (Fig. 1). No other breast pathology was detected on the mammography. An oval, partially circumscribed subcutaneous mass in the right axilla with posterior acoustic shadowing was seen on ultrasound (Fig. 2). The patient was referred to the general surgery department for surgical resection of the mass. Histopathology examination of the excised mass showed a cyst wall lined by stratified squamous epithelium devoid of a granular layer; the cyst content was composed of laminated keratin layers (Fig. 3). The diagnosis of an EIC of the axilla was confirmed, and the patient was doing well at multiple follow-up visits. In addition, she was recommended to adhere to breast cancer screening protocols.

Discussion

Epidermoid cyst refers to a benign cyst that results from the proliferation and implantation of epidermal components within the dermis or an occluded pilosebaceous follicle. These cysts can occur anywhere in the body, though they commonly involve the scalp, face, and trunk.

[Fig. 3 caption: a histopathology section stained with hematoxylin and eosin showing (A) cyst wall, (B) squamous lining, and (C) laminated keratin layers.]

However, an epidermoid cyst in the axilla is rare and has been reported in only one previous case [1,5]. Different mechanisms have been proposed to explain the pathogenesis of such cysts. Congenital or sporadic obstruction of the hair follicle appears to be the most plausible mechanism in our case, given there was no history of previous trauma, surgery, or lesion in the axilla [6]. An epidermoid cyst in the axilla clinically presents as a slowly growing, firm, nodular lump of the skin, sometimes with a central punctum.
They may be confused clinically and radiologically with various benign and malignant lesions, and a correct preoperative diagnosis may be difficult [5]. Mammographic appearances of EIC are typically benign and usually demonstrate a well-circumscribed mass with homogeneously increased density. However, the EIC finding on mammography can sometimes be difficult to distinguish from breast cancer, especially with a superficial mass, irregular shape, or architectural distortion [5,7]. In our case, the mass showed benign features; however, the EIC had peripheral rim calcifications, which sets it apart from previously reported cases. Because of its availability, cost-effectiveness, and absence of radiation exposure, ultrasound is considered the first study for superficial soft tissue lesions. US can identify and delineate a lesion in the axilla and distinguish a lymph node from other soft tissue masses. The unruptured epidermoid cyst has certain distinctive features on US, such as an oval shape and a subcutaneous location with a well-defined margin, that can support a correct preoperative sonographic diagnosis before performing a biopsy [5,8]. On histopathology, epidermoid cysts are characterized by a transparent background with a lining of stratified squamous epithelium containing an agranular layer [5]. An unruptured epidermoid cyst may be treated with surgical resection (total excision along with its capsule via an elliptical incision encircling the punctum) or with observation, based on the patient's symptoms. The recurrence rate despite complete surgical excision of the cyst is 3% [1].

Conclusion

EIC is a frequent benign lesion with a mostly asymptomatic course. It is uncommon in the axilla, where it can easily be misdiagnosed as a malignant mass. Treatment depends on the symptoms and varies from observation to surgical resection of the cyst.

Patient consent

Written informed consent for this case report was obtained from the patient.
Synthetic analysis of chromatin tracing and live-cell imaging indicates pervasive spatial coupling between genes

The role of the spatial organization of chromosomes in directing transcription remains an outstanding question in gene regulation. Here, we analyze two recent single-cell imaging methodologies applied across hundreds of genes to systematically analyze the contribution of chromosome conformation to transcriptional regulation. Those methodologies are (1) single-cell chromatin tracing with super-resolution imaging in fixed cells; and (2) high-throughput labeling and imaging of nascent RNA in living cells. Specifically, we determine the contribution of physical distance to the coordination of transcriptional bursts. We find that individual genes adopt a constrained conformation and reposition toward the centroid of the surrounding chromatin upon activation. Leveraging the variability in distance inherent in single-cell imaging, we show that physical distance, but not genomic distance, between genes on individual chromosomes is the major factor driving co-bursting. By combining this analysis with live-cell imaging, we arrive at a corrected transcriptional correlation of ϕ ≈ 0.3 for genes separated by <400 nm. We propose that this surprisingly large correlation represents a physical property of human chromosomes and establishes a benchmark for future experimental studies.

Introduction

The role of spatial heterogeneity in the nucleus in relationship to gene regulation is an enduring question in cell biology (Bohrer and Larson, 2021). Heterogeneity or compartmentalization is visible at all length and genomic scales, starting from gene loops and proceeding through enhancer-promoter interactions, topologically associated domains, A/B compartments, chromosome territories, up to inter-chromosomal interactions such as the nucleolus, Cajal bodies, and histone locus bodies, and extending to prominent nucleus-wide features such as lamin-associated domains and heterochromatin (Misteli, 2020). The synergy between microscopy (mostly light microscopy but also electron microscopy; Ou et al., 2017) and chromosome conformation capture approaches has led to fundamental insights into how molecular features drive genome organization, the influence they have on gene regulation, and the extent to which genome organization varies within individual cells. Yet, the chromatin-transcription relationship at length scales smaller than the wavelength of visible light (~500 nm) remains challenging to dissect. Foundational work from Cook and colleagues introduced the notion of the transcription factory. Transcription factories are areas with an enrichment of transcription machinery where genes are thought to be transiently bridged to enable efficient transcription (Feuerborn and Cook, 2015). Ensemble chromosome conformation capture seems to support this model by revealing that promoter-promoter contacts (smaller than 1 Mb) form as transcription levels increase (Hsieh et al., 2021; Zhu and Suh, 2020; Hsieh et al., 2020; Levo et al., 2022). The model is that actively transcribed genes are positioned to transcription factories. The prediction is that genes that are close in 3D space (nm) will 'feel' the same enrichment in transcription machinery and exhibit correlated transcriptional bursts.
Indeed, genes on the same chromosome (Deng et al., 2014; Sun and Zhang, 2019; Tian et al., 2020; Quintero-Cadena and Sternberg, 2016; Xu et al., 2019) and genes that share the same (ensemble) topologically associated domain (Tarbier et al., 2020) are more co-expressed in individual cells (RNA). However, correlations were not seen between nascent transcripts (Levesque and Raj, 2013), and the genomic distance between genes was found to play a more dominant role in RNA co-expression than Hi-C contact frequency (Sun and Zhang, 2019). Furthermore, single-cell RNA-seq showed little to no difference in correlation between genes from the same chromosome with an increased contact frequency, given a similar genomic distance between the two, bringing the strength of the hypothesis into question (Tarbier et al., 2020). This static factory view was supplanted by one in which local heterogeneity of the transcription machinery is due to dynamic assembly and disassembly (Cisse et al., 2013; Cho et al., 2018; Henninger et al., 2021). Thus, the 'factory' was not a fixed assemblage but rather a transient and movable conglomeration of RNA polymerase II, general transcription factors, and nascent RNA that arose in connection with active transcription units. It is clear that these diffraction-limited spots observed in the fluorescence microscope exchange constituents with the surrounding nucleoplasm. However, the number of terms used to describe these spots ('factories,' 'foci,' 'hubs,' 'clusters,' 'speckles,' 'compartments,' 'condensates,' 'phases') emphasizes the lack of a consensus model in the field. Further, it should be noted that many of the utilized super-resolution methodologies are prone to artifacts. Consequently, the physical interactions between protein, DNA, and RNA and the dynamic changes in chromosome structure that precede RNA synthesis are hotly debated. Recent advances in single-cell imaging shed light on these questions and motivate the fully theoretical analysis in this paper. First, the development of chromatin tracing of an entire chromosome using super-resolution light microscopy provides a spatial map of the chromatin fiber at ≈100 nm resolution (Su et al., 2020; Hu and Wang, 2021). When coupled with single-molecule fluorescence in situ hybridization (smFISH) to examine nascent RNA, one can then connect chromatin conformation to transcriptional activity with single-cell resolution (Su et al., 2020). Specifically, the nascent transcription state of ~80 genes as well as the 3D centroid positions of 651 50 kb chromosomal segments were quantified for thousands of individual chromosomes in IMR90 cells (Figure 1A). Second, the application of single-cell imaging of nascent RNA in living cells provides critical temporal information with which to interpret the observations of spatial heterogeneity. For example, transcriptional bursting of human genes expressed in their native genomic context can be monitored with high spatial and temporal precision for hours (Rodriguez et al., 2019; Wan et al., 2021). Here, we take advantage of two single-cell datasets, chromatin tracing in fixed cells and nascent RNA imaging in living cells, to address two questions: (1) Do genes reposition upon transcriptional activation? (2) Do genes in spatial proximity show correlations in transcriptional activity? Our analysis indicates that with transcription, chromatin adopts a constrained structure and the gene is positioned toward the centroid of the surrounding chromatin.
We then probed the distances between genes and found that genes are positioned closer to each other during transcriptional bursts when the genomic distance between them is below 5 Mb, and farther away from each other with transcription when the genomic distance is above 5 Mb. Importantly, by capitalizing upon the fluctuations of distances between genes on individual chromosomes, we found that the physical distance between genes on individual chromosomes is the major factor driving transcriptional co-bursting. By incorporating temporal information from live-cell imaging of active genes (duration of active periods and mobility of active genes), we can infer the correlation between transcriptional bursts for proximal genes to be ϕ ≈ 0.3. Overall, our synthetic analysis of these two single-cell imaging datasets links chromosome conformation to the coordination of transcriptional bursting.

Results

Active promoters are positioned to locations defined by chromatin organization

To investigate spatial changes in the chromatin fiber for active and inactive genes, we reanalyzed data from combined super-resolution imaging of DNA and RNA FISH (Su et al., 2020). We performed a spatial metagene analysis consisting of 'centering' the chromatin around the promoter of each gene, quantifying the standard deviations (STD) of the distances between the chromosomal loci, and then averaging over all available genes. Note that we used the centroid position of the chromosomal segment containing the transcriptional start site of each gene as the location of its promoter, and we only used the chromosome tracing by sequential hybridization data (Su et al., 2020). This analysis was done for chromosomal segments where genes were 'off' (0) or 'on' (1) (Figure 1D and E); we use Boolean logic (0 or 1) throughout to describe transcription states based on the absence (0) or presence (1) of nascent RNA. We observed that chromatin centered around the promoter shows less variability while transcribed, again as determined by the presence of nascent RNA. To more clearly visualize distinctions between chromatin configurations with and without nascent RNA, we quantified the difference and found that the distances from a promoter to the surrounding chromatin are more restricted with transcription, indicated by a cross-shaped pattern on the heatmap (Figure 1F). The change in confinement could be the result of repositioning of active genes to a different nuclear environment. To probe whether gene positioning varies with transcription, we performed a similar analysis but quantified the median physical distance (MPD) between chromosomal loci with and without transcription, averaged over all available genes (Figure 1G and H). Again, we quantified the difference between them and found a similar red cross (Figure 1I), suggesting that when a gene is active its promoter is on average closer to the surrounding chromatin while the distances between non-promoter chromosomal segments are unperturbed. It is conceivable that this repositioning reflects enhancer-promoter proximity that might precede transcription activation: the smaller average MPD to the surrounding chromatin with transcription could arise if genes are only active when near specific enhancers. To investigate, we used the density of H3K27ac as a proxy for enhancer activity. We quantified the density of H3K27ac ChIP-seq reads within each 50 kb segment for IMR90 cells using previously acquired data (Appendix 1; ENCODE Project Consortium, 2012).
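To make the promoter-centered metagene computation concrete, here is a minimal sketch in Python. The array shapes and variable names are assumptions about how the tracing data might be organized (they are not the published analysis code), and NaN entries stand in for undetected loci.

```python
# Minimal sketch of the promoter-centered "metagene" analysis: for one gene,
# compute the STD and MPD of promoter-to-locus distances split by burst state.
# Shapes and names are assumed, not taken from the published code.
import numpy as np

def centered_stats(xyz, on, promoter_idx):
    """xyz: (n_chrom, n_loci, 3) locus centroids in nm (NaN if undetected);
    on: (n_chrom,) boolean nascent-RNA state; promoter_idx: locus with TSS."""
    # distance from the promoter locus to every other locus, per chromosome
    d = np.linalg.norm(xyz - xyz[:, promoter_idx:promoter_idx + 1, :], axis=2)
    stats = {}
    for state, mask in (("off", ~on), ("on", on)):
        stats[state] = (np.nanstd(d[mask], axis=0),     # STD per locus
                        np.nanmedian(d[mask], axis=0))  # MPD per locus
    return stats

# Averaging (STD_off - STD_on) over all genes after re-centering on each
# promoter would produce difference maps like the cross in Figure 1F/I.
```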
This ChIP-seq analysis yielded varying densities of H3K27ac throughout Chr21, as shown in Appendix 1-figure 1A. We then partitioned the H3K27ac density into four groups (low, medium, high, very high) and investigated the average MPD of each gene to all other loci with and without transcription. As before (Figure 1), we observed that a gene was indeed closer to the other individual loci when transcriptionally active, but the MPD change did not differ in general with H3K27ac enrichment when compared to loci lacking H3K27ac (Appendix 1-figure 1B), suggesting that the observed repositioning may not be a result of enhancer-promoter interaction. Intuitively, the average distance to the surrounding chromatin would decrease with transcription if, on single chromosomes, an active gene lies closer to the centroid of the surrounding chromatin. To test this supposition, we calculated the mean distance from the promoter of each gene to the centroid of the surrounding chromatin with and without transcription (Figure 1J).

[Figure 1 caption, panels C-K: (C) distributions of distances between chromosomal loci separated by various genomic distances, using all loci at a given genomic distance; (D-F) promoter-centered STD maps for gene = 0 and gene = 1 and their difference; (G-I) the same for the MPD; (J) mean distances from promoter loci to the centroid of the surrounding chromatin for genes on (1) or off (0) vs. the amount of chromatin included in the centroid calculation, with an illustration of the calculation in the far-right corner; (K) per-gene differences between the mean distances to the local centroid for gene = 0 and gene = 1. Boxplots show quartiles, whiskers extend to 1.5× the interquartile range, and black diamonds are outliers; significance was defined as p < 0.01 by t-test (Appendix 1); the analysis used ≈7600 individual chromosomes and 80 different genes.]

The centroid was calculated for windows of various genomic sizes around each gene; that is, for a 0.5 Mb chromatin region, 0.25 Mb on both sides of the gene promoter were included in the centroid calculation. Tellingly, we found a definitive difference between active promoters (1) and inactive promoters (0): the active promoters were closer to the centroids of the surrounding chromatin (Figure 1J). Note that the mean distance from a local centroid to an inactive promoter gives an idea of the natural spread of the chromatin. To understand this phenomenon on a gene-by-gene basis, we quantified the difference between the active and inactive states for each gene (Figure 1K). We found that even though the distributions in Figure 1J overlap, nearly every gene was closer to the centroid with nascent transcription, suggesting a general phenomenon. Overall, these results indicate that transcriptionally active genes are located toward the centroid of surrounding chromatin.
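The distance-to-local-centroid comparison described above can be sketched as follows; the data layout and window convention are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the distance-to-local-centroid calculation: compare the
# promoter-to-centroid distance between chromosomes where the gene is on
# vs. off, for a genomic window around the promoter. Names are assumed.
import numpy as np

def dist_to_local_centroid(xyz, on, promoter_idx, half_window):
    """xyz: (n_chrom, n_loci, 3) in nm; on: (n_chrom,) boolean burst state;
    half_window: loci per side (e.g., 5 segments of 50 kb ~ 0.25 Mb)."""
    lo = max(promoter_idx - half_window, 0)
    hi = promoter_idx + half_window + 1
    centroid = np.nanmean(xyz[:, lo:hi, :], axis=1)            # (n_chrom, 3)
    d = np.linalg.norm(xyz[:, promoter_idx, :] - centroid, axis=1)
    return np.nanmean(d[on]), np.nanmean(d[~on])               # active, inactive
```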
We then sought to assess whether the positioning of genes toward the centroid depended upon transcriptional activity. To investigate, we partitioned the available genes into low-activity and high-activity groups depending upon whether their fractional occupancy was below or above the median, and then performed the above analysis on each subset of genes; that is, the activity of a gene was determined from the fraction of chromosomes on which that gene was active.

[Figure 2 caption: (A) mean gene-gene distances vs. genomic distance when both genes were off (0,0), both on (1,1), or one on and one off (0,1), and mean distances between loci not containing the investigated genes; (B, C) per-gene-pair differences between these scenarios vs. genomic distance and vs. the median physical distance (MPD), with a black line at zero to aid visualization; (D-G) the same differences vs. either the MPD minus the expected MPD or the genomic distance minus the expected genomic distance (see text). Boxplots show quartiles, whiskers extend to 1.5× the interquartile range, and black diamonds are outliers; black lines and dots are means, error bars are SEM from bootstrapping (Appendix 1); significance was defined as p < 0.01 by t-test (Appendix 1).]

Interestingly, we found that high-activity genes were both less variable (Appendix 1-figure 2A) and showed greater movement with active transcription than low-activity genes (Appendix 1-figures 2B and 3). Upon closer inspection (Appendix 1-figure 3A), the greater movement of the high-activity genes was not due to a different distance to the local chromatin centroid when active but rather to larger distances from the centroid when inactive, as illustrated by comparing the first genomic distance bin of the low-activity genes to that of the high-activity genes in Appendix 1-figure 3A. In brief, these results suggest that these processes additionally vary with a gene's activity level. Having considered genes individually based on activity (first-order moments), we next sought to quantify higher-order moments such as pairwise interactions in promoter-promoter distances based on transcriptional activity. We first quantified the average distances between promoters when both genes were off (0,0), when both were on (1,1), and when one was off and one was on (0,1), as a function of the genomic separation between them (Figure 2A). We also quantified the average distances between chromosomal loci that did not contain the investigated genes as a reference control (Figure 2A). We found that the distances between genes were consistently smaller with transcription for short genomic distances (<1.5 Mb), as evidenced by the significant decrease in the (0,1) and (1,1) interactions compared to the (0,0) interaction. When we compared (0,0) to the no-gene control, we saw essentially no difference. We note that the means of the samples were statistically different in some cases (e.g., no gene vs. (0,0)), suggesting that the distances between the genes may differ even when both are inactive (Figure 2A). Still, overall, these results suggest that transcriptional bursting (or a consequence of bursting) is correlated with the formation of promoter-promoter contacts.
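A minimal sketch of the conditional pairwise-distance calculation, assuming the same hypothetical data layout as above: the mean promoter-promoter distance is stratified by the joint burst state of the two genes.

```python
# Minimal sketch: mean gene-gene distance conditioned on joint burst state.
# Shapes and names are assumptions about the data layout.
import numpy as np

def pair_distance_by_state(xyz, on_i, on_j, idx_i, idx_j):
    """xyz: (n_chrom, n_loci, 3); on_i/on_j: boolean burst states of the two
    genes; idx_i/idx_j: their promoter-containing locus indices."""
    d = np.linalg.norm(xyz[:, idx_i, :] - xyz[:, idx_j, :], axis=1)
    return {
        "00": np.nanmean(d[~on_i & ~on_j]),  # both genes off
        "01": np.nanmean(d[on_i ^ on_j]),    # exactly one gene on (XOR)
        "11": np.nanmean(d[on_i & on_j]),    # both genes on
    }
```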
To probe the distance changes on a gene-pair-by-gene-pair basis, we first calculated the mean distance between inactive genes on the same chromosome (0,0) and then subtracted the mean distance between the genes when active ((1,1) or (0,1)), similar to the analysis in Figure 1K. This analysis is shown as a function of the genomic distance between genes in Figure 2B. For genomically proximal genes, we observed that when both genes were active, the mean distances between the promoters were indeed smaller. When we compared (0,0)-(0,1) to (0,0)-(1,1), the latter difference was approximately twice the former. Interestingly, we observed that as the genomic distance increased, both differences approached a negative value, suggesting that sufficiently separated genes are positioned to different locations with transcription. However, the spread within the boxplots indicates considerable variability in whether genes are positioned toward the same or different locations with transcription. Overall, these analyses provide strong evidence that the spatial separation between genes depends on individual transcriptional bursts. These analyses suggest a characteristic genomic length scale over which pairwise interactions might occur. However, since genomic distance and physical distance between chromosomal segments are obviously correlated (Sun and Zhang, 2019; Bintu et al., 2018; Su et al., 2020), either might define the length scale and drive repositioning with transcriptional bursting. To probe the general impact of the MPD, we characterized the positioning of genes toward the same or different locations with transcription based on the 3D distance between the genes. Note that this analysis is only possible with microscopy datasets such as this one (Su et al., 2020). We performed the previous analysis as a function of the MPD between the genes (Figure 2C) and found a strong decay with increasing MPD. For (0,0)-(0,1), a strong majority of values were negative for MPDs above 1300 nm, indicating that the genes move away from each other with bursting above this spatial threshold. For (0,0)-(1,1), a majority of values were also negative for MPDs above 1300 nm, but the proportion of positive values was higher. Probing further, to disentangle the dependence of this movement on genomic distance and/or MPD, we quantified how deviations from the expected values influenced repositioning. Given the stronger trend with the MPD, we first quantified the difference as a function of the MPD minus the expected MPD. The expected MPD was calculated utilizing all chromosomal loci and was defined as the average MPD for each genomic distance ('Methods'). We found that for both scenarios a smaller-than-expected MPD resulted in genes moving toward each other with transcription and a larger-than-expected MPD led to the genes moving away from each other (Figure 2D and E), though the latter was less clear for (0,0)-(1,1). These results suggest that the positioning of genes in physical space influences the outcome of pairwise interactions: genes that are close to each other (MPD <1100 nm) move closer when bursting, and genes that are far from each other separate when bursting. Similarly, to investigate whether the genomic distance plays a role, we performed the analysis as a function of the genomic distance minus the expected genomic distance, i.e., the genomic distance given the MPD ('Methods'). We found that this analysis did not show a monotonic trend and instead peaked at zero (Figure 2F and G).
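The expected-MPD baseline can be sketched as below: the median pairwise distance is averaged over all locus pairs at each genomic separation, and a gene pair's residual is its observed MPD minus this expectation. This is one plausible reading of the 'Methods' description, with assumed data shapes.

```python
# Minimal sketch of the "expected MPD" baseline: average the median pairwise
# distance over all locus pairs at each genomic separation (in 50 kb bins).
import numpy as np

def expected_mpd(xyz, n_loci):
    """xyz: (n_chrom, n_loci, 3). Returns expected MPD per separation bin."""
    sums = np.zeros(n_loci)
    counts = np.zeros(n_loci)
    for i in range(n_loci):
        for j in range(i + 1, n_loci):
            d = np.linalg.norm(xyz[:, i, :] - xyz[:, j, :], axis=1)
            mpd = np.nanmedian(d)
            if np.isfinite(mpd):
                sums[j - i] += mpd    # bin by genomic separation j - i
                counts[j - i] += 1
    return sums / np.maximum(counts, 1)

# residual for a gene pair at separation s with observed MPD m:
#   residual = m - expected_mpd(xyz, n_loci)[s]
```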
If there were a simple relationship between genomic distance and repositioning, one would expect a monotonic trend; its absence makes it unlikely that genomic distance drives this phenomenon. Additionally, we found that the zero peak was enriched for gene pairs with low MPDs, which, as demonstrated above, lead to genes moving toward each other (Figure 2D and E). In summary, these results suggest that the MPD is predictive of whether genes move toward or away from each other with transcription. Lastly, we sought to probe the extent to which this phenomenon depended upon transcriptional activity (low vs. high, as described above). As before, we performed the same analysis on the two groups of genes separately. Again, the distance change between genes was stronger for more active genes, suggesting that these processes also vary with transcriptional activity level (Appendix 1-figure 4). Of note, for high-activity genes, nearly all pairs moved away from each other when separated by a large MPD (>1300 nm), suggesting that the process of moving to a different location for transcription may be more deterministic for highly active genes (Appendix 1-figure 4E).

[Figure 3 caption: (A-C) the Spearman correlation coefficient between genes as a function of genomic distance, contact frequency, and median distance; black lines and dots are means, error bars are SEM from bootstrapping (Appendix 1), and boxplots show quartiles as above. (D) Average correlation coefficients of genes given that their genomic distances and contact frequencies were within specific ranges. (E) Average correlation coefficients of genes given that their genomic distances and median distances were within specific ranges. An * indicates whether the average correlation coefficients along that dimension are correlated (p-value < 0.01) (Appendix 1).]

Physical distance, but not genomic distance, correlates with co-expression

Our analysis of the DNA/RNA FISH dataset indicates that spatial gene positioning is correlated with transcriptional activity both in isolation (repositioning of individual genes with transcription) and in pairwise interactions. One can conceptualize the conclusions of this analysis as understanding spatial position given the transcriptional state; in other words, knowledge of transcription state imparts knowledge of spatial position. We next turned to the inverse question of whether correlations exist between nascent RNA signals (transcriptional states) based on spatial proximity. To do so, we quantified the ϕ correlation coefficient ('Methods') between genes on individual chromosomes (Figure 1A) and plotted it as a function of the genomic distance (Figure 3A). Note that, due to the binary nature of the data (0 or 1), the ϕ correlation coefficient is equivalent to the Pearson and Spearman coefficients. With approximately a twofold increase at smaller genomic distances, the correlation showed a monotonic decay with increasing genomic distance; the 0.025 plateau persisted at even higher genomic distances (data not shown). The increase in co-expression above the asymptotic baseline persists to ≈2 Mb. To determine whether ensemble chromatin structure is what dictates co-expression, we further quantified the correlation as a function of the contact frequency (Figure 3B) and the MPD between the genes' chromosomal segments (Figure 3C). Here, we defined the contact frequency between two genes as the proportion of chromosomes with distances less than 200 nm between the genes' chromosomal segments, using the chromatin-tracing data.
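For two binary (0/1) transcription-state vectors, the ϕ coefficient reduces to the Pearson correlation of Bernoulli variables, which can be computed directly; a minimal sketch follows.

```python
# Minimal sketch of the phi correlation coefficient between binary
# nascent-RNA states of two genes across chromosomes.
import numpy as np

def phi_coefficient(a, b):
    """a, b: boolean arrays, one entry per chromosome. Assumes each gene
    bursts on some but not all chromosomes (otherwise phi is undefined)."""
    a = a.astype(float)
    b = b.astype(float)
    num = np.mean(a * b) - np.mean(a) * np.mean(b)           # covariance
    den = np.sqrt(np.mean(a) * (1 - np.mean(a))              # Bernoulli stds
                  * np.mean(b) * (1 - np.mean(b)))
    return num / den

# Binning chromosomes by the single-chromosome gene-gene distance before
# calling phi_coefficient yields a distance-correlation curve like Figure 4A.
```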
For both metrics, we observed the predicted monotonic behavior, with the average correlation reaching a minimum around 0.025. We then attempted to separate the effects of contact frequency/MPD from genomic distance on the observed correlation, holding one variable constant and quantifying the correlation as a function of the other. To do this, we calculated the mean correlation given that the contact frequency/MPD and the genomic distance between the genes were within specified ranges (Figure 3D and E). Note that we only included averages if more than 40 data points could be used to calculate the mean. The two showed similar behavior, and both had a narrow range for specific genomic distances, making it difficult to uncouple the variables of contact frequency and median physical distance from genomic distance. For example, we only observed an MPD of 200-400 nm for genomic distances much less than 1 Mb; therefore, we could not determine how the correlation varies with increasing genomic distance for these values. Moreover, most columns and rows did not show significant p-values. In summary, while there is correlation at the nascent RNA level, the limited variability in ensemble chromatin structure for specific genomic distances obscured the relative contributions of genomic distance, contact frequency, and MPD to co-expression. A primary advantage of the single-cell dataset (Su et al., 2020) is the ability to leverage the large fluctuations of distances between loci across the population (N ≈ 7600 chromosomes) (Figure 1C). We first quantified the correlation between nascent RNA signals for genes given that their physical distances were within a specific range, which showed a similar monotonic behavior (Figure 4A). When calculating these correlation coefficients, we only included gene pairs for specific single-chromosome distance ranges when there were at least 100 chromosomes with the gene-gene distance within that range. We then quantified the mean correlation given that the single-chromosome distance and genomic distance were within specified ranges (Figure 4B). Again, we only included averages if more than 40 data points (gene pairs) could be used to calculate the mean. Notably, we observed that co-expression of genes was correlated with the single-chromosome distance between those genes (columns, Figure 4B). In contrast, we observed no correlation between co-expression and genomic distance (rows). There was a general decay along the columns with increasing single-chromosome distance, closely resembling the curve in Figure 4A, whereas the rows did not show this behavior. These observations are further solidified by calculations of statistical significance (Figure 4B). In summary, these results indicate that co-expression, as quantified through correlations in nascent RNA, is driven by the physical distance between genes on individual chromosomes, uncoupled from genomic distance, which shows no statistical correlation with co-expression.

Chromosome dynamics can obscure the true correlation between physical proximity and gene co-expression

The single-cell DNA/RNA FISH approach provides exceptional spatial resolution coupled with transcriptional activity, but a potential issue with fixed-cell methodologies is the lack of temporal information. For example, in quantifying the distance dependence of co-expression, the lack of time-resolved locus position data could distort the observed distance-co-expression relationship.
First, the motion of the genes during the on time (defined here as the time it takes for the nascent RNA to dissociate from the DNA) obscures the measurement of the distance at the beginning of a transcriptional co-burst. Second, the stochasticity of the on time would similarly decrease the observed co-expression; that is, even if two genes burst at exactly the same time, the nascent RNA from one gene will dissociate before that of the other, leading to detection of one and not the other and again decreasing the correlation measured in fixed cells. Third, the finite localization precision of the experiment would also distort the distance-co-expression curve through error in the true distance. Overall, these three sources of noise have the potential to change both the amplitude and the distance-dependent decay of the co-expression correlation coefficient. Therefore, we utilized a theoretical approach to infer the instantaneous distance-co-expression relationship analogous to that shown in Figure 4A and thereby understand the contribution of dynamic and temporal fluctuations in gene position and activity. The approach is based on coupling measurements of locus diffusion and activity generated from live-cell imaging of nascent RNA with the fixed-cell measurements analyzed thus far. Here, we first discuss our theoretical approach and then our results. We sought to link the information from live-cell experiments with that of fixed-cell experiments by incorporating the motion of chromatin into our model. Chromatin has been suggested to show confined diffusion (Marshall et al., 1997; Chubb et al., 2002; Chen et al., 2013; Bronshtein et al., 2015), but this phenomenon is generally quantified over relatively short timescales of <10 min. Considering that the on time of a human gene, as measured by the dwell time of nascent RNA, is approximately 10-15 min (Wan et al., 2021), we sought to monitor the diffusion of an active gene over a longer timescale. We first utilized the live-cell transcriptional bursting data for TFF1 from Rodriguez et al., 2019. These data consist of the spatial coordinates of multiple bursting TFF1 alleles through time in individual cells, allowing us to quantify the motion of one allele relative to the other (Chubb et al., 2002). Importantly, time-lapse imaging of multiple alleles naturally corrects for cell movement over these long timescales. We quantified the mean squared displacement (MSD, 'Methods') over a timescale of 3000 s and found that the MSD could be fit with a straight line (Figure 4C), suggesting Brownian motion of active genes over these timescales (Bohrer and Xiao, 2020). We computed a diffusion coefficient of D_TFF1 = 0.25 × 10⁻³ µm²/s, which is comparable to previous results (Chubb et al., 2002). We subsequently performed a similar analysis with previously published live-cell transcriptional bursting data for four different genes and obtained similar results with slightly varying diffusion coefficients (Appendix 1-figure 5; Wan et al., 2021). Taking into account the multiple diffusing alleles within the TFF1 data (Appendix 1), the four diffusion coefficients of the single-locus genes range from about 0.25 × D_TFF1 up to 1 × D_TFF1. We ultimately proceeded with the diffusion coefficient of TFF1 because of its natural correction for cell movement and its relative similarity to the other diffusion coefficients.
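A minimal sketch of the relative-motion MSD analysis follows. It assumes 2D tracking of two alleles in the same cell and independent, identical diffusion of the two loci (so the relative diffusion coefficient is twice the single-locus value); the frame interval is a placeholder.

```python
# Minimal sketch of the MSD of the allele-to-allele vector vs. lag time,
# with a linear fit to extract a diffusion coefficient. Assumes 2D tracking
# and independent alleles; values and frame interval are placeholders.
import numpy as np

def msd(rel_xy, dt):
    """rel_xy: (n_frames, 2) allele-1 minus allele-2 positions (um);
    dt: frame interval (s). Returns lag times (s) and MSD values (um^2)."""
    n = len(rel_xy)
    lags = np.arange(1, n // 4)  # restrict to well-sampled lag times
    vals = np.array([np.mean(np.sum((rel_xy[k:] - rel_xy[:-k]) ** 2, axis=1))
                     for k in lags])
    return lags * dt, vals

# For Brownian motion, MSD = 2 * n_dim * D_rel * t with D_rel = 2D,
# so with n_dim = 2 the single-locus D = slope / 8:
#   t, m = msd(rel_xy, dt=30.0)
#   D = np.polyfit(t, m, 1)[0] / 8.0
```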
We chose the over-damped Langevin equation to model the temporal dynamics of the distance between genes located on the same polymer. The model describes the time-dependent distance between loci using an arbitrary energy potential of interaction (see 'Methods'); without the effect of the potential, the model exhibits Brownian motion with the determined diffusion coefficient. For each gene pair, we empirically determined a potential that 'biases' the distance dynamics so that the steady-state distribution matches the empirically determined distance distribution ('Methods'). We did this using the equivalent Fokker-Planck equation, which allowed us to directly convert the empirically measured distance distributions into potentials ('Methods'). The central advantage of this approach is that it accounts for the unique distance distributions between the various gene pairs on the same chromosome, the diversity of which can be clearly seen from the MPDs in Figure 1B. These diverse distance distributions result from a multitude of complex, context-specific forces that are not considered in classical polymer models (Osmanović and Rabin, 2017; Vivante et al., 2020). Even with the inclusion of additional factors in polymer models (e.g., loop extrusion), reproducing accurate distance distributions is difficult (Gabriele et al., 2022), and it would be even more difficult here because the underlying forces are unknown. Simpler first-order approximations of the Langevin equation have also been utilized to model the viscoelastic properties of chromatin (Vivante et al., 2020) and have been shown to adequately determine the potential of the Rouse chain (Amitai et al., 2015). Again, we emphasize that these gene-specific terms were determined empirically ('Methods'). The stochastic dwell time of nascent RNA is due to variability in the processes of elongation, termination, and splicing. We incorporate this variability in our analysis by setting the nascent RNA decay probability per second (propensity) equal for all genes (P_d), with a characteristic on time of ≈13 min. This assumption is motivated by our recent work on high-throughput imaging of hundreds of human genes labeled at their endogenous loci using MS2 stem loops, where the majority of genes were found to have average on times between 10 and 15 min (Wan et al., 2021). Again, we note that this is an assumption necessitated by our lack of temporal information. Next, we introduce a phenomenological model intended to capture the empirical features of co-expression as observed in the fixed-cell dataset. First, we quantified the average fraction of chromosomes with nascent RNA present for gene i as a function of the distance between each pair of genes (genes i and j), normalized by the average fraction of chromosomes with nascent RNA present for gene i over all distances. This metric is a proxy for the burst frequency and was calculated for each gene over all possible gene pairs. The reasoning is that if this metric were higher at smaller distances, it would suggest that the bursting frequency depends on the distance between genes, which would explain the higher correlation values at smaller distances. Surprisingly, we found that on a distance binning scale of 200 nm, the metric did not vary, suggesting that the bursting frequency does not generally change as a function of the distance between genes at this scale (Figure 4D).
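One way to sketch the Langevin scheme numerically: take an empirical distance distribution P(r), set the effective potential via the Fokker-Planck steady state, U(r)/kT = -ln P(r) (up to a constant), and integrate the over-damped dynamics with Euler-Maruyama. The histogram below is a stand-in for a measured distribution, and the grid, time step, and boundary handling are illustrative choices, not the authors' implementation.

```python
# Minimal sketch: over-damped Langevin dynamics of a gene-gene distance r
# whose steady state matches an (assumed) empirical distribution P(r).
import numpy as np

rng = np.random.default_rng(0)
D = 0.25e-3   # um^2/s, the TFF1-derived diffusion coefficient
dt = 1.0      # s, illustrative time step
bins = np.linspace(0.05, 3.0, 60)                    # distance grid (um)
p_emp = np.exp(-(bins - 0.8) ** 2 / 0.5) * bins**2   # stand-in for measured P(r)
p_emp /= p_emp.sum()                                 # normalization (cosmetic)

log_p = np.log(p_emp)                  # U(r)/kT = -log P(r), up to a constant
drift = D * np.gradient(log_p, bins)   # -(D/kT) dU/dr = D d(log P)/dr

r, traj = 0.8, []
for _ in range(10_000):
    dr = np.interp(r, bins, drift) * dt + np.sqrt(2 * D * dt) * rng.normal()
    r = np.clip(r + dr, bins[0], bins[-1])           # crude boundary handling
    traj.append(r)
# the histogram of `traj` relaxes toward the imposed P(r)
```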
Therefore, we set the probability of nascent RNA production per second for each gene (i) equal to a constant, P_tot_i, which we determined empirically for each gene ('Methods'). To account for co-expression, we modeled nascent RNA production as coming either from a co-burst or from an individual burst, where the likelihood that a co-burst or an individual burst occurs is dependent upon the distance between the two genes ('Methods'). More specifically, the fact that a pair of genes have differing expression levels allowed us to model the proportion of transcription events that are co-bursts with the incorporation of the function ω(r_ij(t)), which is a function of the distance between the genes and ranges between 0 and 1. For a pair of genes where the burst frequency of gene i is less than that of gene j, ω(r_ij(t)) is the proportion of gene i's transcriptional bursts that are co-bursts at each distance ('Methods'). If the expression levels of the two genes are approximately equal, ω(r_ij(t)) is equal to the proportion of bursts that are co-bursts at a given distance for both genes. Overall, with a single coupling function ω(r_ij(t)), we modeled all pairs of genes with the stochastic reaction scheme given in 'Methods', simulated with the Gillespie algorithm (Gillespie, 1977). More specifically, we simulated thousands of trajectories (15,000 s each) for each pair of genes for a given ω(r_ij(t)), akin to the number of chromosomes within the experimental data. If the amount of nascent RNA for a gene was greater than 0 at the end of the trajectory, the gene was considered 'on' (Gene = 1), making our simulation data binary like the experimental data. Lastly, we incorporated the error due to the resolution of the experiment (resolution = 100 nm, 'Methods'). In total, using this numerical simulation approach, we are able to generate curves like Figure 4A, for a given coupling function ω(r_ij(t)), from the underlying spatiotemporal fluctuations of single genes in living cells. Importantly, the diffusive properties of active genes and the dwell time of nascent RNA are derived empirically from experimental data. Of the parameters described above, the coupling function is the least well determined and lacks an underlying mechanistic motivation at present.

Is it possible for a single function ω(r_ij(t)) to adequately reproduce the experimental results (Figure 4A)? To address this question, we iterated over many possible monotonically decreasing ω(r_ij(t)) functions. More specifically, we investigated all possible monotonically decreasing functions in 0.05 increments, with specific values for distances binned at a 200 nm resolution ('Methods,' Figure 4E). For each ω(r_ij(t)), we quantified the correlation-distance curve for each gene pair and sought to find the one that was closest to Figure 4A ('Methods'). The best-performing ω(r_ij(t)) is shown in Figure 4E, which resulted in the correlation-distance dependence in Figure 4F, demonstrating that a single general function can adequately describe this phenomenon at the level of the chromatin-tracing experiment. With this dependence in hand, we are able to computationally remove processes that distort the correlation-distance relationship in an effort to uncover the 'true' observable degree of correlation for a given distance. The correlation-distance relationship in Figure 4F is also shown in Figure 4G with a new y-axis range to aid comparison.
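The following Python sketch shows how such a Gillespie simulation could be organized for one gene pair. The propensity bookkeeping (co-bursts at rate ω(r)·P_tot_i with genes ordered so P_tot_i ≤ P_tot_j, the remainder as individual bursts, and per-molecule decay P_d ≈ 1/780 s⁻¹ for a ≈13 min mean on time) is our reading of 'Methods', so treat it as a hedged reconstruction rather than the authors' implementation.

```python
import numpy as np

def gillespie_pair(r_traj, dt_grid, p_tot_i, p_tot_j, omega, p_d=1/780,
                   t_end=15000.0, seed=None):
    """One trajectory for a gene pair (ordered so p_tot_i <= p_tot_j).

    r_traj: distance trajectory (um) on a regular grid of spacing dt_grid
    (precomputed with the Langevin model); omega(r): co-burst fraction.
    Returns binary on/off states for genes i and j at t_end.
    """
    rng = np.random.default_rng(seed)
    t, n_i, n_j = 0.0, 0, 0
    while t < t_end:
        r = r_traj[min(int(t / dt_grid), len(r_traj) - 1)]
        w = omega(r)
        a = np.array([w * p_tot_i,            # co-burst: both genes fire
                      (1 - w) * p_tot_i,      # individual burst, gene i
                      p_tot_j - w * p_tot_i,  # individual burst, gene j
                      p_d * n_i,              # one nascent RNA of i decays
                      p_d * n_j])             # one nascent RNA of j decays
        a_tot = a.sum()
        t += rng.exponential(1.0 / a_tot)     # time to next reaction
        event = rng.choice(5, p=a / a_tot)    # which reaction fires
        if event == 0:
            n_i, n_j = n_i + 1, n_j + 1
        elif event == 1:
            n_i += 1
        elif event == 2:
            n_j += 1
        elif event == 3:
            n_i -= 1
        else:
            n_j -= 1
    return int(n_i > 0), int(n_j > 0)
```

Running thousands of such trajectories per pair and correlating the binary end states yields the simulated correlation-distance curves compared against Figure 4A.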
We started by simulating all pairs of genes as before, with the determined ω(r_ij(t)) but without the resolution error of the experiment (Figure 4H). Removing the resolution error associated with light microscopy resulted in a modest increase in the correlation for the first distance bin, a 66% increase (Figure 4H). For all other distances, the degree of correlation was basically unchanged. We then simulated the system without resolution error and with a deterministic on time for each nascent RNA - each nascent RNA lasted exactly 800 s. We observed a much greater increase across all distances, with the first distance bin rising to 250% of its initial value (Figure 4I). Finally, we simulated the system removing resolution error, with deterministic on times, and without diffusion. Removing these three noise sources resulted in a large increase in correlation for lower distances and a slight decrease for larger distances (Figure 4J). This latter decrease is due to the correlated bursts at small distances not being able to diffuse to larger distances. For the first distance bin, the removal of all sources of error in fixed-cell experiments leads to an ≈5-fold increase. The correlation is surprisingly high (≈0.3) and extends over a spatial distance of ≈400 nm. Overall, this analysis suggests that if one were able to monitor the distance between genes with high resolution and at a time resolution where one could determine the exact start of each transcriptional burst, one should be able to see this true relationship - a clear direction for future pursuit.

Discussion

By capitalizing upon the single-chromosomal nature of chromatin-tracing and nascent RNA smFISH data (Su et al., 2020), we discovered a variety of phenomena related to the coupling between transcription and higher order chromosome conformation. Specifically, fixed-cell analysis of chromatin conformation and activity coupled with live-cell analysis of transcription dynamics provides two features that are key to the analysis performed here: fluorescence microscopy reveals true physical distances and the variability across single cells. Leveraging these unique features, we find that (1) the chromatin around a gene is 'constrained' with transcription; (2) during a transcriptional burst genes are positioned toward the centroid of their surrounding chromatin; (3) transcriptional bursts cause promoters to move toward or away from each other depending on the MPD between them (these phenomena are illustrated within the simple model shown in Figure 5); (4) the distance between genes in individual cells is predictive of co-bursting; and (5) the lack of temporal information and limited imaging resolution greatly reduces the apparent distance-correlation relationship relative to the true one, with a predicted correlation coefficient of ~0.3 for distances below 400 nm. This last finding relies on theoretical assumptions regarding chromatin mobility and the precise molecular nature of gene co-expression and awaits future experimental validation. Lastly, we note that more datasets from large-scale microscopy studies are likely on the way, to which approaches similar to this study can be applied.

Genes reposition upon transcriptional activation

Our finding that individual transcriptional bursts lead to the repositioning of genes and lower chromatin variability suggests the two phenomena could be linked.
The traditional view of transcription influencing the dynamics of chromatin is that transcription leads to more 'open' and dynamic chromatin (Babokhov et al., 2020). While the traditional view has some empirical support (Gu et al., 2018), the exact opposite has also been observed (Germier et al., 2017; Nozaki et al., 2017; Nagashima et al., 2019). Accepting the variability of distance distributions as a proxy for the motion of chromatin puts our observations in agreement with the latter. One possibility is that once a gene is positioned toward the centroid of the surrounding chromatin, the confinement could be due to a new microenvironment. Another possibility - which we favor - is that the movement toward the centroid is a steric effect. Active genes recruit large megadalton complexes such as the pre-initiation complex and RNA polymerase II, which 'push' and confine the gene to a specific location through the occluded-volume effect. Our analysis thus suggests behavior consistent with the original factory model (genes reposition to a factory upon activation) and also with the dynamic self-assembly model (genes assemble their own transcription factory). The order of events is key to distinguishing these alternatives, and these events are not resolved in the fixed-cell datasets analyzed here (Cisse et al., 2013; Cho et al., 2016a; Cho et al., 2016b; Cho et al., 2018; Henninger et al., 2021). Nevertheless, almost all of the ≈80 genes showed this behavior of repositioning and confinement, suggesting a general phenomenon and illustrating a fundamental aspect of transcription whose mechanistic details await additional study.

On a higher level, promoter-promoter distances (Hsieh et al., 2020) are clearly variable with individual transcriptional bursts and are likely important for understanding enhancer biology and other higher order functional assemblies. Considering the functional similarity between promoters and enhancers (Kim and Shiekhattar, 2015), we speculate that the rules of promoter-promoter interaction observed here may apply to enhancer-promoter interaction. In most cases, the distance change of promoters with transcription is small when compared to the MPD, but for MPD < 400 nm a repositioning of 100 nm could be functionally relevant (Figure 2C; Levo et al., 2022; Heist et al., 2019; Chen et al., 2018; Fukaya et al., 2016) - putting the distances at the scale of enhancer-promoter communication (Chen et al., 2018). On the other hand, transcription factories have also been shown to be highly dynamic (Cisse et al., 2013; Cho et al., 2018; Henninger et al., 2021), raising the question of whether these dynamic promoter-promoter distances are linked to the dynamics of the factories (Heist et al., 2019). The unexpected finding that high-MPD promoters tend to move away from each other with transcription suggests the possibility of specific locations for transcription, but this observation might also be used to explain the specificity of enhancer-promoter interactions. Intriguingly, whether genes move toward or away from each other is dependent upon ensemble chromatin organization, raising the possibility that genes are distributed according to chromatin organization and not genomic distance - given there is an underlying fitness advantage. Finally, it should be noted that for all the results described here there is a lack of temporal information, which obscures the cause and effect of these phenomena (just as we showed for the distance-correlation relationship).
It therefore seems likely that the true distance changes are more significant than those measured - a direction for future research.

Genes in spatial proximity show high correlations in transcriptional activity: interpreting ϕ ∼ 0.3

The hypothesis that genes in close spatial proximity are transcriptionally correlated has long persisted in the field despite conflicting data. Notable studies have taken advantage of single-cell RNA-seq and Hi-C data to disentangle the influence of genomic distance and physical distance on correlation, with unclear results (Sun and Zhang, 2019; Tarbier et al., 2020). For example, while genes from the same (ensemble) topologically associated domain are more co-expressed, intra-chromosomal genes separated by similar genomic distances show essentially no difference in correlation even with enrichments in contact frequency (Tarbier et al., 2020). The study of Sun and Zhang even found that genomic distance is slightly more strongly correlated with co-expression than contact frequency (Sun and Zhang, 2019) - a finding reasonably explained away given that the contact frequency data were of lower resolution with high error. Further, nascent RNA FISH found that intra-chromosomal genes are not more correlated than genes in trans (Levesque and Raj, 2013). Yet, single-cell imaging experiments coupled with detailed chromosomal perturbations have revealed spatial interactions that dictate a 'hierarchical' organization in multiple genes in response to stimulus (Fanucchi et al., 2013). Moreover, a recently proposed transcription factor activity 'gradient' model is a diffusion-based model that relies again on the spatial proximity of cis-acting regulatory elements, and it might equally well be applied to promoter-promoter interactions (Karr et al., 2022). Overall, the hypothesis has persisted because of its intuitive mechanism, even with the lack of definitive experimental demonstration.

Our results verify the hypothesis and explain the negative results of previous single-cell studies. We found an enrichment in correlation for nascent RNA given that the genes are separated by a genomic distance of less than 2.5 Mb (Figure 3A). The fact that the average genomic distance between genes in the previous work was 3 Mb explains why enriched correlations were not seen at the nascent RNA level (Levesque and Raj, 2013). With our finding that the variability in MPD (or contact frequency) for a given genomic distance is too low to disentangle these variables (Figure 3D and E), the defined enrichments in contact frequency for previous studies were likely quite minor in terms of producing a change in correlation (Tarbier et al., 2020). Utilizing the large amount of stochasticity in chromatin structure for individual chromosomes (Finn and Misteli, 2019) definitively shows that physical distance drives co-expression. This result is illustrated with the extremes: we observed an enrichment in correlation for genomic distances up to 10 Mb when the physical distance between genes was less than 200 nm on individual chromosomes, and very low correlations between genes separated by less than 0.5 Mb given that the physical distance was above 1200 nm (Figure 4B). In summary, our key finding is a correlation gradient with physical distance but not genomic distance. The lack of temporal data and the spatial resolution limits of the chromatin-tracing methodology greatly obscure both the 'true' transcriptional correlation between spatially proximal genes and the length scale over which transcriptional correlation is measured.
The reasons for this reduced correlation are clear: both the position and the activity status of genes vary randomly. One can imagine, for example, genes that were far apart at activation and then diffused together, and vice versa. Correcting for this behavior requires assumptions about chromatin mobility and also the utilization of live-cell nascent RNA data. We predict that if one were able to measure the distances between genes at the initiation of transcriptional bursts, one should obtain a correlation of ~0.3 if the distance between the promoters of the genes is less than 400 nm. Intriguingly, this level of correlation has been reported between the mRNA levels of adjacent genes in yeast but was attributed to DNA supercoiling (Patel et al., 2022). Considering the shorter lifetimes of mRNA in yeast, this correlation may be comparable to that of nascent RNA in humans. Furthermore, other live-cell studies have seen correlated bursts between spatially proximal genes (in trans and in cis), but did not specifically investigate this as a function of the physical distance between the genes or account for the variable on times (Fukaya et al., 2016; Lim et al., 2018; Heist et al., 2019; Levo et al., 2022) - finding enrichments in correlation similar to our uncorrected curve (Figure 4A; Levo et al., 2022).

The enrichment in co-bursting for genes separated by <400 nm suggests that the working distance of the underlying mechanism is not direct contact. Exactly what mechanism leads to these general correlations is still unknown; however, these results are consistent with the idea of enhancers coordinating transcription with working distances of hundreds of nm (Fukaya et al., 2016; Lim et al., 2018; Heist et al., 2019; Levo et al., 2022). The analysis suggests co-expression is a general property of the system, that is, unrelated genes show correlated bursts with each other when in spatial proximity. This transcriptional correlation would then be an unavoidable emergent behavior due to the physicality of the system. Hence, the appearance of correlated bursts may not suggest a specific regulatory mechanism. Stated another way: we hypothesize that the physical proximity between the vast majority of genes arises from the physical constraints of the nucleus and DNA and is not indicative of a biologically functional relationship requiring coordinated expression conferred by that proximity. Support for this hypothesis comes from the observation that disrupting genomic clusters of metabolic genes such as the GAL genes in yeast has no measurable impact on fitness (Lang and Botstein, 2011). Of course, there are certainly instances where coordinated co-expression conferred by spatial proximity is important, for example, in the segmentation clock genes her1 and her7, located on the same chromosome and separated by 12 kb (Zinani et al., 2021). The corollary to our hypothesis is that one can look for deviations from ϕ ∼ 0.3 to identify bona fide regulatory relationships. Thus, we establish a theoretical benchmark that can be used in future studies. Lastly, we note that we consider this methodology a first theoretical step, given the lack of information about the underlying mechanisms on the chromosomal scale. Therefore, future work should adapt more complicated chromatin polymer models to refine our understanding of these phenomena - of special note are those that explicitly model the links between chromatin organization and its influence on transcription regulation (Brackley et al., 2021).
These future models will likely need to explicitly model the underlying processes (like loop extrusion) to capture the variability in chromatin structure and dynamics, whose specifics are likely to emerge in future studies - either validating or suggesting modifications to our approach above.

Expected MPD and genomic distance

To determine the expected MPD for a given genomic distance, we simply calculated the average MPD for each specific genomic distance. For example, to determine the expected MPD for a genomic distance of 50 kb, we quantified the average MPD between all loci separated by 50 kb. To determine the expected genomic distance for a given MPD, we used the same curve and found the genomic distance with the closest average MPD. For example, if the MPD between two loci is 500 nm, then using the previously quantified curve, the expected genomic distance is the genomic distance whose average MPD is closest to 500 nm.

Correlations between genes

When quantifying the correlations between a pair of genes (i.e., whether they were on or off, 1 or 0), we quantified the ϕ coefficient (used for binary data):

ϕ = (n_11 n_00 - n_10 n_01) / sqrt[(n_11 + n_10)(n_01 + n_00)(n_11 + n_01)(n_10 + n_00)],

where n_11 is the number of observations where both genes are active, n_10 is the number of observations where the first gene is on and the second is off, etc. Here, we should state that ϕ is equivalent to the Pearson correlation coefficient and the Spearman correlation coefficient for this data, because a gene's transcription state is either 1 or 0 - that is, on (1) or off (0).

Determining P_tot_i

To determine the bursting propensity for each gene, we first conducted many different simulations with P_tot_i values ranging from 0 to 0.05 with our set nRNA decay rate. For each propensity, we simulated 2000 trajectories (15,000 s each). Then, with the last timepoint of each trajectory, we classified the gene as being either 'on' or 'off' - if the gene's nRNA was greater than zero, the gene was classified as 'on' (i.e., 1). We then created a lookup table of the average number of 'on' states vs. the bursting propensity. To determine a gene's specific propensity, we simply calculated the average number of 'on' states in the experimental data and found the closest match within the lookup table.

Modeling co-transcriptional bursts

To account for co-expression for a pair of genes, we modeled nascent RNA production as coming either from a co-burst or from an individual burst:

P_tot_i = P_ij(r_ij) + P_i(r_ij).

Here, P_ij(r_ij) is the probability of a transcriptional co-burst per second given the distance between the two genes, P_i(r_ij) is the probability of an individual burst per second given the distance, and r_ij(t) was determined beforehand utilizing the Langevin equation described below ('Modeling distance diffusion'), specific for that gene pair. The fact that genes have different expression levels limits the values of P_ij(r_ij(t)). Arranging the pair of genes so that P_tot_i < P_tot_j, the maximum value that P_ij(r_ij(t)) can take is P_tot_i - or else P_i(r_ij(t)) would have to be negative. With this, we can then rewrite the above as

P_ij(r_ij(t)) = ω(r_ij(t)) P_tot_i,   P_i(r_ij(t)) = [1 - ω(r_ij(t))] P_tot_i,

where ω(r_ij(t)) is a function of the distance between the genes and ranges between 0 and 1. ω(r_ij(t)) is the proportion of gene i's transcriptional bursts that are co-bursts at each distance; if the expression levels of the two genes are approximately equal, ω(r_ij) is equal to the proportion of bursts that are co-bursts at a given distance for both genes.
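As a concrete check, the ϕ coefficient defined in 'Correlations between genes' can be computed directly from the binary per-chromosome on/off calls; a minimal sketch with hypothetical input arrays:

```python
import numpy as np

def phi_coefficient(on_i, on_j):
    """Phi coefficient between two binary on/off vectors (one entry per
    chromosome); equal to the Pearson correlation for 0/1 data."""
    on_i = np.asarray(on_i, dtype=bool)
    on_j = np.asarray(on_j, dtype=bool)
    n11 = int(np.sum(on_i & on_j))    # both genes active
    n10 = int(np.sum(on_i & ~on_j))   # first on, second off
    n01 = int(np.sum(~on_i & on_j))   # first off, second on
    n00 = int(np.sum(~on_i & ~on_j))  # both off
    num = n11 * n00 - n10 * n01
    den = np.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den if den > 0 else np.nan
```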
Overall, with a single function ω(r_ij(t)), we modeled all pairs of genes with the stochastic reaction scheme above, simulated using the Gillespie algorithm (Gillespie, 1977).

Incorporating resolution error

The resolution of the experimental data was previously quantified in the work of Su et al., 2020, and the resolution of each chromosomal segment was determined to be approximately 100 nm. The 3D resolution error is not Gaussian, because the 3D distance is obtained through the Pythagorean theorem, as shown by Churchman et al., 2006. Therefore, for our case, the error must be applied to all three dimensions independently - similar to Su et al. To do this, starting from the 'true distance' from the Langevin simulation, we randomly decomposed the distance into three dimensions, such that the distances along each dimension satisfy the Pythagorean theorem. We then added two random variables of Gaussian noise with standard deviations of 100 nm (one for each locus), generating a new displacement for each dimension with localization error. Lastly, we took the displacements along each dimension with the error and quantified the distance in 3D using the Pythagorean theorem.

Quantifying the best ω(r_ij)

To determine the ω that captures the behavior of the experimental data, we first generated a large number of unique monotonically decreasing functions. This was first done in 0.1 increments and with a distance binning of 200 nm. For example, ω_1(r_ij) = [0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] means genes that are within 200 nm of each other (first number in the array) have the value 0.1, and the rest of the distances have the value 0. We would then iterate and produce the next ω, ω_2(r_ij) = [0.1, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], etc. We then simulated a large number of trajectories for all gene pairs according to the model in the main text with each function. We then quantified the error between each ω's distance-correlation relationship and the experimental data as

Error(ω_k) = Σ_ij Σ_r [ϕ^(ω_k)_ij(r) - ϕ^exp_ij(r)]²,

where ϕ^(ω_k)_ij(r) is the correlation for the gene pair ij given that the observed distances were within the distance bin r (200 nm for each bin) and ϕ^exp_ij(r) is the correlation for the experimental data for that gene pair. Once the ω that resulted in the minimum error was found, we then varied the values for distance bins below 1000 nm by plus or minus 0.05. We then quantified the error again, resulting in the best-fit function shown in the main text.

Mean squared displacement (MSD)

We quantified the motion of the TFF1 gene utilizing the multiple-allele data from Rodriguez et al., 2019. This live-cell data provided the 2D coordinates of active alleles over extended periods of time, allowing us to monitor the motion of chromatin over a timescale longer than the on time of a gene. To account for the movement of the cell over these long periods, we monitored the motion of one tagged allele relative to another. We then quantified the MSD for a given lag time Δt:

MSD(Δt) = ⟨[R(t) - R(t - Δt)]²⟩,

where R(t) is the position of an allele relative to another, and the angle brackets denote the average over all measured trajectories and times.
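Referring back to 'Incorporating resolution error', one way to implement the per-axis corruption of a simulated distance is sketched below; the random 3D orientation and the micrometer units are our assumptions.

```python
import numpy as np

def observe_distance(r_true, sigma=0.1, rng=np.random.default_rng()):
    """Turn a 'true' locus-locus distance (um) into an observed 3D distance.

    The distance is decomposed along a random 3D orientation, two Gaussian
    localization errors of std sigma (one per locus) are added independently
    on each axis, and the 3D distance is recomputed; the resulting error on
    the distance itself is therefore non-Gaussian (cf. Churchman et al., 2006).
    """
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)                 # random unit vector
    d = r_true * u                         # displacement with |d| = r_true
    noise = rng.normal(0.0, sigma, 3) + rng.normal(0.0, sigma, 3)
    return float(np.linalg.norm(d + noise))
```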
Modeling distance diffusion

To model the distance between two chromosomal loci, we utilized the following Langevin equation:

dr_ij/dt = -(1/γ_ij) ∂V_ij(r_ij)/∂r_ij + √(2D) g(t).

Here, r_ij is the distance between genes i and j, V_ij(r_ij) is the potential (specific to that gene pair, described below), γ_ij is a constant specific to that gene pair, and the last term, √(2D) g(t), accounts for the Brownian motion with the determined diffusion coefficient - if the potential is a constant independent of distance, r_ij will exhibit Brownian motion. For each gene pair, we empirically determined a drift term (1/γ_ij) ∂V_ij(r_ij)/∂r_ij that 'biases' the distance's motion so that the steady-state distribution matches the empirically determined distance distribution (corrected for the resolution of the experiment) - this accounts for the genes being on the same chromosome. The equivalent Fokker-Planck equation is

∂P_ij(r_ij, t)/∂t = ∂/∂r_ij [(1/γ_ij) ∂V_ij(r_ij)/∂r_ij P_ij(r_ij, t)] + D ∂²P_ij(r_ij, t)/∂r_ij²,

where the initial condition is dropped for simplicity and P_ij(r_ij, t) is the probability distribution of having a distance r_ij at time t, specific to that gene pair. We then set the left-hand side of the equation equal to zero, defining the steady-state distance distribution P_s_ij(r_ij). The equation then becomes

P_s_ij(r_ij) = C_ij exp[-V_ij(r_ij)/(γ_ij D)],

where C_ij is a normalization constant. From the experimental data, we can empirically determine P_s_ij(r_ij). To do this, we took the naturally observed distance distribution and performed a deconvolution with the resolution distribution. This provided us with P_s_ij(r_ij) minus the resolution error, and we can therefore solve for the potential with

V_ij(r_ij)/γ_ij = D[ln(C_ij) - ln(P_s_ij(r_ij))].

With this, we can then simulate the Langevin equation with the Euler-Maruyama method, which results in the proper steady-state distribution with the approximate diffusion coefficient.

Appendix 1

H3K27ac analysis

To quantify the density of H3K27ac within each corresponding 50 kb segment of Chr21 in IMR90 cells, we utilized the ChIP-seq data from the Bing Ren Lab at UCSD (https://www.encodeproject.org/experiments/ENCSR002YRE/). More specifically, we quantified the average number of reads within each 50 kb segment from two biological repeats - this was done using the software packages Samtools and deepTools. We then normalized the reads by dividing by the sum, allowing us to understand these values in relation to the whole - this is shown in Appendix 1-figure 1A. To understand whether the transcription-induced repositioning of the genes depends on the H3K27ac signal, we then partitioned each locus into one of four groups (low, med, high, very high; Appendix 1-figure 1A) and quantified the repositioning based on the H3K27ac density (Appendix 1-figure 1B, colors).

Method specifics for single-locus diffusion

To investigate the diffusive behavior of transcriptionally active genes that were tagged at a single allele, we utilized the live-cell microscopy data for four different genes (MYH9, RAB7A, CANX, SLCA1) from Wan et al. Of note, this data differs from the multi-allele diffusion analysis within the main text in that there was no internal nuclear reference point to correct for cellular movement over these long timescales. Still, in order to correct for the cellular movement, we segmented the nucleus using the background GFP signal, resulting in a binary image of which pixels belonged to the nucleus and which did not. We then utilized the center of mass of the nucleus of the cell to adjust the diffusive trajectory within that cell.
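Returning to 'Modeling distance diffusion', the empirical drift construction and the Euler-Maruyama integration can be sketched as follows; the grid interpolation, the clipping of small probabilities, and the reflecting boundary at r = 0 are implementation assumptions of ours, not the authors' code.

```python
import numpy as np

def drift_from_distribution(r_grid, p_s, D):
    """Build the drift -(1/gamma) dV/dr from a deconvolved steady-state
    distance distribution p_s on r_grid, using V/gamma = -D ln(p_s) + const
    (the stationary Fokker-Planck solution). Returns a callable drift(r)."""
    v_over_gamma = -D * np.log(np.clip(p_s, 1e-12, None))
    dv_dr = np.gradient(v_over_gamma, r_grid)
    return lambda r: -np.interp(r, r_grid, dv_dr)

def simulate_distance(r0, drift, D, dt, n_steps, seed=None):
    """Euler-Maruyama integration of dr = drift(r) dt + sqrt(2 D) dW,
    reflected at r = 0 so the distance stays non-negative."""
    rng = np.random.default_rng(seed)
    r = np.empty(n_steps)
    r[0] = r0
    for k in range(1, n_steps):
        step = drift(r[k - 1]) * dt + np.sqrt(2 * D * dt) * rng.normal()
        r[k] = abs(r[k - 1] + step)
    return r
```

With a flat p_s (constant potential), the drift vanishes and the trajectory reduces to free Brownian motion with the measured D, as stated in the text.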
Simulation for single- and double-locus diffusion

To understand how the diffusion of the single-allele genes relates to the multi-allele TFF1 data within the main text, we utilized a simple 2D random-diffusion model to simulate the diffusive behavior of the two. This is important because the diffusion coefficient we seek to capture for the model is that of the distance between two different chromosomal loci. To do this, we simulated simple random 2D walks consisting of either one particle or two particles, with 1000 individual trajectories each and a duration of 10,000 s. Each of the particles was simulated with a diffusion coefficient approximately equal to that of RAB7A (D = 0.1 × 10⁻³ µm²/s). When we quantified the diffusion coefficients of the single particles by fitting the 2D MSDs of the simulated data, this resulted in the proper diffusion coefficient (Appendix 1-figure 5B). Then, when we quantified the diffusion of the simulations with two particles - taking the distance of one relative to the other, similar to the TFF1 analysis - the MSD resulted in a coefficient approximately double (D = 0.2 × 10⁻³ µm²/s), suggesting that the diffusion of the single-locus genes is comparable to that of TFF1 once the relative motion of two loci is accounted for.

Specifics on statistics

Bootstrapping methodology

The bootstrapping shown within the box plots of the main text was calculated utilizing the Python plotting software seaborn, with the pointplot function. More specifically, the estimator was the Python software numpy's mean function, and the number of bootstraps was 1000. From these, the standard error of the mean was quantified and displayed using the seaborn pointplot function.

Statistical significance for box plots

The significance quantified for the data shown within the box plots is defined as having a p-value < 0.01, determined using a t-test. The specific software used to perform the t-test was the Python software SciPy, with the stats package and the specific function ttest_ind.

Statistical significance for average correlation

To quantify whether the average correlation values were themselves correlated along a specific dimension (Figures 3 and 4), the Python software SciPy was used with the stats package and the spearmanr function. The spearmanr function quantifies the monotonicity between two datasets and also produces a p-value that is equivalent to 'the probability of an uncorrelated system producing datasets that have the same Spearman correlation coefficient.' We therefore defined as significant correlations along a dimension (for the average correlation values) those that resulted in a p-value < 0.01.
Malaria Disease Cell Classification With Highlighting Small Infected Regions

Deep learning-based methods have become an active research area in medical imaging. Malaria is diagnosed by testing red blood cells. Deep learning methods can be used to distinguish malaria-infected cell images from non-infected cell images. The small size of malaria datasets may limit the application of deep learning. Moreover, the infected area in the cell images is generally vague and small, requiring more complex models and a larger dataset to train on. Motivated by the tendency of humans to highlight important words when reading, we propose a simple neural network training strategy for highlighting the infected pixel regions that are mainly responsible for malaria cell classification. In our experiments on the NIH (National Institutes of Health) malaria dataset, available in the public domain, the proposed method significantly improved classification accuracy for our four different sized models, ranging from simple to complex, including Resnet and Mobilenet. The results indicate that the approach achieves a classification accuracy of 97.2%, compared to 94.49% for a baseline model. In addition, we show the superiority of the proposed strategy by providing an analysis of the magnitude of weight parameters in terms of regularization.

I. INTRODUCTION

Malaria is a fatal disease caused by Plasmodium parasites that infect red blood cells (RBCs) [1]. Infected mosquitoes transmit the parasite through their bites to humans. Nearly all malaria cases occur in developing countries, primarily in Sub-Saharan Africa. More than 290 million people are infected with malaria annually, and more than 400,000 die [2]. Malaria is diagnosed by testing the red blood cells. To determine the presence of malaria, centrifugal machines are used to isolate RBCs and WBCs so that only RBCs can be used for analysis by blood films. Blood smears are used to diagnose malaria and are a standard laboratory test [3]. Deep learning methods can be used to distinguish malaria-infected cells from non-infected cells. However, the success of such services depends on the availability of the dataset used. Moreover, the difference between normal and defective images is generally small (as shown in Figure 1). In such cases, it is difficult to train a neural network, and the process can be more complex and take a long time. Currently, deep learning methods require a large number of training samples and a significant amount of time to label the training samples [4]. A DNN is rarely suitable for tasks with a small amount of training data. The diversity and size of the data are important factors in improving classification performance [5]. Typically, large datasets lead to better classification performance, and small datasets may trigger overfitting [6]. Nevertheless, in the real world, particularly with medical datasets, there are relatively few datasets available. It can be expensive or simply impossible to gather a large dataset. In such cases, it is critical to make the most accurate predictions possible. Motivated by humans' tendency to draw attention to important phrases in reading or writing, we propose a simple neural network training strategy based on highlighting malaria-infected pixels that are mainly responsible for classification.
Spurred by the advantages of highlighting, Serra et al. [7] developed a method to transform sketched images into real images. This method is in line with our highlighting strategy, except that it emphasizes the output layer instead of the input layer. Current approaches to image classification and object recognition make use of attention models [8]. The intuition behind this can best be explained using human biological systems. The human brain does not respond to an entire scene. Instead, humans focus their attention selectively on parts of the visual space to acquire information while ignoring other irrelevant information [9], guiding future eye movements and decision making. Mnih et al. proposed a model that uses recurrent neural networks to extract useful information from an image by adaptively selecting a sequence of regions and processing only those regions [10]. The use of self-attention-based architectures, notably the transformer, has become the de facto approach for natural language processing (NLP). Vision transformers have attained excellent results when trained on larger datasets (14-300 million images). However, transformers lack some of the advantages of convolutional neural networks (CNNs), such as translation equivariance and locality, and therefore do not generalize well when trained on insufficient amounts of data [11]. As a result, it is difficult to apply an attention-based model to small medical datasets. Highlighting serves the same purpose as attention models. Attention models, however, emphasize features in the intermediate layers. Our proposed highlighting strategy differs in that important regions are highlighted at the input rather than at an intermediate layer. We propose a method for guiding a neural network from the input by highlighting the diseased region. By using the highlighting method, we are able to use a light model with fewer parameters. Our method has been tested on four different sized models, including Resnet and Mobilenet. Resnets are among the most efficient neural network architectures, as they help maintain a low error rate much deeper into the network; hence, they have proved to perform well where deep neural networks are required, such as in feature extraction, semantic segmentation, and various generative adversarial network architectures. We demonstrate the effectiveness of our approach in the deepest neural networks. Mobilenet is a lightweight deep neural network with fewer parameters than Resnet; it represents the deepest yet lightest model in our study.

To summarize, our contributions are as follows:
1. This paper proposes a simple neural network training strategy of highlighting, motivated by humans highlighting important phrases when reading or writing.
2. This paper demonstrates that the proposed method significantly improves classification accuracy for our four different sized models, ranging from simple to complex, including Resnet and Mobilenet. The highlighting method allows us to use a light model with a small number of parameters.
3. This paper also demonstrates the superiority of the proposed strategy by providing an analysis of the magnitude of weight parameters in terms of regularization.

II. RELATED WORK

In this section, traditional machine learning methods and optimization strategies for deep learning methods are reviewed.
In a traditional machine learning pipeline, the process of malaria diagnosis involves four steps: image preprocessing, cell segmentation, feature selection, and classification of infected and non-infected cells. Image preprocessing improves the quality of blood smear images. A variety of smoothing filters, including median, geometric mean, and Gaussian filters, can be used to reduce noise in microscopy images [12], [13]. In addition, a morphological operator can improve the cell contours, removing impurities and suppressing noise by covering holes in the cell [14]. Cell segmentation is the most significant step in traditional automated malaria detection systems. In [15], the segmentation of RBCs from enhanced images is achieved using the Otsu threshold. In malaria detectors such as [16], Zack thresholding is applied to microscopic images to segment cells. In addition, various feature extraction methods have been used to extract features from cell images [17], [18]. Feature extraction methods select the most relevant features for a model to predict the target variable. Channel selection [19], [20] and feature extraction [21] are also important in signal classification. Traditional malaria detection algorithms have adopted a linear Euclidean-distance classifier with a Poisson distribution [22], a support vector machine and an artificial neural network [23], and K-means clustering [24] for classification.

In contrast to traditional detection methods, deep learning (DL) tends to solve problems end-to-end, bypassing a feature selection step. CNN models [25] can recognize patterns in microscopic images with a much higher accuracy rate than other traditional approaches. Several methods have recently been introduced to enable faster and more stable training in deep learning [26], [27], [28]. In [29], the authors proposed transfer learning for malaria disease classification. The use of novel data augmentation techniques [30] and various CNN schemes have also been investigated for disease diagnosis in blood smear images [31]. A number of important considerations must be taken into account, including enhancing model weight initialization by transfer learning and utilizing dropout as a method of regularization in order to combat overfitting during model training [32], [33], [34]. Regularization aims to enhance the training of neural networks by stabilizing the distribution of the layer inputs [35]. The possibilities of using traditional methods as guidance for deep neural networks have not been sufficiently explored. In this paper, we present a highlighting approach as guidance to the CNN: we highlight the diseased region of a malaria cell, determined through image processing techniques, to guide the neural network, enabling a faster training process.

III. PROPOSED HIGHLIGHTING STRATEGY

The main goal of our proposed approach is to guide the neural network in classifying malaria-infected cells versus healthy non-infected cells. Guiding a neural network can improve classification accuracy [35]. The proposed method consists of three parts: segmentation of infected regions, highlighting of the region of interest, and classification. The details of each stage are described in the following subsections.
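Before detailing each stage, a minimal OpenCV sketch of the segment-then-highlight idea is shown below; the adaptive-threshold parameters (block size and offset) and the use of dark pixels as the infected mask are illustrative choices on our part, not values from the paper.

```python
import cv2
import numpy as np

def highlight_infected(img_bgr, scale, block_size=35, offset=10):
    """Segment dark (infected-looking) regions with an adaptive threshold
    and boost the red channel there, in the spirit of Eq. 1 below."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Infected regions are darker than the surrounding cell, so invert the
    # threshold to obtain a 0/1 mask of the anomaly pixels.
    mask = cv2.adaptiveThreshold(gray, 1, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, block_size, offset)
    b, g, r = cv2.split(img_bgr)
    # Add the scale inside the mask, capping at the 8-bit maximum.
    r = np.minimum(r.astype(np.int32) + scale * mask, 255).astype(np.uint8)
    return cv2.merge([b, g, r])
```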
Figure 2 shows the architecture of the proposed system.

A. SEGMENT INFECTED REGIONS

Adaptive thresholding is a segmentation method that helps us to identify the areas of an image corresponding to specific diseases in an easy-to-understand manner. To distinguish objects from backgrounds, we use the difference in intensity between object pixels and background pixels. A thresholding algorithm segments an image according to a certain characteristic of its pixels (for instance, intensity). An adaptive thresholding algorithm determines the threshold for a pixel based on a small region around it. Different thresholds are applied to different regions of the same image, which gives better results for images with varied illumination. The dataset contained images of normal RBCs and images of RBCs infected with malaria. Compared to an image of a healthy cell, an image of an infected cell displays darker red anomaly regions, indicating the infection.

B. HIGHLIGHT INFECTED REGIONS

In this study, we propose a simple highlighting technique tailored to malaria disease classification. Following the extraction of the regions of interest, the masked images were used to highlight the diseased area. Our goal was to increase the intensity of the red channel to highlight the diseased region. The highlighting scale is a scalar value computed from the mean and standard deviation of the dataset. First, we split the original image into r, g, and b channels. We then modify the R channel to highlight the selected area based on the masked image, as shown in Eq. 1: the scale is first multiplied by the masked region and then added to the R channel of the image,

I'_R(x, y) = Min(I_R(x, y) + S × M(x, y), 255),   (1)

where I'_R(x, y) is the highlighted pixel intensity at (x, y) on the R channel, I_R(x, y) is the non-highlighted pixel intensity at (x, y) on the R channel, S is the highlighting scale, M(x, y) is the segmentation of infected pixels, and Min() represents the operator of taking the minimum, which caps the intensity at the 8-bit maximum. We chose the R channel for adding the scale factor since the malaria dataset has red infected regions. The scale factor is determined by the mean and standard deviation. Following completion of the above steps, we finally merge the b, g, and r channels (Figure 4). Algorithm 1 shows the entire process for the highlighting steps.

C. CLASSIFICATION

Large deep neural networks have achieved remarkable success with good performance, particularly in real-world scenarios with large-scale data and extremely complex models. However, medical datasets contain relatively small amounts of data. Moreover, the deployment of deep models on mobile devices and embedded systems is a significant challenge because of the limited computational capacity and memory of such devices. In this study, we guided the neural network by highlighting the regions of interest; thus, we were able to achieve our goal using a shallow network. We developed and employed four different sized models to demonstrate the effectiveness of our highlighting strategy. A CNN model consists of two components: feature extraction and classification. Feature extraction is performed by the convolution layers, while classification is performed by fully connected layers.

1) SMALL MODEL: MODEL-S

The network consists of only one convolutional layer for feature representation and two dense layers (Figure 5). The final dense layer serves as the classification layer. Categorical cross-entropy was used as the loss function for our model.
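A minimal Keras sketch of Model-S as just described (one convolutional layer with 3 filters, one max-pooling layer, and two dense layers, trained with the hyperparameters listed immediately below); the input resolution, kernel size, and hidden width are illustrative assumptions of ours.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model_s(input_shape=(128, 128, 3)):
    """Model-S sketch: conv (3 filters) -> max-pool -> two dense layers."""
    model = tf.keras.Sequential([
        layers.Conv2D(3, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(2, activation="softmax"),  # infected vs. uninfected
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Training would follow the paper's stated hyperparameters, e.g.:
# model.fit(x_train, y_train, batch_size=20, epochs=25)
```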
Our model was trained using the following hyperparameters: a batch size of 20, a learning rate of 0.001, 25 epochs, the ADAM optimizer, and categorical cross-entropy loss. The small model has three filters in the convolution layer. After the conv layer, there is a max-pooling layer followed by two final dense layers, as shown in Table 1.

2) MEDIUM MODEL: MODEL-M

Model-M has deeper convolution and max-pooling layers than Model-S, resulting in a higher number of parameters. Model-M is composed of three convolutional layers and three max-pooling layers (Figure 6). This model was trained with the same hyperparameters: a batch size of 20, a learning rate of 0.001, 25 epochs, the ADAM optimizer, and categorical cross-entropy loss. The medium model has 64 filters in the conv layer, which is significantly more than the small model. Two dense layers follow the final max-pool layer, of which the final dense layer is used as the classification layer. Model-M is slightly more complex than Model-S, but it is lighter when compared against both Resnet and Mobilenet (as shown in Table 2).

3) DEEP MODEL: RESNET

In our study, we chose Resnet [37] to represent the deepest network. There are several variations of Resnet that are based on the same concept but have varying numbers of layers. We used Resnet-50, which is a convolutional neural network with 50 layers. Images in the medical domain have a different structure than those in the natural image domain; thus, the entire model is trained together with the newly added classifier. Resnet was trained using a batch size of 20, a learning rate of 0.0001, 25 epochs, the RMSprop optimizer, and categorical cross-entropy loss.

4) LIGHT MODEL: MOBILENET

Mobilenet [38] represents the deepest yet lightest model in our study. Mobilenet-v2 is a convolutional neural network that is 53 layers deep. The Mobilenet-v2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers, in contrast to traditional residual models, which use expanded representations in the input. Mobilenet-v2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer.

IV. EXPERIMENTS, RESULTS, AND DISCUSSION

In this section, we provide the results of experiments conducted to validate the proposed method. Our study involves classifying microscopic images of smeared thin blood that had already been verified by trained microscopists as infected or uninfected by malaria. We divided our experiments into three parts: first, we experimented on how to choose the scale factor; then we demonstrated the speed of convergence via the loss slope, followed by the magnitude of the weights; and finally we compared accuracy across different models. This research used 19,000 images. We evaluated the predictive models through five-fold cross-validation over five different test sets. The training data were randomly partitioned into five equal-sized subsets, with one subset used for validation testing and the remaining four subsets used for training. The cross-validation process was then repeated five times for the proposed model, with each of the five subsets used exactly once as the validation data. The results of the validations were averaged to produce a single score.

A. DATASET

In this paper, we propose a guided model for malaria parasite detection that achieves a higher accuracy.
Malaria is a fatal disease caused by Plasmodium parasites that infect red blood cells (RBCs). Our experiment was conducted on images of parasitized (infected) and uninfected RBCs from the NIH Malaria dataset [39], which were collected from 201 patients and classified into two groups, malaria-infected and uninfected, with equal numbers of instances in each class. Cells with P. falciparum were placed on a conventional light microscope at Chittagong Medical College Hospital, Bangladesh, and photographed using a smartphone.

B. SCALE FACTOR FOR HIGHLIGHTING

Our primary objective with the proposed system is to guide neural networks by highlighting the diseased regions. To highlight the diseased areas, we must add a scalar factor (H) to the diseased region. We ran the proposed approach three times, varying the scalar factor. We selected the scalar factor (H) based on the mean and standard deviation of the dataset: µ = 113 and σ = 78. We experimented with three values, which are added to the diseased area to highlight the region of interest. Figure 7 shows the highlighted image with respect to scaling. We highlighted, or marked, the diseased regions on the cells. By doing so, we made the problem easier, and the network is guided to focus on the most significant regions of the image for classification. Then, with each scale factor, we trained our network. First, the accuracy of each highlighted dataset with a different scale factor was compared. The highest accuracy was obtained with H3 (Figure 8). For the rest of the experiments, we use H3 for our proposed approach unless stated otherwise. We compared our results with the baseline in order to validate our method. The test accuracy of the proposed approach was 97.21%, which was significantly higher than that of the baseline (models without highlighting) at 94.8% (Table 3). We demonstrated the importance of our model by comparing it with the baseline; in all aspects, our proposed approach outperforms the baseline.

C. ACCURACY WITH DIFFERENT MODELS

The accuracy and parameter counts of our four models were compared. Model-S has the smallest parameter count and depth, and Model-M is more complex than Model-S. We used Resnet to represent complex models, while Mobilenet represents complex light models. Model-S has the smallest parameter count, while Resnet has the largest (as shown in Table 4). To show the efficiency of our proposed approach, we also compare test accuracy on the complex models. We obtained 97.21% using the proposed method, outperforming Model-S without highlighting at 94.49% and Model-M without highlighting at 95.8%. Additionally, the proposed approach with the Resnet and Mobilenet models outperformed the respective baseline models (as shown in Table 5), with all models representing different parameter counts and depths.

D. PRECISION, RECALL, AND F1 SCORE

We further evaluated the proposed approach by computing the recall, precision, and F1 scores. Precision refers to the proportion of positive predictions that were actually correct (true positives). Recall measures how many positive cases were correctly predicted by a classifier, compared to all positive cases in the data. The F1 score combines precision and recall into one measure.
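These metrics follow the standard definitions and can be computed from the binary predictions as follows (our own sketch, with 'infected' coded as 1):

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Standard binary metrics from true and predicted 0/1 labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)    # correctly predicted infected
    fp = np.sum(y_pred & ~y_true)   # uninfected predicted as infected
    fn = np.sum(~y_pred & y_true)   # infected missed by the classifier
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```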
For Model-S, optimal results were achieved using a scale factor of 180, with an F1 measure of 97%, a recall of 94.9%, and a precision of 99.21%, while the lowest results for Model-S were achieved with a scale factor of 35, with an F1 score of 96.02%, a precision of 93.5%, and a recall of 94.7% (Table 6).

E. CONFUSION MATRIX

The confusion matrix for the proposed system with Model-S is shown in Figure 9. A predicted class can be either infected or uninfected. Out of 3780 predictions, the classifier correctly predicted 'infected' 1822 times and 'uninfected' 1882 times. In reality, 1890 cells are 'infected' and 1890 cells are 'uninfected'.

F. LOSS CONVERGENCE

To investigate the effect of highlighting on the model, we report the loss behavior for different scenarios. Overall, the loss obtained with the highlighting approach is better than that obtained with the baseline. The highlighting approach achieved the lowest loss and reached convergence much faster than the baseline. Loss graphs of the proposed and conventional approaches are shown in Figure 10. We can observe that the loss slope of the proposed approach is steeper than that of the baseline. As further evidence for our idea, we applied our strategy to the complex models, Resnet and Mobilenet. The loss difference between the proposed approach (highlighted images) and the baseline (non-highlighted images) is clearly visible for Model-M and Model-S. We demonstrate that the highlighting approach reaches convergence faster and has a steeper slope. The proposed approach still results in lower losses for Resnet and Mobilenet. Given how powerful these models are, it is reasonable to expect that the baseline can also achieve performance comparable to the proposed approach. Overall, we found that our proposed strategy can speed up training and reach convergence faster.

G. MAGNITUDE OF WEIGHT COMPARISON

Representative regularization techniques are weight decay [34] and dropout [40]. Adding all the parameters (weights) to the loss function would be one way to penalize complexity. This process of regularization is often referred to as weight decay because it reduces the weights. Our highlighting approach also reduces the weights; therefore, it can be considered a regularization approach. The goal of regularization is to avoid overfitting by penalizing weights in order to fit the function appropriately. The baseline weight norm grows more steadily than that of the proposed approach, resulting in larger weights. Regularization encourages the weight values to decrease. Our highlighting method reduces the weight parameters, which can be regarded as regularization. We can state that the proposed approach is superior to the baseline and generalizes well. Figure 11 shows the weight norms of Mobilenet and Resnet.

V. CONCLUSION

In this work, we present a new training strategy based on highlighting infected regions to improve classification accuracy. The advantage of the proposed method is verified on the NIH malaria dataset. To show the efficiency of the proposed approach, we tested it on four different sized models, ranging from simple to complex (Resnet). The results showed that the classification accuracy of the proposed method is higher than the baseline, especially with the smaller model (Model-S), obtaining 97.21% compared to the baseline of 94.49%. Moreover, our approach consistently outperformed the baseline independent of model size. We further showed the superiority of the proposed approach by analyzing weight parameters in terms of regularization.
The proposed approach reduces the weights and improves generalization. A limitation of the proposed approach is that it is customized specifically for this malaria dataset. In the future, we plan to develop a novel method for automatically highlighting a region of interest in complex images, or any images in general, and also to expand the study to applications in detection, segmentation, and scene analysis by highlighting the detection area.
Silent progression in disease activity–free relapsing multiple sclerosis

Objective: Rates of worsening and evolution to secondary progressive multiple sclerosis (MS) may be substantially lower in actively treated patients compared to natural history studies from the pretreatment era. Nonetheless, in our recently reported prospective cohort, more than half of patients with relapsing MS accumulated significant new disability by the 10th year of follow-up. Notably, "no evidence of disease activity" at 2 years did not predict long-term stability. Here, we determined to what extent clinical relapses and radiographic evidence of disease activity contribute to long-term disability accumulation.

Methods: Disability progression was defined as an increase in Expanded Disability Status Scale (EDSS) of 1.5, 1.0, or 0.5 (or greater) from baseline EDSS = 0, 1.0-5.0, and 5.5 or higher, respectively, assessed from baseline to year 5 (±1 year) and sustained to year 10 (±1 year). Longitudinal analysis of relative brain volume loss used a linear mixed model with sex, age, disease duration, and HLA-DRB1*15:01 as covariates.

Results: Relapses were associated with a transient increase in disability over 1-year intervals (p = 0.012) but not with confirmed disability progression (p = 0.551). Relative brain volume declined at a greater rate among individuals with disability progression compared to those who remained stable (p < 0.05).

Interpretation: Long-term worsening is common in relapsing MS patients, is largely independent of relapse activity, and is associated with accelerated brain atrophy. We propose the term silent progression to describe the insidious disability that accrues in many patients who satisfy traditional criteria for relapsing-remitting MS.

One of the defining clinical features for many multiple sclerosis (MS) patients is relapses - episodes of neurological worsening that evolve over hours or days and then last for days or weeks, followed by varying degrees of recovery. 1 MS relapses are typically accompanied by radiographic changes on magnetic resonance imaging (MRI), such as the development of new lesions on T2-weighted imaging or new contrast-enhancing lesions. 2 Relapses contribute to meaningful neurological disability over the short term 3 ; however, whether relapses also contribute substantially to long-term disability is controversial. Some observational studies found no substantial impact of relapses on long-term disability progression in participants who had reached specific MS milestones. 4-7 In contrast, natural history studies suggest that relapse frequency and recovery from relapses within the first few years of disease onset contribute to long-term disability. 8-12 A recent study using the large MSBase dataset found that relapses contribute, at least in part, to long-term disability. 13 The generally accepted model of MS disability proposes a 2-stage process in which poor recovery from relapses underlies disability progression during the relapsing phase of MS, which is followed by insidious decline in function caused by neurodegeneration in the secondary progressive disease phase. 14 Whether the radiographic counterparts of MS relapses, documented by the occurrence of new T2 lesions or gadolinium-enhancing lesions seen on brain MRI, also contribute to long-term disability is also controversial.
Some studies have shown that the number of lesions seen on initial brain MRI or the evolution of new lesions following relapsing disease onset correlates with long-term disability, 15,16 whereas others point to a clinicoradiological paradox inherent in MS: that the radiographic burden of tissue injury correlates poorly with disability worsening. 17,18 A methodological limitation of studies that have investigated the contributions of relapsing activity to long-term disability is patient retention. Many studies, including long-term follow-up studies from clinical trial cohorts, are difficult to interpret because substantial proportions of participants are lost to follow-up (33-59% retention) [19][20][21][22][23][24][25] or because interval data are missing. We sought to test the 2-stage hypothesis of disability progression by defining the contribution of relapses and radiographic disease activity to long-term disability and brain atrophy using the well-phenotyped University of California, San Francisco (UCSF) MS-EPIC (expression/genomics, proteomics, imaging, and clinical) dataset. The MS-EPIC dataset is a single-center prospective observational cohort of contemporary, actively treated MS patients who have been evaluated annually since July 2004, with long-term data ascertained in 91% of study participants. We previously reported that rates of worsening and evolution to secondary progressive MS (SPMS) were substantially lower when compared to natural history studies from the pretreatment era. Nonetheless, more than half of patients with relapsing MS accumulated significant new disability after 1 decade of follow-up. 26 Notably, no evidence of disease activity (NEDA) at 2 years did not predict long-term stability. Because over half of the relapsing-remitting (RRMS) patients in the EPIC dataset developed clinically significant disability worsening by 10 years, but were still considered by their treating physicians to have relapsing MS (ie, they had not been reclassified as having developed secondary progressive disease), we sought to determine whether ongoing relapse activity, assessed clinically as relapses or radiographically as new or enlarging focal white matter lesions, might be the primary contributor to this long-term disability worsening. We also sought to determine to what extent relapsing activity contributes to evolution of brain atrophy, an in vivo measure of irreversible tissue injury that correlates with long-term disability.

Patients and Methods

The UCSF EPIC cohort is a prospective, longitudinal, actively treated, single-center cohort of patients, now in its 14th year of follow-up. The UCSF Institutional Review Board reviewed and approved the study protocol. Written informed consent was obtained for all participants. This study was conducted in accordance with the Declaration of Helsinki. The 10-year, postbaseline follow-up of this cohort was previously reported. 26 Although the cohort enrolled participants with clinically isolated syndrome (CIS), RRMS, SPMS, and primary progressive MS, here we considered only those participants who had either CIS or RRMS at entry. For baseline data, the annualized relapse rate (ARR; lifetime) was calculated from the occurrence of the first relapse to baseline using the patient self-reported database. At the end of the study, the ARR was calculated from baseline to the last visit.
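For concreteness, the ARR computation described above amounts to a simple rate over an observation window. A minimal sketch (the function and field names are our own illustration; the EPIC database schema is not published):

```python
from datetime import date

def annualized_relapse_rate(n_relapses: int, start: date, end: date) -> float:
    """ARR = number of relapses divided by the observation window in years.

    For the baseline (lifetime) ARR, `start` is the date of the first relapse
    and `end` is the baseline visit; for the end-of-study ARR, `start` is
    baseline and `end` is the last visit.
    """
    years = (end - start).days / 365.25
    if years <= 0:
        raise ValueError("observation window must be positive")
    return n_relapses / years

# Example: 3 self-reported relapses between the first attack and baseline
print(annualized_relapse_rate(3, date(2000, 6, 1), date(2004, 7, 15)))  # ~0.73
```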
If missing data were due to MS disability in the Multiple Sclerosis Functional Composite (MSFC) assessment, then 99 seconds was used for the Timed 25-Foot Walk (T25FW), 300 seconds was used for the 9-Hole Peg Test (9HPT), and scores of 0 were used for the 3-second Paced Auditory Serial Addition Test (PASAT) and the Symbol Digit Modalities Test (SDMT). The R median algorithm (type 7) was used to calculate median and interquartile scores. Some patients had partial visits due to missing MSFC at last follow-up; in this case, the most recent available MSFC visit was used. As previously reported, 26 clinically significant disability worsening was defined as an increase in the Expanded Disability Status Scale (EDSS) 27 of 1.5 points if the baseline EDSS score was 0, 1.0 point if the baseline EDSS score was between 1.0 and 5.0, and 0.5 point for baseline EDSS scores of 5.5 or higher (this rule is encoded in the sketch below). Relapses were patient-reported and assessed systematically for the prior year at each annual visit. Relapses were defined as new, focal neurological symptoms evolving over days to weeks that lasted for >24 hours, were not associated with an intercurrent infection, and were typically followed by at least partial recovery of function over time. Examples of relapses included vision loss, double vision, weakness in one or more limbs, sensory disturbances including paresthesias, or loss of coordination including imbalance. Symptoms that were nonspecific, such as headache, malaise, and generalized weakness, or that were insidious and progressive in nature were not considered relapses. To assess the effect of relapses on disability, we compared 1-year intervals, with disability assessments performed annually. For each 1-year interval, a short-term impact of relapse on MS disability was defined as an increase in EDSS between the 2 annual visits during a year in which a relapse occurred. As such, each subject was considered serially for annual assessment of the impact of relapses on MS disability. Confirmed disability was defined as worsening maintained for 2 consecutive annual visits. Lastly, long-term worsening was defined as an increase in disability between baseline and the midpoint of the study (median years = 5, range = 4-6), with confirmation of worsening 5 years thereafter (sustained worsening). Disability was also assessed using the T25FW, 9HPT, and PASAT. Because the SDMT became generally available for MS studies during the course of the study, this test of cognitive function was performed after the 5th study year. Clinically meaningful worsening was defined as a 20% increase in the T25FW (average of 2 trials), a 20% increase in the 9HPT time for either arm (single trial), an increase in the reliable change index for the PASAT, and a 4-point worsening in the SDMT. The contribution of relapse to disability was determined by the Pearson chi-squared test with Yates continuity correction or the Fisher exact test. To simplify the analysis of treatment on relapses, we grouped therapies into 2 tiers: "platform" (eg, interferons, glatiramer acetate) and "high-potency therapy" (eg, natalizumab, mitoxantrone, rituximab, cyclophosphamide). 26 We also considered a 3-tiered model grouping therapies by relative relapse rate reduction: modest (interferons, glatiramer acetate, teriflunomide), moderate (fingolimod, dimethyl fumarate), and high (natalizumab, anti-CD20 monoclonal antibodies, alemtuzumab) efficacy.
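The baseline-dependent EDSS threshold and the stated MSFC imputation values translate directly into code. A minimal sketch of the rule as stated above (illustrative only, not the authors' analysis code):

```python
def edss_worsening(baseline: float, follow_up: float) -> bool:
    """Clinically significant worsening per the study definition:
    +1.5 from baseline EDSS 0, +1.0 from baseline 1.0-5.0,
    +0.5 from baseline 5.5 or higher (EDSS steps are 0, 1.0, 1.5, ...)."""
    if baseline == 0:
        threshold = 1.5
    elif baseline <= 5.0:
        threshold = 1.0
    else:
        threshold = 0.5
    return follow_up - baseline >= threshold

# Imputation used when MSFC components were missing because of MS disability
MSFC_IMPUTATION = {"T25FW_s": 99.0, "9HPT_s": 300.0, "PASAT": 0, "SDMT": 0}

assert edss_worsening(0.0, 1.5) and not edss_worsening(2.0, 2.5)
```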
Of these, we present the simpler, 2-tiered model because oral treatment options and other monoclonal antibodies were not generally available during the first 6 years of the study. Multivariate logistic regression was used to model the impact of treatment tier on relapses, with disease duration, disease course, and HLA-DRB1*15:01 included as covariates. HLA-DRB1*15:01 was included in this and other analyses because we previously reported an effect of this allele on certain clinical features of MS. 28,29 Age-adjusted baseline brain volume was calculated by regression (baseline brain volume ~ age + sex + disease duration) and was divided into quartiles. Logistic regression was used to assess relationships between long-term disability worsening and age-adjusted brain volume. A linear mixed model was developed to consider the impact of relapses and disability worsening on relative brain volume loss. Four subject groups were considered: (1) participants with increased disability but without relapses, (2) participants without increased disability and without relapses, (3) participants with increased disability and with relapses, and (4) participants without increased disability but with relapses. The annual percentage change in relative brain volume is defined as the slope over follow-up years divided by the relative brain volume at baseline. If brain volume is a_0 and cerebrospinal fluid (CSF) volume is b_0 at time 0, and brain volume is a_10 and CSF volume is b_10 at time 10, then the annualized percentage change of brain volume is (a_10 - a_0) / (10 a_0). The annualized percentage change of relative brain volume is [a_10/(a_10 + b_10) - a_0/(a_0 + b_0)] / [10 a_0/(a_0 + b_0)] = (a_10 b_0 - a_0 b_10) / [10 a_0 (a_10 + b_10)]. The ratio of the percentage change of brain volume to the percentage change of relative brain volume is therefore (a_10 - a_0)(a_10 + b_10) / (a_10 b_0 - a_0 b_10); in this dataset, the two percentage-change measures differ by a factor of 5 to 6 (see the symbolic check below). To assess the effect of new brain lesions on silent progression and on brain atrophy in treated and untreated participants, 4 subgroups were identified: (1) treated participants without new lesions (new T2 or Gd+), (2) treated participants with new lesions, (3) untreated participants without new lesions, and (4) untreated participants with new lesions. Logistic regression was used to identify influences on disability, and a linear mixed model was used for brain atrophy. The MRI acquisition protocol and analytic pipelines were previously published. 26 Briefly, the Lesion Segmentation Tool 30 was used to segment lesions on fluid-attenuated inversion recovery images, corrected using the Mind Control platform. 31 These lesions were used as input to SIENAX 32 using optibet 33 for brain extraction. Registration and multinormal segmentation methods were used to propagate lesions backward and forward within a subject over time (unpublished methods). Gadolinium-enhancing lesions were visually assessed on postcontrast T1-weighted MRI. Gadolinium was not routinely administered for all brain MRI studies at long-term follow-up.

Results

Relapses Are Associated with Short-Term but Not Confirmed or Long-Term Disability Worsening

The baseline characteristics of this cohort are described in Table 1 (last available MRI scans: baseline only, n = 5; year 1, n = 4; year 3, n = 8; year 4, n = 10; year 5, n = 1).
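The volume-change algebra reconstructed in the Methods above can be verified symbolically. A quick check with sympy (our verification, not part of the original analysis):

```python
import sympy as sp

a0, b0, a10, b10 = sp.symbols("a0 b0 a10 b10", positive=True)

# Annualized percentage change of brain volume over 10 years
pct_brain = (a10 - a0) / (10 * a0)

# Relative brain volume = brain / (brain + CSF); annualized percentage change
r0, r10 = a0 / (a0 + b0), a10 / (a10 + b10)
pct_rel = (r10 - r0) / (10 * r0)

# Matches the closed form quoted in the Methods
assert sp.simplify(pct_rel - (a10 * b0 - a0 * b10) / (10 * a0 * (a10 + b10))) == 0

# Ratio of the two percentage changes; equivalent to
# (a10 - a0)*(a10 + b10)/(a10*b0 - a0*b10)
print(sp.simplify(pct_brain / pct_rel))
```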
Relapse occurrence was associated with clinically meaningful EDSS worsening at the next annual examination; 29.7% of yearly intervals in which participants experienced relapses were associated with disability worsening at the next visit, compared to 22.7% of yearly intervals during which participants did not relapse (odds ratio = 1.44, 95% confidence interval = 1.09-1.90, p = 0.012; see the arithmetic check below). However, there was no impact of relapses on confirmed disability worsening, defined as disability worsening at the visit following the relapse and confirmed at the subsequent year; 12.9% of yearly intervals with relapse, and 14.4% without relapse, were associated with confirmed worsening (p = 0.551; Fig 1). Similarly, there was no association between relapses during the first 6 study years and long-term disability worsening. For this analysis, long-term follow-up was assessed at a median of 11 years after baseline (mean = 10.68, standard deviation = 0.65, minimum = 9 years, maximum = 11 years). The proportion of patients with clinically significant long-term disability worsening did not differ between those who experienced relapses during the first 6 years of the study (38.1%) and those who were relapse-free (35.9%; p = 0.736). The long-term outcomes of the relapsing population are summarized in Table 2. Pyramidal and cerebellar functional scale scores worsened in participants with increased long-term disability independently of relapse occurrence. Baseline scores on these scales were not predictive of long-term outcomes (Supplementary Table 1). Relapses were associated with short-term worsening of the T25FW (20% increase) with borderline statistical significance (p = 0.039; see Fig 1); 9.1% of yearly intervals in which participants experienced a relapse were associated with this increase in the T25FW, compared to 5.7% of yearly intervals without relapses. Relapses were not associated with a significant confirmed change in the T25FW or with long-term worsening of the T25FW. Relapses were not associated with clinically significant worsening (20% increase) of the 9HPT (see Fig 1); 15.5% of yearly intervals in which participants experienced a relapse were associated with a clinically significant increase in the 9HPT, compared to 11.7% of yearly intervals without relapse (p = 0.091). Similarly, there was no association between relapses and confirmed change in the 9HPT (increased in 4.2% of those with relapses vs 3.2% in those without, p = 0.475) or long-term worsening (p = 0.228). For the PASAT, short-term worsening occurred in 12.6% of annual intervals during which participants relapsed, compared to 11.1% of intervals without relapse (p = 0.500; see Fig 1). There was no discernible effect of relapses on confirmed worsening of the PASAT (p = 0.902) or long-term worsening (p = 1.000). Data on the SDMT were limited to assessments performed after the 5th study year, and no correlation between relapses and subsequent worsening on this test was found in the near term (p = 0.819) or for confirmed worsening (p = 0.755).

Clinical and Genetic Factors Associated with MS Relapses

Binomial logistic regression was used to assess the association of treatment with MS disease-modifying therapies and relapse occurrence (Table 3). The comparison of SPMS to RRMS subjects showed a numerically lower risk of relapse occurrence in SPMS patients, although this comparison was not statistically significant.
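The odds ratio quoted above can be recovered directly from the reported proportions (the 95% confidence interval cannot, since it requires the underlying interval counts). A quick arithmetic check:

```python
def odds_ratio(p1: float, p2: float) -> float:
    """Odds ratio from two proportions: odds(p1) / odds(p2)."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# 29.7% of relapse intervals vs 22.7% of relapse-free intervals worsened
print(round(odds_ratio(0.297, 0.227), 2))  # 1.44, matching the reported OR
```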
In the same model, the relatively small group of subjects classified with an unclear disease course (RRMS patients suspected of transitioning to SPMS) had a similar risk of relapse as SPMS subjects. Platform therapies were 2.4-fold more likely to be associated with relapses compared to high-potency therapies. The major MS susceptibility allele, HLA-DRB1*15:01, was associated with an increased risk of relapse, albeit with marginal statistical significance, suggesting a potential genetic contribution to relapses, although the relatively small sample size limited analysis of copy number. Thus, a longer disease duration, a secondary progressive versus relapsing disease course, treatment with disease-modifying therapies, and absence of the HLA-DRB1*15:01 allele were associated with lower relapse risk (see Fig 1), although the overall model accounts for only a fraction of relapse variance (McFadden pseudo-R^2 = 0.034). Remarkably, factor analysis (analogous to principal component analysis but with both continuous and categorical variables contributing to individual clusters) of these mixed data showed that participants clustered together by annual relapse frequency based on the clinical and genetic factors listed in Table 3 (Fig 2). That subjects who are grouped together by commonality of these factors also share similar numbers of relapses suggests that the variables identified in the binomial logistic regression are biologically relevant contributors to relapse occurrence.

White Matter Lesions Contribute to MS Relapses

As expected, radiographic disease activity, defined by new brain lesions on T2-weighted imaging, correlated strongly with clinical relapses (see Fig 1). New lesions (defined as T1 gadolinium-diethylenetriamine pentaacetic acid enhancing lesions or new T2 lesions) were detected in 47.1% of annual intervals during which participants relapsed, compared with 25.3% of annual intervals without relapse (p = 4.0 × 10^-16). However, the development of new T2 lesions did not correlate with EDSS worsening measured at the next annual visit (p = 0.521), with confirmed worsening (p = 0.430), or with long-term worsening (p = 0.116).

[Fig 2 caption: Factor analysis of the mixed clinical and genetic data (factors listed in Table 3) that contribute to relapse frequency. Participants appear to cluster together based on annual relapse frequency. Participants with no relapses cluster separately from participants with more than one relapse. Even participants with a single relapse appear to cluster together as a subset of participants with no relapses.]

Long-Term Brain Volume Loss and Disability Progression

The relationship between relapses, disability progression, and changes in relative brain volume was assessed using a linear mixed-effects model in the 4 subject groups that were defined by the presence or absence of relapse and/or increased disability (Table 4). At baseline, a significant effect on relative brain volume loss was observed only in the group of subjects with worsening disability and relapses. Age at baseline, disease duration at baseline, male sex, and years of follow-up were all associated with greater relative brain volume loss. The interaction term of subject group with years of observation in the study was significant for all 3 groups relative to the reference group of clinically stable participants (without disability worsening and without relapses). This interaction term indicates that each subject group modifies the impact of time on relative brain volume loss.
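A linear mixed model with a group-by-time interaction and per-subject random effects, as described above, could be specified along the following lines with statsmodels. The column names are hypothetical; the paper does not publish its model code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per subject per annual visit (assumed columns)
df = pd.read_csv("epic_long.csv")  # subject, years, rel_brain_vol, group, ...

model = smf.mixedlm(
    "rel_brain_vol ~ years * group + age_baseline + male"
    " + disease_duration + hla_drb1_1501",
    data=df,
    groups=df["subject"],   # random intercept per participant
    re_formula="~years",    # random slope for time in study
)
result = model.fit()
print(result.summary())  # the years:group terms test whether each group
                         # modifies the rate of relative brain volume loss
```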
The rates of relative brain volume loss for participants with either relapses or increasing disability were therefore significantly greater than the rate found in clinically quiescent participants (Fig 3, Table 5). Significant differences were found for comparisons between the "stable disability without relapse" group and the "increased disability without relapse," "increased disability with relapse," and "stable disability with relapse" groups, but no differences were observed for other comparisons. The most statistically significant comparison was for the group of subjects who experienced increased disability without relapse in comparison to the group of subjects who were clinically stable. Nonsignificant p values suggest that there is no difference between the 2 groups being compared in rates of relative brain volume loss. Although relapses may contribute to relative brain volume loss in subjects without increasing disability (p = 0.027), there was no apparent additional impact of relapses in the group of subjects with worsening disability (p = 0.486). Similarly, there was no apparent additional impact of increased disability in the group of subjects with relapses (p = 0.999). The baseline brain parenchymal fraction was not significantly different between the 4 groups, and, over the course of the study, brain volume declined in each group. In the group of patients who did not experience relapses during the first 6 years of the study, relative brain volume declined at a more pronounced rate in participants whose disability progressed compared to those who remained stable (see Fig 3). Among these nonrelapsing participants, CSF volume marginally increased in participants who experienced long-term increased disability (p = 0.022, nonsignificant following multiple comparison correction). Changes in T2 lesion volume, white matter volume, gray matter volume, cortical gray matter volume, and brain volume were similar between these 2 groups.

Baseline Brain Volume and Disability Progression

Several models were developed to determine whether greater baseline age-adjusted brain atrophy placed participants at greater risk for disability (a schematic sketch of this age-adjustment analysis appears at the end of this section). Results indicated that age-adjusted baseline brain atrophy was associated with both an increased risk of long-term disability (Supplementary Table 3) and silent progression (Supplementary Table 4). By contrast, the occurrence of relapses did not appear to worsen the risk of long-term disability (Supplementary Table 5).

Associations with Treatment

To control for potentially deleterious effects of treatment on brain volume loss, we first assessed the impact of platform therapies versus no treatment on brain volume loss (Supplementary Table 2a) and then assessed the impact of treatment escalation to natalizumab (Supplementary Table 2b), the most commonly used escalation therapy in this dataset, on brain volume loss using linear mixed models. These analyses showed that treatment with platform therapy reduced brain atrophy and that escalation to natalizumab is potentially associated with further stabilization of brain volume loss, despite natalizumab-treated participants having lower baseline brain volumes, which we interpret as a marker of disease severity.
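In outline, the age-adjustment, quartile construction, and logistic regression described above might look as follows (pandas/statsmodels; variable names are assumptions, not the authors' code):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("epic_baseline.csv")  # hypothetical baseline table

# Regress baseline brain volume on age, sex, and disease duration;
# the residual serves as the "age-adjusted" baseline brain volume
adj = smf.ols("brain_vol ~ age + sex + disease_duration", data=df).fit()
df["brain_vol_adj"] = adj.resid

# Split into quartiles (lowest quartile = most atrophy after adjustment)
df["vol_quartile"] = pd.qcut(df["brain_vol_adj"], 4, labels=[1, 2, 3, 4])

# Logistic regression of long-term disability worsening (0/1) on quartile
logit = smf.logit("worsened ~ C(vol_quartile)", data=df).fit()
print(logit.summary())
```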
Of the 138 participants who experienced disability worsening, 46 were clinically recognized as having developed or developing SPMS, whereas the remaining 92 were still considered to have an RRMS disease course at the time of the last observation. Of these 92 RRMS participants, 34 experienced long-term disability worsening without relapse. In comparison to these 34 participants, who experienced progression that was not clinically recognized, the patients who developed clinically recognized SPMS scored higher on the EDSS and Multiple Sclerosis Severity Score and had longer disease durations at baseline (Table 6). However, there was no difference in rates of brain volume loss or T2 burden of disease between the participants with disability worsening who were classified as having SPMS versus those who remained classified as having RRMS, suggesting an underlying physiologic/anatomic similarity. In contrast, participants with silent progression had lower EDSS scores at baseline and a shorter disease duration, yielding a difference in the baseline Multiple Sclerosis Severity Score. Logistic regression was used to analyze whether the development of new lesions in treated and untreated participants was associated with long-term disability worsening (Supplementary Table 6a). Although statistically significant effects were not observed for any subgroup, this analysis suggested that participants with new lesions could be at increased risk for long-term disability independent of treatment. A linear mixed model was used to assess whether new brain lesions in treated and untreated participants influenced brain atrophy. Rates of relative brain volume loss were comparable across these 4 groups, without a trend to suggest that new lesions influenced long-term brain atrophy (Supplementary Table 6b).

Sensitivity Analyses

To address potential confounding by inclusion of CIS subjects who remained stable over the long term, we repeated our analyses excluding these subjects. Excluding stable CIS subjects (n = 13) did not significantly alter the observations regarding the short-term and long-term impact of relapses on disability (Supplementary Table 7a). Similarly, excluding stable CIS participants did not influence the multivariate regression model of relative brain volume over time (Supplementary Table 7b).

Discussion

These data reveal that long-term worsening is common in RRMS patients and is largely independent of relapses or new lesion formation on brain MRI. Thus, insidious progression accrues in many early RRMS patients who remain classified as having relapsing MS. The current definition of SPMS is worsening of disability independent of relapses over at least a 6-month interval. Our data suggest that this process occurs earlier than is clinically recognized by either patients or physicians; 92 of the 138 patients who experienced insidious worsening of clinically meaningful disability in this dataset were still considered by their clinicians to have RRMS. It is possible that the loss of function over time is so gradual as to be unnoticed by the patient or physician. Typically, these patients have low EDSS scores and are for the most part fully functional. Many clinicians do not consider a diagnosis of SPMS in patients with EDSS scores of 3 or less. The recently suggested MSBase definition of SPMS requires a minimum EDSS score of 4.0. 34 The EDSS is nonlinear, and patients with scores <3 are less likely to show year-to-year changes than those with scores between 3 and 6.
Moreover, during the relapsing phase, disability measures are confounded by clinical attacks followed by variable recovery. All of these factors can obscure recognition of an underlying neurodegenerative process that we labeled silent progression to highlight its subtle emergence over the course of RRMS. It seems likely that the same underlying process that causes silent progression is responsible for SPMS when the march of clinical worsening is more evident. In this regard, it is notable that patients experiencing silent progression had accelerated brain atrophy over the long-term course of the EPIC study, as well as more age-adjusted brain atrophy at the time of their initial enrollment. Our data are consistent with 2 simultaneous processes: one results in the appearance of new focal demyelinating lesions visible on brain MRI that correlate with relapses, and the other is more diffuse and contributes to brain atrophy. This second process of global tissue injury is largely independent of relapses or focal lesion formation and appears to represent the most important contributor to long-term MS disability. That brain volume loss occurs early in MS and correlates with long-term disability was shown in prior studies. [35][36][37] However, uncertainty remains as to whether this process is dependent or independent of the development of new focal lesions. Because we found no correlation of new brain MRI lesions with long-term disability, our data are more consistent with the hypothesis that either diffuse injury or perhaps focal lesions too small to be detected by current methods lead to irreversible tissue loss. The pathologic substrate responsible for progressive tissue injury in MS is likely to result from some combination of a white matter axonopathy and direct neuronal injury. Slowly enlarging white matter lesions, with chronic inflammation at the leading edge, are one possible mechanism for progressive symptoms, and more diffuse injury might also play a role. 38 Histopathological studies in chronic MS show that focal inflammatory changes, microglial activation, and astrocytosis typically accompany axonopathy and myelin injury. [39][40][41] Furthermore, chronic demyelination appears to predispose axons to early death. 42 In addition, ectopic B-cell-containing immune aggregates located in the overlying meninges and in Virchow-Robin perivascular spaces could contribute to progressive cortical injury. Thus, histopathological studies support a model in which both microscopic and macroscopic tissue injury occur with CNS inflammation. These data also indicate that silent progression is not an invariable accompaniment of RRMS, and that measurement of whole brain atrophy might serve as a surrogate marker to identify patients with insidiously progressive ongoing disability. Incorporating a threshold for preservation of brain volume was proposed as a component for no evidence of disease activity (NEDA-4). 43 For fingolimod-treated patients, incorporating a minimal acceptable threshold for preservation of brain volume reduced the proportion of patients meeting the NEDA criteria from 31% for NEDA-3 to 19.7% for NEDA-4, underscoring the remaining unmet need for therapies that are more effective in arresting axonopathy and brain volume loss than current treatments. Regulatory agencies have approved more than a dozen therapies for treatment of RRMS.
All are proven, with varying degrees of efficacy, to reduce the occurrence of clinical relapses and prevent development of focal lesions measured on brain MRI. For the most part, the selection of which treatment to use is based on these measures of efficacy as well as consideration of safety and tolerability. Our observation that tissue injury distinct from focal white matter lesions underlies long-term disability in RRMS suggests that treatment selection should also consider the impact on silent progression and on associated measures of brain atrophy. Importantly, recent studies indicate that the high-efficacy therapies natalizumab 44 and ocrelizumab 45 reduce progression independent of relapse activity in RRMS, although these effects were partial. Additional studies are needed to determine whether long-term benefits on disability prevention are mediated by prevention of brain volume loss. Our conclusions in regard to the impact of relapses on long-term disability are consistent with those from the British Columbia cohort 7 but appear to diverge from the observations from MSBase. 13 One explanation for this difference could be sample size. The MSBase group drew their conclusions from a larger dataset of 2,466 relapsing-onset participants and has greater power to detect predictors of long-term disability with small effect sizes. However, advantages of the current EPIC study are its prospective ascertainment, systematic MRI acquisition and analysis, and a high rate of retention that reduces the impact of bias introduced by missing information from participants lost to follow-up. The mean annual relapse frequency was slightly higher in MSBase than EPIC (0.36 compared with 0.25, p < 0.001), possibly increasing the long-term effect of attacks on disability. Differences in prescribing practice between these datasets could also play a role, as treatment likely influences not only relapse frequency but also relapse severity. The methods used for assessing the impact of relapses on long-term disability between these 2 studies are also different; in EPIC we used a minimal threshold for disability worsening from our baseline observation and correlated relapse occurrence with both short-term and long-term disability worsening, whereas MSBase used a linear-regression model that considered the impact of the ARR over the 10 years of study on median 10-year EDSS change. The 16-year follow-up study of the pivotal interferon β-1b trial 46 also found an impact of relapses on long-term disability. Participants in this study had higher relapse rates (1.2 ARR) and baseline EDSS scores (3.0), indicating that this cohort was more clinically active and had worse disability compared to the EPIC dataset. At the time of the pivotal interferon β-1b trial, therapeutic options for escalation treatment were limited. Perhaps even more importantly, these studies used different analytic methods: the interferon β-1b long-term follow-up study assessed whether the ARR during the first 2 years of the study correlated with increased EDSS by year 16. When relapses have been correlated with long-term disability, their impact typically was observed primarily during the first 2 to 5 years from disease onset. [8][9][10][11][12] With earlier diagnosis of MS based on evolving diagnostic criteria and hence earlier treatment, it seems likely that many patients with high relapse frequency would be treated with disease-modifying therapies, thereby attenuating the association between early frequent relapses and long-term disability.
Given that the mean disease duration at baseline in the EPIC dataset was 7.6 years (see Table 1), the majority of the observations regarding relapses in this dataset occur after the first 5 years of disease onset. Therefore, due to differences in the windows of observation, our finding that relapses do not contribute to long-term disability may be consistent with these prior studies. Lastly, our findings should not be interpreted to suggest that MS relapses are without clinical significance. Within this dataset, in addition to the impact of relapses on short-term disability, there are circumstances in which severe relapses caused permanent disability (data not shown). Rather, our results argue that long-term disability in RRMS is not primarily driven by cumulative injury from relapses. Our study has several limitations. We were not able to identify a statistically significant impact of treatment on long-term disability or brain volume loss; however, this dataset is likely underpowered to detect small treatment effects mediated through reduction of new white matter lesions detected by MRI. In addition to the relatively small size of our dataset, information on relapses was patient-reported and thus is not directly comparable to clinician-validated relapses obtained from controlled clinical trials. In the clinical trial setting, participants are assessed within a defined window of relapse onset, and relapses are defined by an objective change in EDSS score. Such rigorous relapse assessment was beyond the scope of our study, which was designed for long-term characterization of the MS phenotype. We therefore relied on patient-reported relapses that were assessed through structured interview. When our participants were clustered by factor analysis of mixed data (see Fig 2), participants with the same relapse frequency grouped together, suggesting validity of self-reported relapses. Moreover, in the CombiRx study, the impact of treatment on patient-reported relapses was similar to that of protocol-defined relapses, suggesting that patient-reported relapses are likely valid. 47 Another potential limitation is the possibility that focal gray matter lesions or spinal cord lesions, not quantified in this study, could contribute to long-term disability. Finally, results obtained from any single-center design may not be replicated in other datasets. Work is underway to address this issue; we have partnered with other groups who have similar deeply phenotyped long-term datasets for the purpose of increasing statistical power and performing validation studies for these and other observations. 48 Despite these limitations, the characteristics of this dataset are consistent with previously published observations in that MS relapses were associated with new lesions on brain MRI and were associated with increased short-term MS disability. 2,3 The consistency of our findings regarding short-term outcomes lends credence to our longer-term observations. In summary, the high degree of effectiveness of MS therapies against clinical attacks and new white matter lesions made it possible to prospectively assess long-term outcomes in RRMS when these elements of focal disease were silenced.
The appearance of silent progression during the RRMS phase and its association with brain atrophy suggest that the same process that underlies SPMS likely begins far earlier than is generally recognized and support a unitary view of MS biology, with both focal and diffuse tissue destructive components, and with inflammation and neurodegeneration occurring throughout the disease spectrum. In addition to brain atrophy, other markers of neural degeneration, such as quantitative spinal cord imaging, 49 optical coherence tomography, 50 and serum neurofilament light chains, 51 may also prove useful in identifying patients with silent progression. Moreover, as relapses and focal white matter lesions are brought under excellent control by disease-modifying therapies for RRMS, the effectiveness of these agents against silent progression is likely to represent a key determinant of their relative value.
Management of Achilles and patellar tendinopathy: what we know, what we can do

Tendinopathies are challenging conditions frequent in athletes and in middle-aged overweight patients with no history of increased physical activity. The term "tendinopathy" refers to a clinical condition characterised by pain, swelling, and functional limitations of tendons and nearby structures, the effect of a chronic failure of the healing response. Tendinopathies give rise to significant morbidity, and, at present, only limited scientifically proven management modalities exist. The Achilles and patellar tendons are among the most vulnerable tendons, and their tendinopathies are among the most frequent lower extremity overuse injuries. Achilles and patellar tendinopathies can be managed primarily conservatively, obtaining good results and clinical outcomes, but, when this approach fails, surgery should be considered. Several surgical procedures have been described for both conditions, and, if performed well, they lead to a relatively high rate of success with few complications. The purpose of this narrative review is to critically examine the recent available scientific literature to provide evidence-based opinions on these two common and troublesome conditions.

Background

In the United Kingdom, soft tissue disorders have a prevalence of 18 cases per 1000 individuals and account for 40% of new rheumatology consultations [1]. Tendons can undergo degenerative and traumatic processes. The most vulnerable tendons are those of the rotator cuff, the long head of the biceps, the wrist extensors and flexors, the adductors, the posterior tibial tendon, the patellar tendon, and the Achilles tendon, with tendinopathy commonly secondary to overload [2], though one third of patients with these pathologies do not practice regular physical activity [3]. Microscopic examination of abnormal tendon tissues normally shows a non-inflammatory process [15] with disordered arrangement of collagen fibres, increased vascularisation [16], and a poor tendency to healing [17]. An angioblastic reaction is present, with a random orientation of blood vessels, sometimes at right angles to collagen fibres [18]. Inflammatory lesions and the presence of granulation tissue are uncommon and, if present, they are associated with tendon ruptures [17]. Six different subcategories of collagen degeneration have been described, but usually degeneration is of either the mucoid or the lipoid variety [19]. The characteristic hierarchical structure of collagen fibres is also lost [11]. Furthermore, in the paratenon, mucoid degeneration, fibrosis, and vascular proliferation, with a slight inflammatory infiltrate, have been reported [20]. In 163 patients (75% of whom participated in nonprofessional sports, particularly running) with classical symptoms and signs of Achilles tendinopathy (AT) for a median of 18 months, changes in collagen fibre structure, with loss of the normal parallel bundles, were evident [21]. The areas of altered collagen fibre structure and increased interfibrillar ground substance exhibit an increased signal at magnetic resonance imaging (MRI) [1,18], and are hypoechoic on ultrasound (US) [22] (Table 1). The aim of the present narrative review is to critically examine the recent available scientific literature to provide an evidence-based opinion regarding these two clinical syndromes, which are among the most common in the athletic population, carry high economic and social relevance, and are not easy to treat.
Achilles tendinopathy

AT is a common cause of disability in many athletes because of the continuous, prolonged, and intense functional demands imposed on the Achilles tendon [23], and is common in runners and in athletes participating in racquet sports, track and field, volleyball, and soccer [24,25]. To date, the incidence and prevalence of AT remain poorly established, given the lack of scientifically sound epidemiological data [26]. AT is common in athletes, accounting for 6-17% of all running injuries [27,28]; among those who participate in repetitive impact activities such as running and jumping, an incidence of 9% and a lifetime prevalence of 52% have been reported in recreational runners [29,30]. The etiopathogenesis of AT remains unclear but is currently considered multifactorial, and an interaction between intrinsic and extrinsic factors has been postulated [11]. Changes in training pattern, poor technique, previous injuries, footwear, and environmental factors, such as training on hard, slippery, or slanting surfaces, are extrinsic factors that may predispose the athlete to AT [11]. Dysfunction of the gastrocnemius-soleus complex, age, body weight and height, pes cavus, marked forefoot varus, and lateral instability of the ankle have also been reported as risk factors [11]. Several other factors may play an important role in the etiopathogenesis of tendinopathies, such as drugs (e.g., fluoroquinolones, in particular ciprofloxacin, and corticosteroids [14,31]), imbalance in matrix metalloproteinase (MMP) activity in response to repeated injury or mechanical strain [32][33][34][35][36], metabolic diseases (e.g., diabetes [37][38][39][40]), and/or genetic predisposition [41][42][43]. AT is clinically characterised by pain and swelling in and around the tendon, mainly arising from overuse, but often presenting in middle-aged overweight individuals with no history of increased physical activity [10]. AT can be categorised as insertional and non-insertional, with different underlying pathophysiology and management options [23,44,45]. Pain is the most common AT symptom, but it is not understood how pain arises [6]: it may originate from both mechanical and biochemical causes [46]. Pain typically occurs at the beginning and a short while after the end of a training session. As the pathologic process progresses, pain may occur during the entire exercise session, and it may interfere with activities of daily living [40]. At clinical examination, patients commonly report pain 2 to 6 cm above the insertion of the tendon into the calcaneus [47]. Commonly used and reliable clinical diagnostic tests for Achilles tendinopathy are palpation of the area to ascertain whether pain is elicited, the painful arc sign, and the Royal London Hospital test [48]. Diagnostic imaging, such as plain radiography, US, and MRI, may be required to verify a clinical suspicion or to exclude other musculoskeletal disorders [49]. The management of AT lacks evidence-based support, and patients with AT are at risk for long-term morbidity with unpredictable clinical outcome [50]. Management is primarily conservative, and many patients show good outcomes. However, if conservative management fails after 6 months, surgery is recommended [51,52].

Conservative management

Despite the morbidity associated with AT in athletes, management is far from scientifically based, and many of the commonly used modalities lack solid scientific support [53,54].
In the last few decades, several non-operative treatment modalities have been introduced, with an increasingly relevant role for local drug injections (such as sclerosing agents, corticosteroids, and high-volume image guided injections (HVIGI)) and physical therapy (e.g., shockwave and ultrasound therapy). Cryotherapy has been regarded as a useful intervention in the acute phase of AT; however, recent evidence in upper limb tendinopathy indicates that the addition of ice did not offer any advantage over an exercise program consisting of eccentric and static stretching exercises [55]. Nonsteroidal anti-inflammatory drugs (NSAIDs) are commonly used, even though AT is not regarded as a classical inflammatory condition. Although NSAIDs may provide some pain relief, they do not actually result in sustained improvement in the healing process [56]. Sclerosing injections can be an option, but contrasting results have been reported [6]. HVIGIs likely produce local mechanical effects, causing the neovascularity to stretch, break, or occlude, achieving pain relief through the destruction of sensory nerves [26]. Several substances have been investigated and injected in and around tendons, including normal saline, corticosteroids, and local anaesthetics [57,58], but it is not possible to draw firm, evidence-based conclusions on their effectiveness [6]. Exercise programs with both eccentric and concentric exercises are widely used as first-line management of AT, and no studies report adverse effects [6]. Eccentric exercises are superior to wait-and-see treatment [59], and both eccentric and concentric exercises could be considered equally good for patients with AT. However, given the lack of high-quality studies with clinically relevant results, no strong conclusions can be made regarding the effectiveness of eccentric training (compared with control interventions) in relieving pain, improving function, or achieving patient satisfaction [6]. The treatment regime most commonly used comprises 3 sets of 15 repetitions, carried out twice daily, 7 days a week for 12 weeks (see the illustrative calculation below). Conservative treatment with shockwave therapy is proving successful, and moderate evidence indicates that extracorporeal shockwave therapy (ESWT) is more effective than eccentric loading for insertional AT [60] and equal to eccentric loading for midportion AT in the short term. Additionally, there is moderate evidence that combining ESWT and eccentric loading in midportion AT may produce superior outcomes to eccentric loading alone [61]. However, the randomised controlled trials on this subject are statistically and clinically heterogeneous, making conclusions from pooled meta-analyses difficult to interpret [6]. Ultrasound therapy is a widely available and frequently used electrophysical agent in sports medicine, but systematic reviews and meta-analyses have repeatedly concluded that there is insufficient evidence to support a beneficial effect of ultrasound at the dosages currently being used in clinical practice [6]. Recent evidence in patients with AT demonstrated strength deficits in the triceps surae of the affected limb compared with the uninjured side or with an asymptomatic control group [62]. When clinicians approach AT, they may need to optimise rehabilitation, implementing a regimen of calf muscle eccentric exercises with heel lifts. These are effective in decreasing pain, improving ankle function, and reducing joint dorsiflexion and the strain on the Achilles tendon [63].
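For concreteness, the loading volume implied by the eccentric regimen quoted above works out as follows; a trivial illustrative calculation, not a prescription (the attribution in the comment is ours, not the source's):

```python
# Regimen quoted above: 3 sets x 15 reps, twice daily, 7 days/week for
# 12 weeks (an Alfredson-style eccentric loading protocol)
sets, reps, sessions_per_day, days_per_week, weeks = 3, 15, 2, 7, 12

reps_per_day = sets * reps * sessions_per_day          # 90
total_reps = reps_per_day * days_per_week * weeks      # 7560
print(f"{reps_per_day} repetitions/day, {total_reps} over the full programme")
```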
Heel lifts reduce tensile loads on the Achilles tendon, counteracting the effect of footwear observed in the above-mentioned studies and supporting the addition of orthotic heel lifts to footwear in the rehabilitation programme [64].

Surgical management

In 24 to 45.5% of patients with AT, conservative management is unsuccessful, and surgery may be recommended, generally after 6 months of non-operative treatment [65,66]. However, long-standing AT is associated with poor postoperative results, with a greater rate of reoperation being required before reaching an acceptable outcome [67]. Open surgery for tendinopathy of the main body can be considered, using multiple longitudinal tenotomies, which can be supplemented with a side-to-side repair and tendon augmentation or transfer if significant loss of tendon tissue occurs. In chronic Achilles non-insertional tendinopathy, minimally invasive management can be performed. Under local, regional, or general anaesthesia, the patient is placed prone with the ankles clear of the operating table; a tourniquet, if used, is applied to the exsanguinated limb and inflated to 250 mmHg [40]. Generally, the longitudinal incision is made on the medial aspect of the tendon to avoid injury to the sural nerve and short saphenous vein [6]. Based on preoperative imaging studies, the tendon is incised sharply in line with the tendon fibre bundles. Tendinopathic tissue can be identified as it generally has lost its shiny appearance, and it frequently contains disorganised fibre bundles that have more of a "crabmeat" appearance: this tissue is sharply excised [6] (Fig. 1). The remaining gap can be repaired using a side-to-side repair, but in our practice we leave it unsutured. If significant loss of tendon tissue occurs during the debridement, a tendon augmentation or transfer can be considered. The limb is immobilised in a below-knee synthetic weight-bearing cast with the foot plantigrade [40]. Rehabilitation is focused on early motion and avoidance of overloading the tendon in the initial healing phase [6]. The surgical procedure described above is relatively straightforward, but on occasion it may require concomitant transfer of tendon tissue to reinforce the weakened tendon [6]. The peroneus brevis, the ipsilateral free semitendinosus, and the flexor hallucis longus tendons can be used as tendon grafts [68][69][70] (Fig. 2). When conservative management has failed, another less invasive option is multiple percutaneous longitudinal tenotomies, which can be used in patients with isolated tendinopathy with no involvement of the paratenon and a well-defined nodular lesion less than 2.5 cm long [71]. If multiple percutaneous tenotomies are performed in the absence of chronic paratendinopathy, the outcome is comparable to that of open procedures [6]. This procedure can be performed in the clinic under local anaesthesia without a tourniquet, but care is required, since complications are possible even in minimally invasive procedures. The tendon is accurately palpated, and the area of maximum swelling and/or tenderness is marked and checked by US scanning [6]. A #11 surgical scalpel blade is inserted parallel to the long axis of the tendon fibres in the marked area, in the centre of the area of tendinopathy. The cutting edge of the blade points caudally and penetrates the whole thickness of the tendon [40]. During this procedure, full passive ankle flexion is performed, with the scalpel blade being retracted and inclined several times.
Active dorsiflexion and plantar flexion of the foot are encouraged early after surgery [40]. A systematic review of the literature regarding four categories of surgical management (open tenotomy with removal of abnormal tissue, with or without stripping of the paratenon; open tenotomy with longitudinal tenotomy; and percutaneous longitudinal tenotomy) showed successful results in more than 70% of cases for each surgical category [72], but these relatively high success rates are not always observed in clinical practice [72]. In chronic painful AT, there is neovascularisation outside and inside the ventral part of the tendinopathic area [11,73]. A minimally invasive management modality can be considered [40], stripping the neovessels from the Kager's triangle of the Achilles tendon (Fig. 3). This achieves a safe and secure breaking of the neovessels and the accompanying nerve supply, decreasing pain [6]. The procedure is performed using four longitudinal skin incisions, each 0.5 cm long, and may provide greater potential for the management of recalcitrant AT by breaking the neovessels and the accompanying nerve supply to the tendon [6]. The rationale behind this management modality is that the sliding of the Ethibond through the incisions breaks the neovessels and the accompanying nerve supply, decreasing the pain in patients with chronic Achilles tendinopathy [6]. Surgery is successful in up to 85% of patients [65], even though postoperative US examination often shows a widened tendon with hypo-echoic areas. This has led to hypotheses of a possible denervation of the tendon. Non-insertional AT can also be treated with minimally invasive open debridement with resection of the plantaris tendon. This technique has also shown promising results with minimal complications in elite athletes and regular patients with non-insertional AT [75][76][77][78][79]. Whatever the chosen treatment, it is important to stress that patients must be encouraged to bear weight as soon as possible after surgery [6]. A recent systematic review reported that the average success rates of minimally invasive techniques and open procedures were, respectively, 83.6 and 78.9%, while the complication rates were, respectively, 5.3 and 10.5% [80]. The success rates of minimally invasive and open treatments are similar, but there is a tendency for more complications to occur in open procedures. Therefore, minimally invasive surgical treatment would appear to be a useful intermediate step between failed conservative treatment and formal open surgery [81].

Patellar tendinopathy

Patellar tendinopathy (PT) typically presents with anterior knee pain at the inferior pole of the patella. The term "Jumper's knee" was introduced in 1973 by Blazina et al. [82] to describe the condition, as it occurs more commonly in athletes who participate in jumping sports such as basketball and volleyball [83]. Cook et al. [84] found that more than one-third of athletes presenting for treatment for PT were unable to return to sport within 6 months. Several theories about its pathogenesis, including vascular [85], mechanical [86], and impingement-related causes, have been hypothesised, but the most commonly proposed is chronic repetitive tendon overload [87,88]. The increased strain is located in the deep posterior portion of the tendon, especially with increased knee flexion, between the inferior pole of the patella and the rotation centre of the knee [89].
Microscopic failure occurs at high loads within the tendon and leads to alterations at the cellular level, with fibril degeneration that decreases the mechanical properties of the tendon [88]. Studies in vitro and in vivo have shown neovascularisation and an increased quantity of proteins and enzymes that can contribute to tendon degeneration [85]. Other studies have linked vascular endothelial growth factor (VEGF) and MMP activity to tendon breakdown [36,90]. A second hypothesised aetiology is impingement of the inferior patellar pole, shown on MRI during flexion of the knee [91]. The hallmark clinical features of PT consist of pain localised to the inferior pole of the patella [92] and load-related pain that increases with extension of the knee, notably in activities that store and release energy in the patellar tendon [83,93]. Tendon pain occurs with loading and usually decreases almost immediately when the load is removed [94]. In patients with symptomatic PT, the Royal London Hospital test showed lower sensitivity and higher specificity than manual palpation. Both tests should be performed to formulate a clinical diagnosis of PT. Imaging assessment should be performed as a confirmatory test [48]. Imaging does not confirm that PT is the source of pain; indeed, intratendinous abnormalities may be observed using US in asymptomatic individuals [95]. Serial imaging is not recommended because, often, symptoms improve without changes in US or MRI [96]. There is no consensus regarding the best management. Avoidance of jumping activities, with stretching after physical activity, may help in the early phases [92].

Conservative management

As for Achilles tendinopathy, the first line of conservative management is cryotherapy, for its analgesic effect and because it counteracts the neovascularisation process. However, several non-operative treatment modalities have been proposed: oral drugs (NSAIDs and corticosteroids), injections (such as platelet-rich plasma), and physical therapy (e.g., shockwave therapy). NSAIDs are a mainstay for the management of tendinopathic pain, but they are useful only in the short term (7-14 days), in particular in shoulder tendinopathy [97], with no long-term benefit. Corticosteroids have been used in various tendinopathies [98][99][100][101]. Compared to physical therapy, corticosteroids improve walking pain at 4 weeks but, while the physical therapy group had good results at 6 months, the corticosteroid group experienced a relapse [100,101]. Eccentric exercises (EEs) are the most popular non-operative treatment, but there is no consensus on which protocol is best [102]. Many EE protocols are used, with different durations and/or frequencies, drop squats versus slow eccentric movement, a decline board, and exercising until tendon pain. A study compared primary surgery with an EE program on a decline board, and at 12 months there was significant improvement in both groups without any significant differences [103].

[Fig. 3 caption: Minimally invasive percutaneous stripping for chronic Achilles tendinopathy. The 4 small incisions are visible, with the surgical instruments passing through.]

Visnes et al. [104] compared a decline board program with normal training in elite volleyball players during the playing season and found no significant difference at 6 weeks and 6 months.
An attractive management option is platelet-rich plasma (PRP) injection [105][106][107], which has shown some good outcomes [107,108], but there are no level 1 or 2 studies, and no standards for dosage, injection technique, timing, or number of injections have been validated. Regarding physical therapy, extracorporeal shock wave therapy (ESWT), generating high strains in the tendon, may produce analgesic benefits through stimulation of tissue healing [109,110]. There is no consensus on the method of application, generation, energy level, number and frequency of treatments, or the use of anaesthesia [111,112].

Surgical management

Approximately 10% of patients are refractory to conservative treatment, and in these patients surgical treatment is indicated [113]. There is no consensus on the ideal surgical technique, including whether open techniques are preferable to arthroscopic methods [114][115][116][117][118]. The use of arthroscopy is another possibility, and some surgeons have reported their experience with debridement of the patellar tendon alone [119], while others have described treating both the tendon and bone [120]. Arthroscopic management may be used to debride the adipose tissue of Hoffa's body on the posterior aspect of the patellar tendon, to remove the area of neovascularity, to debride the abnormal portion of the patellar tendon, and to excise the lower pole of the patella. The surgical approach starts with examination of the knee to exclude coexistent lesions: hypertrophy of the Hoffa fat pad and mucous ligament can often be present, and moderate to severe synovial hypertrophy can be present around the lower pole of the patella [121][122][123]. The removal of these tissues also allows visualisation of the articular side of the tendon, its insertion into the patella, and the lower pole of the patella. The amount of abnormal patellar tendon is estimated using preoperative MRI and US and used as a guide before surgical debridement. Debridement of the abnormal tendon tissue is carried out using an arthroscopic shaver until healthy-appearing tendon is visualised. Plain radiographs and MRI are used to guide the amount of patella excised, particularly where an inferior spur is present. The inferior pole of the patella is carefully prepared using the radiofrequency probe, and excision of the lower pole of the patella is then performed. Arthroscopic surgery for patients with PT refractory to non-operative management appears to provide significant improvements in symptoms and function [124], with improvements in the International Knee Documentation Committee (IKDC) [125], Lysholm knee score [125], and Victorian Institute of Sport Assessment (VISA-P) scores [126] maintained at 3 years' follow-up [127]. Recent studies show that partial resection of the distal pole of the patella achieved 90% (18/20) good to excellent results [120], while arthroscopic removal of hypertrophic synovium and fat pad without resection of the patellar tendon showed a 76.7% (23/30) return-to-play rate and 90% good or excellent outcomes [128]. Unfortunately, the lack of prospective randomised controlled trials limits the significance of the related studies [124]. Open surgical techniques include opening of the peritendon, removal or drilling of the patellar pole, multiple longitudinal tenotomies, and excision of the tendinopathic area [129,130] (Fig. 4).
These are not technically demanding, are reasonably fast to perform and inexpensive, and provide a high rate of good and excellent outcomes in the long term in patients unresponsive to non-operative treatment [131]. Using a midline longitudinal incision, and after excision of the paratenon, the tendon is exposed and separated from Hoffa's body by blunt dissection. The tendon is palpated to locate any tendinopathic lesions, which usually present as an area of intratendinous thickening. Three longitudinal tenotomies from the lower patellar pole to the tibial tubercle are made, and the tendinopathic areas are excised. The tendon and paratenon are not repaired. A wool and crepe bandage is applied and kept in place for 2 weeks. The excised tissues can be fixed in 10% buffered formalin and sent for histological analysis [132,133].

Maffulli et al. [131] evaluated the return to sport activity using the open technique in two groups of patients, one with unilateral and the other with bilateral tendinopathy. At the final follow-up, in both groups, the VISA-P scores [126] were significantly improved compared to preoperative values, with no intergroup differences, concluding that this procedure provides a high rate of good and excellent outcomes in the long term. A recent systematic review reported an average success rate of 87% for open treatment and of 91% for arthroscopic surgery, with an average rate of return to sport of 78% after open surgery and 82% after arthroscopic surgery. The average time to return to sport was shorter in patients treated arthroscopically than with open surgery (3.9 vs. 8.3 months, respectively) [134]. Moreover, although good to excellent results have been reported after surgical treatment, in about 10% of patients surgery is unsuccessful [135].

Fig. 4 Open surgery for patellar tendinopathy. Excision of the tendinopathic area.

Refractory PT after surgical treatment involves a small number of patients. Nevertheless, it is serious and debilitating, particularly in young athletes. We consider a patient a failure of surgical treatment if they failed to return to sport and are still experiencing pain at least 1 year after the procedure [135]. Regardless of whether the first procedure was open or arthroscopic, we use a formal open approach. If the procedure had been performed in an open fashion, surgery is performed through the old incision, with the knee flexed to 90 degrees. The paratenon is opened longitudinally and the patellar tendon is exposed; then, after identification of the tendinopathic areas, three longitudinal tenotomies are made (Fig. 5). A wool and crepe bandage is applied and kept in place for 2 weeks. Immediate postoperative mobilisation is recommended with crutches, weight-bearing is allowed as tolerated, and isometric exercises of the quadriceps muscles are encouraged as soon as patients can tolerate them. Patients are reviewed at 2 weeks from surgery, when active mobilisation is encouraged. At 6 weeks, if full active and passive motion has been regained, patients are prompted to start concentric exercises [135].

Conclusions
The management of tendinopathy remains a major challenge. Advances in operative management are being made and are underpinned by a greater understanding of the pathologic changes of overuse tendon injuries within sport. The lesion is a failed healing response of the tendon, with differences dependent on the site of the lesion.
Initially, a non-operative regimen consisting of physical therapy with eccentric exercises is the mainstay of patellar tendinopathy treatment, although evidence-based guidelines regarding their use are inconclusive. Good outcomes have been obtained in refractory cases of both Achilles tendinopathy and patellar tendinopathy following surgery. However, further controlled studies are needed to evaluate and improve novel treatment approaches.

Authors' contributions
RA and AO wrote the first draft. RA and NM undertook the literature search. AO and NM contributed to the interpretation of the data included in the cited references. Each author contributed to the article search and to drafting and revising the manuscript. All authors read and approved the final version of the manuscript.

Funding
The authors declare that they did not receive any funding.
Broadband frequency translation through time refraction in an epsilon-near-zero material

Space-time duality in paraxial optical wave propagation implies the existence of intriguing effects when light interacts with a material exhibiting two refractive indexes separated by a boundary in time. The direct consequence of such a time-refraction effect is a change in the frequency of light while leaving the wavevector unchanged. Here, we experimentally show that the effect of time refraction is significantly enhanced in an epsilon-near-zero (ENZ) medium as a consequence of the optically induced unity-order refractive index change on a sub-picosecond time scale. Specifically, we demonstrate a broadband and controllable shift (up to 14.9 THz) in the frequency of a light beam using a time-varying subwavelength-thick indium tin oxide (ITO) film in its ENZ spectral range. Our findings hint at the possibility of designing (3 + 1)D metamaterials by incorporating time-varying bulk ENZ materials, and they present a unique playground for investigating various novel effects in the time domain.

Maxwell's equations describe how an electromagnetic wave is modified by a material. The spatial boundary condition associated with Maxwell's equations can be used to derive the well-known Fresnel equations and Snell's law. A spatial variation in refractive index leads to reflection and refraction of a light beam incident on the boundary. As a consequence, the wavevector of the transmitted light changes, whereas the frequency is conserved. The spatial boundary can be abrupt (non-adiabatic) in refractive index variation, such as at a glass-air interface, or it can be smoothly varying, i.e., adiabatic in space, such as in a gradient-index lens. In both cases, the refracted beam of light must have a different k-vector (Fig. 1a), where |k| = 2πn/λ, n is the refractive index of the medium, and λ is the vacuum wavelength of light.

As the equations describing paraxial wave propagation are unchanged upon the interchange of time and a spatial coordinate, one can define a boundary of refractive index in the time coordinate in a dual fashion to that in the spatial coordinates [1-5]. This effect is known as time refraction, and the concept is presented in Fig. 1a. Let us assume that an optical pulse of frequency f1 is traveling in a dispersionless medium with a refractive index of n1. At t = t1 the refractive index changes from n1 to n2. As a consequence of the broken time-translation symmetry, the frequency of light has to change because of the change in the refractive index, while the wavevector remains unchanged [6]. According to the dispersion relation c/f = nλ (with λ the wavelength in the medium) [7], conservation of the wavevector k = 2πnf/c across the temporal boundary gives n1 f1 = n2 f2 = (n1 + Δn)(f1 + Δf), where Δf = f2 − f1 is the change in the frequency of light after it encounters the temporal boundary, Δn = n2 − n1 is the change in the refractive index, and c is the speed of light in vacuum. Consequently, we can express the change in frequency as Δf = −Δn·f1/(n1 + Δn). Thus, the frequency shift may be red (blue) if the change in index Δn is positive (negative). This effect is strongest when Δn/(n1 + Δn) is large. In a regular dielectric medium such as silicon [8], Δn/(n1 + Δn) can only be on the order of 10⁻³. In contrast, in a highly nonlinear low-index medium, Δn/(n1 + Δn) can approach unity due to the near-zero linear refractive index n1 and the large nonlinear index change Δn [9-11].
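To put rough numbers on this comparison, the short Python snippet below evaluates the idealised relation Δf = −Δn·f1/(n1 + Δn) for a silicon-like dielectric and for an ENZ medium at a 1235 nm carrier. The particular values of n1 and Δn are illustrative assumptions rather than measured parameters, and the relation itself presumes the entire pulse sits inside a dispersionless medium while the index changes, so the printed numbers are upper bounds, not predictions of the experiment.

```python
c = 299_792_458.0  # speed of light (m/s)

def time_refraction_shift(f1_thz, n1, dn):
    """Idealised time-refraction shift (THz) from n1*f1 = (n1 + dn)*f2."""
    return -dn * f1_thz / (n1 + dn)

f1 = c / 1235e-9 / 1e12  # carrier at 1235 nm -> about 242.7 THz

# Silicon-like dielectric: n1 ~ 3.5 with dn ~ 1e-3 (order-of-magnitude values)
print(f"dielectric: {time_refraction_shift(f1, 3.5, 1e-3):+.3f} THz")  # ~ -0.069 THz

# ENZ medium: near-zero n1 with a unity-order dn (assumed values)
print(f"ENZ:        {time_refraction_shift(f1, 0.4, 0.3):+.1f} THz")   # ~ -104 THz
```

The roughly three-orders-of-magnitude gap between the two printed shifts is the essential point; the shifts actually measured below are smaller because the 120 fs pulses only partially overlap the 620-nm-thick film during the index transient.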
Thus, a highly nonlinear low-index medium is a natural platform with which to generate a large frequency translation using time refraction. In addition to frequency conversion, a time-varying medium with a large index change can also be used to investigate many novel effects in the time domain, such as all-optical nonreciprocity [12,13], negative refraction [14], photonic topological insulators [15], photonic time crystals [16], achromatic optical switches [17], and the dynamic Casimir effect [18].

Here, we show that we can simultaneously overcome all of the above-mentioned shortcomings by using a homogeneous and isotropic epsilon-near-zero (ENZ) medium of subwavelength thickness. Using a series of pump-probe measurements, we demonstrate optically controlled total frequency translations of a near-infrared beam of up to 14.9 THz (a redshift of 11.1 THz and a blueshift of 3.8 THz), that is, over 6% of the carrier frequency, using a 620-nm-thick ITO film. The frequency translation is broadband in nature, i.e., the central wavelength of the degenerate input pump and probe pulses can be tuned over a 500 nm range. We also find that the effect is maximal near the zero-permittivity wavelength of ITO.

Results
Nonlinear optical response of the ENZ material. An ENZ material is defined as a medium that has a near-zero linear permittivity and, consequently, a low linear refractive index. The near-zero permittivity in such a medium leads to highly nonintuitive linear effects [41-43] and strong nonlinear light-matter interactions [9,44-47]. In order to implement a temporal boundary with a large index change, we make use of the large and ultrafast optically induced change in the refractive index of a 620-nm-thick ITO film in its near-zero-permittivity spectral range. ITO is a degenerately doped semiconductor, and near its zero-permittivity wavelength (1240 nm) the linear permittivity of the ITO sample can be well described by the Drude model (Fig. 1b). The temporal nonlinear optical response of ITO can be described by the two-temperature model when it is excited by an optical pulse with a central wavelength close to the ENZ region [9,44]. The optical excitation of ITO near the ENZ region leads to a strong modification of the Fermi-Dirac distribution of the conduction-band electrons. The highly nonequilibrium distribution of electrons, within the formalism of the Drude model, leads to an effective redshift of the plasma frequency owing to the momentum-dependent effective mass of the electrons. According to the two-temperature model, the rise time of the change in the refractive index is limited by the thermalization time of the conduction-band electrons through electron-electron scattering. The rise time also depends on the energy deposition rate in the ITO film and thus has a strong dependence on the temporal envelope of the pump pulse. Once the peak of the pump pulse leaves the ITO film, the index returns to its initial value within a sub-picosecond time scale through electron-phonon coupling (Fig. 1c). Owing to the time-dependent nature of the index change induced by the intensity of the pump pulse, the frequency of the probe pulse can be redshifted or blueshifted depending on the pump-probe delay time (see Fig. 1d).

Measurements at the near-zero-permittivity wavelength.
In order to measure the magnitude of the frequency translation using ITO, we performed a set of degenerate pump-probe experiments with ~120 fs pulses and recorded the spectra of the probe beam as a function of the delay between the pump and the probe for varying pump intensities. The ITO film has two 1.1-mm-thick glass slabs on both sides. Both pump and probe beams are p-polarized, and the intensity of the probe beam is kept low to avoid nonlinear effects (see Methods and Supplementary Note 1 for more details). The results for λ0 = λpump = λprobe = 1235 nm at pump-probe delay times of ±60 fs are shown in Fig. 2. The pump induces a nonlinear change in the refractive index of ITO at a rate that depends on the pump intensity, the temporal envelope of the pump, and the intrinsic nonlinear dynamics of the ITO. When the pump pulse is delayed with respect to the probe, i.e., for a pump-probe delay time td < 0, the probe experiences a rising refractive index and thus its spectrum redshifts (Fig. 2a). If the probe reaches the ITO after the peak of the pump pulse has passed (td > 0), it experiences a falling refractive index and the spectrum of the probe blueshifts (Fig. 2b). We also note that for td ≈ 0 both a blueshift and a redshift can occur (Fig. 1d). As the thickness of the ITO film is only 620 nm, the 120 fs pump and probe pulses never reside entirely within the ITO thin film (Fig. 1d). Thus, the magnitude of the frequency shift of the probe pulse depends on the index change rate Δn/Δt that it experiences while transiting through the ITO film. We extract the effective values of the index change rate from the experimental data through numerical simulations (see Supplementary Note 2). In the numerical simulations we use the slowly varying envelope approximation and, as a result, the predictions of our model depend only on the envelope-averaged dynamics of the ITO.

We find that both the pump intensity and the value of the pump-probe delay time modify the spectra of the transmitted probe. We present the results for λ0 = 1235 nm at three pump intensities in Fig. 3a-c. In general, time refraction leads to modification of the amplitude, bandwidth, temporal width, and carrier frequency of the probe pulse (see Supplementary Note 3). In order to focus on the spectral shift, the magnitude of the spectrum for each pump-probe delay value is individually normalized in Fig. 3. We find that when the absolute value of the pump-probe delay time |td| is increased, the magnitude of the frequency translation of the probe decreases. Furthermore, when pump-probe delays are small, the leading portion of the probe pulse experiences an increase in refractive index (and thus redshifts), whereas the trailing portion experiences a decrease in refractive index (and thus blueshifts). This is evident in Fig. 3a-c from the presence of two peaks at td ≈ 0. For a fixed pump-probe delay time, an increase in pump intensity leads to a larger change in index and, as a result, a larger shift in the central frequency of the probe pulse. Furthermore, we find that the fall time of the index change is slower than its rise time. The fall time is longer because, within the formalism of the two-temperature model, it is dictated by the intrinsic electron-phonon coupling rate, the maximum temperature of the conduction-band electrons, and the thermodynamic properties of the lattice.
As a result, the rate of decrease in index after the pump leaves the ITO film is smaller than that of the rising edge, and therefore the magnitude of the achievable redshift for a constant pump intensity is larger than the achievable blueshift. At a sufficiently high pump intensity, we observe the appearance of a large blueshifted spectral peak when the pump is at 1235 nm, owing to higher-order nonlinear optical effects. At a peak pump intensity of 483 GW cm⁻², the blueshift can be as large as 10.6 THz (~52 nm in wavelength), and the total maximum frequency translation can be larger than 20 THz (see Supplementary Note 4). This value corresponds to a fractional frequency shift (Δf/f0) of ~9%.

We model the time-refraction effect in ITO using the nonlinear Schrödinger equation, which we solve numerically with the split-step Fourier method [48]. We use an iterative algorithm to calculate the approximate shape of the time-varying nonlinear phase induced by the index change so as to fit the experimentally measured spectra (see Supplementary Note 2). The simulation results are shown in Fig. 3d-f. Our numerical model is in excellent agreement with the experimental data, confirming that the origin of the shift is the rapid change of index experienced by the probe pulse while transiting through the ITO sample.

Measurements over a broad spectral range. Next, we investigate the dynamics away from the zero-permittivity wavelength. We repeat the measurements at excitation wavelengths from λ0 = 1000 nm to 1500 nm. For each excitation wavelength and pump intensity, we extract the maximum frequency translation of the probe over a range of pump-probe delay times (see Supplementary Note 5). We summarize the wavelength- and intensity-dependent maximum frequency translations in Fig. 4a-e. Here, we limit the pump intensities to avoid the occurrence of significant higher-order nonlinearities. Our results reveal a number of trends. First, both the total achievable frequency translation (redshift plus blueshift) and the maximum achievable redshift for a constant pump intensity are highest near 1235 nm, where Re(ε) ≈ 0. For example, at λ0 = 1495 nm the measured maximum magnitude of the redshift (5.4 THz) is a factor of two smaller than what can be achieved at λ0 = 1235 nm using a lower pump intensity. Nevertheless, we find that the total maximum fractional frequency translation (Δf/f0) at near-zero permittivity is unprecedentedly large (Fig. 4f). The maximum total frequency translation of 14.9 THz at λ0 = 1235 nm (a redshift of 11.1 THz plus a blueshift of 3.8 THz) is over 53 times larger than what was achieved using a silicon ring resonator of 6 μm diameter exhibiting a Q-factor greater than 18,000 [20]. In contrast, the propagation distance in our material is only 620 nm, which is 30 times shorter in physical length and four orders of magnitude smaller than the effective interaction length in a high-Q cavity. Moreover, our results show that the operation bandwidth of ITO is much larger than what can be achieved using high-Q resonant structures.

Discussion
As the refractive index of the ENZ material depends on the intensity of the pump, the work presented here may be formally described by cross-phase modulation with a delayed response [8].
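To make this picture concrete, here is a minimal, hedged sketch of the split-step Fourier approach mentioned above, applied to a probe envelope transiting a thin slab whose index varies in time; the slab imprints a time-dependent phase on the pulse, which is precisely the cross-phase-modulation description just given. The assumed index transient dn(t), including its rise/fall times and amplitude, is an illustrative placeholder, not the iteratively fitted profile used for Fig. 3d-f.

```python
import numpy as np

# Time grid (s) and conjugate baseband frequency grid (Hz)
N = 2**12
t = np.linspace(-1e-12, 1e-12, N, endpoint=False)
dt = t[1] - t[0]
f = np.fft.fftfreq(N, dt)

# 120 fs (intensity FWHM) Gaussian probe envelope, centred at t = 0
tau = 120e-15 / (2.0 * np.sqrt(np.log(2.0)))
E = np.exp(-t**2 / (2.0 * tau**2)).astype(complex)

def dn(t, t_peak=60e-15, amp=0.7, t_rise=100e-15, t_fall=300e-15):
    """Assumed envelope-averaged index change: Gaussian rise up to the pump
    peak, slower exponential relaxation afterwards (all values illustrative).
    With t_peak > 0 the pump peak arrives after the probe centre, so the
    probe rides the rising edge of the index change."""
    s = t - t_peak
    return amp * np.where(s < 0.0,
                          np.exp(-s**2 / (2.0 * t_rise**2)),
                          np.exp(-s / t_fall))

k0 = 2.0 * np.pi / 1235e-9   # vacuum wavenumber at the ENZ wavelength (1/m)
L = 620e-9                   # film thickness (m)
steps = 20
dz = L / steps
beta2 = 0.0                  # dispersion is negligible over 620 nm; the linear
                             # step is kept only to show the split-step structure

# Carrier convention exp[i(w0*t - k*z)]: a rising index adds phase -k0*dn*dz,
# so a negative baseband centroid below corresponds to a redshifted probe.
for _ in range(steps):
    E = E * np.exp(-1j * k0 * dn(t) * dz)                 # nonlinear step (time domain)
    E = np.fft.ifft(np.fft.fft(E) *
                    np.exp(0.5j * beta2 * (2 * np.pi * f)**2 * dz))  # linear step

S = np.abs(np.fft.fft(E))**2
print(f"probe centroid shift: {np.sum(f * S) / np.sum(S) / 1e12:+.2f} THz")
```

Moving the assumed pump peak (t_peak) from after to before the probe centre flips the printed shift from red to blue, qualitatively reproducing the delay dependence of Fig. 3; the quantitative fits in the paper instead infer the phase profile iteratively from the measured spectra.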
However, the concept of time refraction is independent of the source of the index change (e.g., a thermally, optomechanically, or electrically induced index change) and is a more general effect than the simple cross-phase modulation that arises when the temporal boundary is specifically induced by an optical pulse. Furthermore, in contrast to a typical four-wave-mixing-based frequency conversion, the frequency shift obtained through time refraction does not depend on the frequency difference between the pump and the probe and is completely free from phase-mismatching. Although in this work the pump and the probe are frequency degenerate and produced from the same source using a beam splitter, it is not necessary for the beams to be frequency degenerate. Nevertheless, the maximum frequency shift with minimum energy expenditure is achieved when both the pump and the probe lie within the ENZ spectral range: as the maximum index change happens at the zero-permittivity wavelength, the probe undergoes the maximum frequency shift if its wavelength is at or near the zero-permittivity wavelength, whereas the energy expenditure is minimal if the wavelength of the pump is also at or near the zero-permittivity wavelength.

In conclusion, we have shown that a subwavelength-thick ITO film can be used to obtain an unprecedentedly large (~6.5% of the carrier frequency), broadband, and tunable frequency translation. The large time-refraction effect in the ENZ material raises the intriguing possibility of wavelength conversion over an octave using a time-varying ENZ medium. The magnitude of the frequency translation is primarily limited by the linear loss, higher-order nonlinear optical effects, dispersion, and the interplay between the pulse width and the interaction time. We note that the ENZ spectral region of ITO and other conducting oxides can be tuned to any wavelength between 1 µm and 3 µm by choosing the appropriate doping level [49,50]. Furthermore, because the effect is present in a bulk, homogeneous and isotropic material, one can engineer nanostructures incorporating ENZ media, such as plasmonic waveguides, photonic crystal waveguides, and dynamic metasurfaces, to arbitrarily control the sign and the magnitude of the frequency shift in order to build efficient octave-spanning frequency tuners while simultaneously lowering the required pump power by a few orders of magnitude [9]. For example, an appropriately engineered ITO-based platform could be used to shift an entire band of optical signals in the frequency domain. Such devices may find practical use in quantum communication protocols requiring conversion of visible photons to infrared [51] and in classical coherent optical communications [52,53]. We anticipate that the large time-refraction effect we report here can be exploited to engineer magnet-free nonreciprocal devices [54,55] and spatiotemporal metasurfaces [13], and to investigate photonic time crystals and other topological effects in the time domain [16,56] using free-space or on-chip ENZ-based structures.

Methods
Measurements. We use a tunable optical parametric amplifier (OPA) pumped by an amplified Ti:sapphire laser producing ~120 fs pulses for the experiments. The output of the OPA is split into two beams by a pellicle beam splitter to produce the degenerate pump and probe beams. Both beams are rendered p-polarized. The pump beam is focused onto the sample by a 25 cm lens, yielding a spot size of ~100 µm. The probe beam is focused by a 10 cm lens, and its spot diameter is ~45 µm at 1235 nm.
Although the spot size can change when the wavelength of the OPA output is adjusted, we always keep the probe beam spot size significantly smaller than that of the pump beam, so that the probe beam experiences a nearly uniform change in refractive index in the transverse dimensions. The angles of incidence are 15° and 10° for the pump and probe, respectively. The transmitted probe light is coupled to an optical spectrum analyzer via a multimode fiber with a 50 µm core diameter. The commercially available ITO thin film (PGO GmbH) has a thickness of 310 nm and is deposited on a 1.1-mm-thick glass substrate. We sandwich two such ITO films to make the 620-nm-thick ITO sample, using a customized sample holder with adjustable tightening screws (see Supplementary Note 6). We use a translation stage to control the delay time between the pump and the probe beams. The experimental setup is presented in Supplementary Note 1.

Data availability
All data supporting this study are available from the corresponding author upon request.

Code availability
All relevant computer codes supporting this study are available from the corresponding author upon request.
Microwave-assisted rapid synthesis of anatase TiO2 nanosized particles in an ionic liquid-water system

Nanosized anatase TiO2 crystals were rapidly prepared in a mixture of an ionic liquid (1-butyl-3-methylimidazolium chloride) and water under microwave irradiation for 5 min. The crystal size along the c axis was varied in the range from ~5 to ~17 nm by prolongation of the irradiation time. The high susceptibility of the ionic liquid to microwaves and a catalytic property of the imidazolium cation are effective for the rapid synthesis of the highly crystalline nanosized particles.

1. Introduction
Anatase-type TiO2 particles have been widely used in the fields of transparent protective films, photocatalysts, oxide semiconductors, and UV-protection coatings.1)-6) Control of crystallinity, size, morphology, and doped elements has been studied to increase the capability of anatase particles. A smaller size and higher crystallinity generally provide greater activity for various photocatalytic reactions.7) A wide variety of preparation methods have been reported for the production of anatase particles.8)-13) The sol-gel route is frequently employed to prepare this semiconducting oxide because of its simplicity. The conventional sol-gel process usually involves uncontrollably fast hydrolysis and condensation, which result in the formation of amorphous titanium compounds. A subsequent thermal treatment above 500°C is commonly required to obtain highly crystalline anatase.14) However, a decrease in the specific surface area and a collapse of the micro- and mesoporous structure are induced by the high-temperature treatment.15)-17) Crystalline anatase particles are directly produced in solution by using hydrothermal synthesis below 150°C.18),19) However, this process involves a high-pressure condition that requires autoclaves for the fabrication. The preparation of crystalline TiO2 has been achieved at low temperatures below 100°C using highly acidic aqueous solution systems.20),21) In these cases, the crystallization required a reaction period longer than 24 h. Dufour et al.22) synthesized crystalline TiO2 using a microwave (MW)-assisted hydrothermal process. Fine crystalline particles 10-20 nm in diameter were produced by irradiation for 2 h. The MW irradiation promoted the formation of nanosized crystals through rapid nucleation.

Recently, ionic liquids (ILs) have attracted much attention because of their unique properties, such as high thermal stability, low volatility, and microwave absorbability. Because of their thermal stability and high polarity, several ILs are utilized as media for the production of metal oxide particles.23),24) Nakashima et al.23) reported that TiO2 microspheres with controlled morphologies were obtained in 1-butyl-3-methylimidazolium hexafluorophosphate. However, this process requires a subsequent calcination at 500°C to obtain the crystalline phase. Kunlun et al.25) synthesized TiO2 nanoparticles in 1-butyl-3-methylimidazolium tetrafluoroborate using a MW-assisted process. Although well-defined crystalline TiO2 particles were directly obtained in the IL, a high temperature over 230°C is probably essential for the crystallization. Alammar et al.26) reported the synthesis of TiO2 particles in various ILs using sonochemical, MW-assisted, and ionothermal methods. A mixture of spherical anatase and brookite was obtained by the MW-assisted method, while pure anatase nanorods were grown by the ionothermal synthesis. Liu et al.
27) investigated the effect of MW irradiation on the crystallization of sol-gel-derived TiO2 in a mixture of 1-butyl-3-methylimidazolium tetrafluoroborate (BMIMBF4) and water. Crystalline TiO2 was obtained in the mixture by MW irradiation for 20-30 min. According to these previous works, the combination of ionic-liquid-based media and MW irradiation is effective for the synthesis of crystalline TiO2 nanoparticles. However, the detailed conditions for the rapid formation of nanosized TiO2 crystals in ionic-liquid-based media have not been clearly described. In the current study, we achieved a rapid synthesis of crystalline anatase nanoparticles by exploiting the high MW susceptibility of an IL. Nanosized anatase particles were formed at 115°C under ambient pressure by 5-min MW irradiation. We studied the effective ratio of the IL and water for crystallization and the size control of the TiO2 particles in the nanometer region. This system would be an effective fabrication route for the production of nanosized metal oxides.

2. Experimental procedure
An excess amount of acetic acid was used to suppress the rapid hydration of titanium tetraisopropoxide (TTIP). After being stirred for 30 min at room temperature, the mixture was added to 20 g of a mixture of 1-butyl-3-methylimidazolium chloride (BMIMCL) and purified water and then stirred for 60 min. We varied the mass ratio of BMIMCL and water (R) from 0/1 (pure water) to 2/1 in the reaction medium. The suspension was then heated by MW (2.45 GHz, 126 W) from a MW irradiation device (IDX Green Motif Ib) with reflux for 30 min. The same reaction was performed with conventional heating in an oil bath as a reference. Phase identification was performed with X-ray diffraction (XRD) patterns recorded using a Rigaku Ultima IV diffractometer. We also evaluated the crystallinity of the products by Raman spectroscopy with a 532.1 nm laser using a Horiba XploRA, because the obtained crystallites were very small for the XRD technique. The size and morphology of the samples were observed with transmission electron microscopes (TEM) (Hitachi H-7100 and FEI Tecnai F20) equipped with an energy-dispersive X-ray analysis (EDX) module (Oxford X-Max 80T). The specific surface area was measured by nitrogen adsorption isotherms using a Micromeritics ASAP 2020.

3. Results and discussion
3.1 Influence of the mass ratio of BMIMCL and water as a reaction medium
The temperature of the solution rapidly increased to its maximum value within 3 min under MW irradiation. The maximum temperature under reflux, depending on the mass ratio (R = IL/water), ranged between 100°C (R = 0/1) and 128°C (R = 2/1). The amount of precipitate in the solution increased with a 5-min irradiation. Figure 1 shows the XRD patterns and Raman spectra of the precipitates prepared with a 5-min MW irradiation at various R values. As shown in Fig. 1, the crystallinity of the precipitates was strongly influenced by the R value. Intense diffraction peaks assigned to anatase are clearly observed for the sample prepared at R = 1/2. For the same sample, we found the strongest Raman peak at 150 cm⁻¹, due to the ν6 (Eg) mode, which is the characteristic signal of anatase. The hydrolysis and subsequent condensation of the titanium alkoxide were not sufficiently induced when the amount of water was smaller than that at R = 1/2. When the amount of BMIMCL decreased below that at R = 1/2, the susceptibility of the medium to the MW irradiation would be too weak to promote crystallization of the titanium oxide network.
The rate of temperature increase under MW irradiation reflects this susceptibility, although its variation could not be quantitatively estimated in the present work; the susceptibility simply decreases with a decrease in the R value.

3.2 Influence of the MW irradiation time
We studied the influence of the MW irradiation time on crystallinity and particle size using the reaction medium at R = 1/2 and 0/1. Figure 2 shows the XRD patterns of TiO2 after MW irradiation for 5 and 30 min. Anatase was clearly formed at R = 1/2 after reaction for 5 min, whereas the signals assigned to the crystalline phase are weak for the sample prepared at R = 0/1 after the short-term reaction. The precipitate was almost amorphous after MW irradiation for 5 min in water (R = 0/1), whereas anatase was formed with a reaction time of 30 min. The yields after the 30-min MW irradiation at R = 1/2 and 0/1 were 82 and 87%, respectively.

Figures 3(a)-3(d) show typical TEM images of anatase particles formed by MW irradiation at R = 1/2. We observed the lattice of anatase in nanoscale spindles formed in the solution system. From the lattice image in Fig. 3(b), the major axis is deduced to correspond to the c axis. The sizes of the spindles estimated from the TEM images are shown in Figs. 3(e)-3(g). The length of the major axis increased from 5.5 to 16.6 nm as the irradiation time increased from 5 to 30 min. On the other hand, the length of the minor axis, ca. 5 nm, changed only slightly on prolonging the reaction. The specific surface area of the particles produced by the 30-min MW irradiation was 255 m²/g. The particle size estimated from the specific surface area was ca. 6 nm (a back-of-envelope check of this estimate is given just before the Conclusion). The imidazolium cation may suppress crystal growth through adsorption on specific faces. However, the presence of the organic molecule on the particles was not confirmed by FT-IR spectra. Further investigation is needed to clarify the growth mechanism of the crystalline TiO2 with the organic cations.

3.3 Oil bath heating
As a reference, we prepared TiO2 particles in the mixture of BMIMCL and water at R = 1/2 heated in a conventional oil bath. Figure 4 shows the XRD patterns and Raman spectra of the precipitates formed by oil-bath heating. The (101) diffraction peak and the characteristic Raman peak at 150 cm⁻¹ gradually increased as the reaction time increased from 30 min to 17 h. These results indicate that crystallization occurred slowly over 30 min, even at a high temperature, without MW irradiation, and thus that the MW irradiation effectively promoted the crystallization of TiO2. The MW irradiation vibrates the imidazolium ions effectively and enhances the sol-gel reaction and subsequent crystallization through ion collisions. The length of the major axis slowly increased from 10.7 to 25.3 nm as the reaction time was prolonged from 30 min to 24 h. On the other hand, the length of the minor axis was ca. 6.5 nm and changed only slightly after 30 min.
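As a quick consistency check of the surface-area-based size estimate mentioned in Section 3.2, the snippet below converts the measured specific surface area into an equivalent spherical diameter using d = 6/(ρS). The anatase density used is a literature value, and the monodisperse-sphere assumption is only approximate for spindle-shaped particles, so the agreement should be read as order-of-magnitude support rather than proof.

```python
# Equivalent spherical diameter from the BET specific surface area: d = 6 / (rho * S)
rho = 3.9e6   # density of anatase, g/m^3 (3.9 g/cm^3, literature value)
S = 255.0     # measured specific surface area, m^2/g
d = 6.0 / (rho * S)
print(f"d = {d * 1e9:.1f} nm")  # ~6.0 nm, matching the TEM minor-axis size of ca. 5-6 nm
```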
4. Conclusion
We studied the rapid formation of TiO2 nanocrystals in IL-based media under microwave irradiation. Anatase TiO2 nanocrystals were prepared in a mixture of 1-butyl-3-methylimidazolium chloride and water under microwave irradiation. When the ratio of the IL and water was properly adjusted, nanosized anatase was rapidly produced with a 5-min irradiation. The length of the spindles increased from ~5 to ~17 nm as the irradiation time increased to 30 min. The combination of microwave irradiation and ionic-liquid-based media would provide an effective route for the production of nanosized metal oxides.
DETERMINATION OF BIOGENIC AMINES BY USING AMPEROMETRIC BIOSENSORS BASED ON GRASS PEA AMINE OXIDASE AND OAT POLYAMINE OXIDASE

Grass pea amine oxidase (GPAO) and oat polyamine oxidase (OPAO) were immobilized, along with horseradish peroxidase (HRP) and an Os-redox polymer (Os-RP), onto the surface of a graphite electrode by cross-linking with poly(ethylene glycol) diglycidyl ether. The resulting reagentless amperometric biosensors were inserted in a flow injection setup and used as electrochemical detectors for biogenic amine (BA) detection. Both biosensors were operated at a low applied potential (-50 mV vs. Ag/AgCl, KCl 0.1 M), where electrochemical interferences are minimal. The quantification of ten BA (tyramine, putrescine, cadaverine, histamine, cystamine, phenylethylamine, agmatine, tryptamine, spermine, and spermidine), either individually or in a mixture (after a preliminary separation by cation exchange chromatography), is reported. The G/(Os-RP)-HRP-GPAO biosensor detected all ten BA, while the G/(Os-RP)-HRP-OPAO biosensor detected only spermine and spermidine. Finally, a simple and low-cost method for the determination of free and acetylated polyamines in human urine samples, using the highly selective G/(Os-RP)-HRP-OPAO biosensor, is proposed.

INTRODUCTION
Biogenic amines (BA) are organic bases present in a wide range of food products and living organisms, deriving mainly from microbial decarboxylation of amino acids or from amination and transamination of aldehydes and ketones [1]. Histamine (Hist), putrescine (Put), cadaverine (Cad), tyramine (Tyra), tryptamine (Trypt), phenylethylamine (PEA) and agmatine (Agm) are among the most important BA in food. Polyamines like spermine (Spm) and spermidine (Spd) are found only in small quantities in foods like legumes and meat, but they take part in the growth and development of cells [2,3]. However, it has been demonstrated that high amounts of Spm and Spd in blood and urine are reliable markers for cancer therapy monitoring [4].

Oxidation of BA can be catalyzed by different types of amine oxidases (AOs). AOs are widely spread in bacteria, fungi, higher plants and animals [5]. Copper-containing AOs catalyze the oxidative deamination of BA, generating the corresponding aldehyde, ammonia and hydrogen peroxide [6,7]. The reaction of flavin-containing polyamine oxidases (PAOs) results in the production of an aminoaldehyde (alternatively, typically in plants, they may form 1,3-propanediamine, 1,3-pn) and hydrogen peroxide, but not ammonia [7,8]. Plant PAOs show very restricted substrate specificity, oxidizing only Spm and Spd, including their N-acetyl derivatives [4,5].

The determination of BA by enzymatic analysis using AOs and PAOs has previously been studied. For example, the use of PAOs immobilized in reactors [9,10] or cross-linked with glutaraldehyde onto electrochemical biosensors [11], as well as electrochemical enzyme probes based on oxygen electrodes and AOs [12], has been reported in the literature. AO-based amperometric biosensors for biogenic amine detection were formerly developed, both in single-enzyme [13][14][15][16] and coupled-enzyme (with peroxidase) designs [16,[17][18][19][20]. The single-AO biosensors required high applied potentials (>200 mV vs. Ag/AgCl), which can lead to high background currents and interfering signals when complex matrices are analyzed. At the same time, the bienzyme electrodes, operated at lower applied potentials (-50 or 0 mV vs. Ag/AgCl), allowed a considerable reduction of the matrix interferences.
However, such devices have a major drawback: they are not able to discriminate between different BA, owing to the low selectivity of the detecting AO used. Various chromatographic techniques are frequently applied for the separation of biogenic amines. Among these, HPLC [21][22][23], thin layer chromatography [24] and electrophoresis [25] are the most frequently used. Thin layer chromatography is simple and inexpensive but requires extensive analysis time, and the obtained results are only semi-quantitative. Capillary electrophoresis has been a popular tool for BA separation but, since the BA could not be detected directly in a sensitive manner [25], HPLC became a better alternative. In a previous work [26], a new analytical system based on coupling a weak cation exchange column with an AO-based amperometric biosensor for the determination of biogenic amines, with application in food analysis, was described.

Here we report on the construction and assembly of a highly specific and sensitive amperometric biosensor for polyamines incorporating polyamine oxidase from oat seedlings (OPAO), denoted G/(Os-RP)-HRP-OPAO, and its potential application in biomedical analysis. At the same time, the extended use of a previously reported amperometric biosensor [26] based on amine oxidase from grass pea (GPAO), denoted G/(Os-RP)-HRP-GPAO, is discussed. A comparison between the two amperometric biosensors has been performed under similar experimental conditions. Thus, both OPAO and GPAO were cross-linked to horseradish peroxidase (HRP) and an Os-based redox polymer (Os-RP) by using poly(ethylene glycol) diglycidyl ether (PEGDGE) and were immobilized onto the surface of solid graphite (G). The detection of BA has been carried out amperometrically, by monitoring the H2O2 generated by the enzymatic reaction. At first, both biosensors, operated at a low applied potential (-50 mV vs. Ag/AgCl, KCl 0.1 M), were integrated in a single-line flow injection (FI) setup. Further, the amperometric detection was coupled with a cation-exchange column and the method was optimized for the separation and quantification of ten BA (Tyra, Put, Cad, Hist, cystamine (Cyst), PEA, Agm, Trypt, Spm and Spd) from a synthetic mixture. Finally, preliminary experiments were carried out using the G/(Os-RP)-HRP-OPAO biosensor to estimate the content of Spd and Spm in samples of human urine.

RESULTS AND DISCUSSION
A bienzymatic approach, based on GPAO or OPAO in combination with HRP, was considered for biosensor development. The biosensor design involved the immobilization of both enzymes (i.e., the oxidase and the peroxidase) on solid graphite, and the detection was carried out by mediated electron transfer using an osmium-based redox polymer (Os-RP). The detection principle of the resulting biosensors is schematically presented in Figure 1. Due to the presence of the Os-RP and to the low applied potential (-50 mV vs. Ag/AgCl, KCl 0.1 M), this approach simultaneously confers high sensitivity and excellent selectivity on the amperometric measurements. The G/(Os-RP)-HRP-GPAO biosensor responded to all tested amines (Figure 2A), whereas the G/(Os-RP)-HRP-OPAO biosensor could only detect Spm and Spd (Figure 2B), proving that GPAO is an enzyme with a much broader selectivity than OPAO. As expected, the calibration curves recorded for all tested amines follow a Michaelis-Menten pattern.
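For readers who want to reproduce this step, a minimal sketch of extracting the Michaelis-Menten parameters from a calibration curve by nonlinear least squares is given below. The concentration/current data are fabricated for illustration (roughly consistent with I_max ≈ 100 nA and K_M ≈ 100 µM), and the baseline noise level used for the detection limit is an assumed value.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, I_max, K_M):
    """Steady-state biosensor response: I = I_max * c / (K_M + c)."""
    return I_max * c / (K_M + c)

# Hypothetical calibration data: concentration (uM) vs. peak current (nA)
c = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0, 500.0, 1000.0])
I = np.array([4.8, 9.1, 19.5, 33.0, 49.8, 70.2, 80.1, 87.5])

popt, pcov = curve_fit(michaelis_menten, c, I, p0=[I.max(), np.median(c)])
I_max, K_M = popt
err = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties of the fitted parameters

print(f"I_max = {I_max:.1f} +/- {err[0]:.1f} nA")
print(f"K_M   = {K_M:.1f} +/- {err[1]:.1f} uM")

# Sensitivity = initial slope of the curve (valid for c << K_M)
print(f"sensitivity ~ I_max/K_M = {I_max / K_M:.3f} nA/uM")

# Detection limit for a signal/noise ratio of 3, with an assumed baseline
# noise sigma_b (nA), using the linear part of the curve
sigma_b = 0.2
print(f"DL ~ 3*sigma_b*K_M/I_max = {3 * sigma_b * K_M / I_max:.2f} uM")
```

Note how the initial slope I_max/K_M directly ties the highest sensitivity to the lowest K_M, which is the trend highlighted in the discussion that follows.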
The main kinetic parameters (K_M, the Michaelis-Menten constant, and I_max, the maximum current intensity) and analytical parameters (linear range, sensitivity, estimated as the slope of the linear domain, and detection limit) were calculated from the calibration curves shown in Figure 2 and are summarized in Table 1. It can be stated that, under the specified FI conditions, the sensitivities observed at the GPAO-based biosensor decrease in a well-defined sequence (Table 1). It is worth noticing that, for both investigated enzymes, the highest sensitivities correspond to the lowest Michaelis-Menten constants (Table 1). This fact suggests that, for the present biosensor design, a high substrate-enzyme affinity can be considered the main factor determining the biosensor sensitivity.

Next, the G/(Os-RP)-HRP-GPAO and G/(Os-RP)-HRP-OPAO biosensors were alternatively coupled to a single-line FI setup incorporating a cation-exchange column. A synthetic mixture containing all investigated BA was injected into the chromatographic column and the biosensor signal was recorded. In this way, the BA mixture was first separated into its components by cation-exchange chromatography and, subsequently, each resulting individual BA was detected by the biosensor used as a chromatographic electrochemical detector (Figure 3). When the G/(Os-RP)-HRP-GPAO biosensor was used as chromatographic detector, the chromatogram recorded for a mixture of ten BA (Tyra, Put, Cad, Hist, PEA, Cyst, Agm, Spd, Trypt and Spm) evidences a complete separation of the initial mixture (Figure 3A). Contrarily, the G/(Os-RP)-HRP-OPAO biosensor was able to detect only Spd and Spm (Figure 3B). The total time spent for the analysis was ~53 min. The sudden decrease of the baseline occurring around the 35th minute is due to an increase of the carrier concentration from 16 to 24 mM.

(Table 1 footnotes: *calculated from the calibration curves in Figures 2A and 2B; **R, correlation coefficient; N, number of experimental points; ***DL estimated for a signal/noise ratio equal to 3.)

The electrochemical detection of biogenic amines, carried out after their separation by high performance liquid chromatography, was found to be much more efficient than ultraviolet detection [27,28]. However, in those studies the separated amines were quantified by oxidation at unmodified electrodes poised at substantially positive potentials (+700 mV and +400 mV vs. Ag/AgCl, respectively). Aiming to show the efficiency of the developed biosensors, the amperometric responses observed for Spd at G/(Os-RP)-HRP-GPAO and G/(Os-RP)-HRP-OPAO were compared with those recorded at bare graphite electrodes, poised at two different applied potentials (Figure 4). As can clearly be seen from Figure 4, the bare graphite electrode, even poised at a relatively high positive potential (+550 mV vs. Ag/AgCl, KCl 0.1 M), is practically insensitive to the presence of Spd. Contrarily, both biosensors are highly sensitive to Spd, G/(Os-RP)-HRP-GPAO being the more efficient. Moreover, due to the low applied potential, the Spd detection at both biosensors is practically free of electrochemical interferences. It is worth mentioning that the response of the G/(Os-RP)-HRP-OPAO biosensor was also examined for acetylated Spm and Spd (AcSpm and AcSpd), since Spm and Spd are frequently found in these forms in real samples (Figure 5).
The biosensor was found to be sensitive to both acetylated polyamines (AcSpm: K_M = 136.5 ± 33.8 µM, I_max = 67.2 ± 5.9 nA; AcSpd: K_M = 120.7 ± 50.0 µM, I_max = 28.1 ± 3.3 nA), but the signals were ~10 and ~100 times smaller than those corresponding to the unacetylated amines, respectively.

As a proof of concept, taking into account that high amounts of Spm, Spd and their N-acetylated forms in human urine are markers for serious illness (e.g. cancer, osteoporosis, or hepatic cirrhosis) [9], some preliminary investigations using the G/(Os-RP)-HRP-OPAO biosensor were performed in order to estimate the polyamine content in real samples of human urine. Because polyamines exist in urine mainly in their conjugated form, an acid hydrolysis was carried out before the detection to ensure that all polyamines were present in their free form. Due to the low concentrations of Spm and Spd that normally exist in the urine of healthy persons, they were not detected in fresh urine samples by using the G/(Os-RP)-HRP-OPAO amperometric biosensor coupled to a cation exchange column. However, in contact with hydrolyzed samples of urine, the biosensor provided a signal ≈40 times higher than that corresponding to the non-hydrolyzed urine, indicating significant concentrations of acetylated polyamines (AcSpm and AcSpd). In order to facilitate the comparison, the estimated concentrations of Spd and Spm were expressed both in mg/L urine and in mg/g creatinine (Table 2). The obtained values were found to be similar to those reported in the literature [29], suggesting the suitability of the developed method for clinical assays.

Table 2. Spermine and spermidine concentrations in hydrolyzed human urine samples estimated by using the G/(Os-RP)-HRP-OPAO amperometric biosensor and FI measurements. For experimental conditions see Figure 5.

CONCLUSIONS
The chromatographic separation and electrochemical detection of ten BA (Tyra, Put, Cad, Hist, PEA, Cyst, Agm, Spd, Trypt and Spm) from a synthetic mixture were carried out in order to prove the full functionality of two amperometric biosensors, G/(Os-RP)-HRP-GPAO and G/(Os-RP)-HRP-OPAO, as electrochemical detectors for liquid chromatography. At the same time, the kinetic and analytical parameters of both biosensors were estimated from the calibration curves. Irrespective of the nature of the BA, the G/(Os-RP)-HRP-GPAO biosensor showed broad selectivity and a good linear response, with low detection limits (from 0.4 µM for Spd, Cad and Cyst, to 20 µM for Spm) and upper limits of quantification ranging from 100 µM (Hist, Cyst, Agm) to 10 mM (Spm). Interestingly, the G/(Os-RP)-HRP-OPAO biosensor was found to be much more selective, detecting only Spm and Spd. Finally, the G/(Os-RP)-HRP-OPAO biosensor was used to estimate the polyamine content in human urine after hydrolysis. Thus, a simple and low-cost method for BA detection in biological fluids, suitable for clinical analysis, was proposed.

EXPERIMENTAL SECTION
If not otherwise indicated, all solutions were prepared in purified water obtained from a Milli-Q system (Millipore, Bedford, MA, USA).

Equipment
A single-line flow-injection (FI) system, consisting of a manual injection valve (Valco Instruments Co.
Inc., Houston, TX, USA) with an injection loop of 100 µL, a peristaltic pump (Alitea AB, Stockholm, Sweden), a wall-jet electrochemical cell, a low-current potentiostat (Zäta-Elektronik, Höör, Sweden) and a single-channel chart recorder (Model BD 111, Kipp & Zonen, Delft, The Netherlands), was used to operate the amperometric biosensors. The "Peaksimple" software (SRI Instruments, Torrance, CA, USA) was employed for data acquisition. The tubing connecting the peristaltic pump to the flow-through electrochemical cell was made of Teflon (0.5 mm i.d.). The enzyme-modified graphite electrode was the working electrode. An Ag/AgCl, KCl 0.1 M electrode and a Pt wire were used as reference electrode and counter electrode, respectively. The biosensors, used as electrochemical detectors, were incorporated in the chromatographic system by coupling the effluent of the analytical column to the electrochemical cell, as described elsewhere [26]. In order to neutralize the acidic effluent before it came into contact with the amine oxidase-based biosensor, a post-column T-connection was used to mix the column eluate (0.9 mL min⁻¹) with a secondary flow containing phosphate buffer (0.9 mL min⁻¹).

Biosensor preparation
In the case of the G/(Os-RP)-HRP-GPAO electrode, 5 µL of a mixture containing 2.5 mg/mL GPAO, 2.5 mg/mL HRP, 1 mg/mL Os-RP, and 1 mg/mL PEGDGE were placed on top of a graphite electrode and left to dry for 1 day at room temperature. Similarly, in the case of the G/(Os-RP)-HRP-OPAO electrode, 5 µL of a mixture containing 2 mg/mL OPAO, 2 mg/mL HRP, 0.8 mg/mL Os-RP, and 0.8 mg/mL PEGDGE were deposited on top of a graphite electrode and left to dry for 1 day at room temperature.

Real samples preparation
The extraction of polyamines from urine was performed as previously described by Lipton et al. [33]: 1 mL of 10 M HCl was added to 1 mL of urine, pre-filtered through a 0.2 µm filter (Millipore), and the sample was then hydrolyzed for 15 h at 110°C. The hydrolyzed urine was evaporated to dryness on a Buchler rotary evaporator (Buchler Instruments Div., Searle Diagnostics Inc., Fort Lee, NJ) and reconstituted in 1 mL of water. The same hydrolysis procedure was carried out for 1 mL of AcSpm, AcSpd and buffer (as control experiments).
Feasibility of Endoscopic Inspection of Pedicle Wall Integrity in a Live Surgery Model

ABSTRACT
Background: Perforations of the pedicle wall during cannulation can occur even with experienced surgeons. Direct endoscopic visualization has not previously been used to inspect pedicles, because bone bleeding obscures the camera view. The hypothesis of this study was that endoscopic visualization of pedicle wall integrity is technically feasible and enables identification of clinically significant pedicle breaches.
Methods: A live porcine model was used. Eight lumbar pedicles were cannulated. Clinically significant breaches were created. An endoscope was introduced and used to inspect the pedicles.
Results: All lumbar pedicles were endoscopically visible at a systolic pressure of 100 mm Hg. Clinically relevant anatomic structures and iatrogenic pathology, such as medial, lateral, and anterior breaches, were identified. There were no untoward events resulting from endoscopic inspection of the pedicle endosteal canal.
Conclusions: Endoscopic inspection of lumbar pedicles was safe and effective. The findings on endoscopic inspection corresponded with those of the ball-tip probe palpation technique. Additional techniques, such as selection between 2 tracts, were possible with the endoscopic technique.

INTRODUCTION
Pedicle screws have revolutionized the treatment of spinal disorders. With screws, surgeons are able to immobilize and manipulate the spine in 3 dimensions. Pedicle screw instrumentation is the standard of care in the surgical management of degenerative scoliosis, degenerative spondylolisthesis, and trauma. To place pedicle screws, surgeons utilize handheld instruments to displace soft, cancellous bone within the pedicle whilst simultaneously preserving the external hard wall of cortical bone. The surgeon uses carefully defined anatomical landmarks and tactile feedback to create a screw tract through the cancellous bone of the pedicle. Once positioned, the screw is often checked for encroachment upon spinal nerves by measuring electrical conduction through triggered electromyography. Intact cortical walls create significant electrical impedance [1]; correctly placed pedicle screws will therefore require higher amperage to elicit electrical activity in adjacent spinal nerves [1]. Other techniques, such as fluoroscopy [2] or intraoperative computed tomography (CT) scans [3,4], are also used by some surgeons to guide or check screw placement.

Perforations of the outer cortical pedicle wall can occur [5] using manual freehand techniques [6], fluoroscopy [2,7], and even intraoperative CT [3]. Perforation rates as high as 30% have been reported in some studies [8]. Pedicle screw malposition can result in unanticipated readmission and reoperation [9-11], dural laceration [12], nerve injury [13-17], pedicle fractures [17,18], and vascular injury [19,20].

Direct visualization of the cannulated bony channel would provide valuable information to confirm an accurate trajectory of the tract, and documentation for the medical record confirming the absence of a cortical wall breach. Furthermore, if a cortical wall breach is observed endoscopically, a handheld ball-tip probe can be used to palpate the specific area of concern rather than relying entirely on proprioception and blind palpation. Endoscopy has revolutionized visibility of other hard-to-see locations within the body, such as the knee, shoulder, and abdomen.
Endoscopic visualization is superior to traditional open surgery, which relies on line-of-sight vision, in many cases. However, endoscopy has not been widely used in the placement of spinal instrumentation because of resident bleeding within the pedicle obscuring visualization, the small diameter in which visualization must occur, and the inability to exploit the unique property difference between a Newtonian fluid (water) and a non-Newtonian fluid (blood) [21-24]. Recently, an endoscopic instrument was developed to facilitate endoscopic inspection of the internal pedicle channel. The hypothesis of this study was that endoscopic visualization of pedicle wall integrity is technically feasible and will enable identification of clinically significant pedicle breaches.

METHODS
Overview
One skeletally mature female pig (approximately 82 kg) underwent posterior lumbar exposure and pedicle cannulation followed by endoscopic verification of pedicle wall integrity. The investigation was performed under an approved Institutional Animal Care and Use Committee protocol at an accredited facility. The investigators were 2 orthopedic surgeons with familiarity with intraosseous endoscopy and spine surgery.

Surgical Procedure
General anesthesia was induced and the animal was anesthetized according to the veterinarian's protocol. An endotracheal tube was attached to an anesthesia machine. Replacement fluids (0.9% NaCl) were administered via the intravenous catheter. An area on the abdomen was shaved to accommodate an electrode return patch. The animal was placed prone on the surgical table. The area around the lumbar spine was shaved, prepped, and draped in preparation for surgery. Normal systolic and diastolic blood pressure was maintained throughout the procedure (mean arterial pressure 70 mm Hg), monitored via an arterial line.

A midline, posterior open dissection was performed through the skin and subcutaneous tissues over the posterior lumbar spine at the level of the iliac crests. Subperiosteal exposure of the spinous processes, laminae, facets, and transverse processes was performed. Meticulous hemostasis was obtained. The pedicles were then cannulated in the usual fashion. The upslope of the transverse process onto the superior articular process was identified. The intersection of a horizontal line through the midpoint of the transverse process and the lateral border of the facet was taken as the starting point of the pedicle screws. The starting point was decorticated with a rongeur. The pedicle was entered with a curved, tapered gearshift with pronosupination in the usual fashion. When necessary, a radiograph was taken to confirm the trajectory and spinal level. The pedicles were cannulated to a depth of 35 mm, according to the depth markings on the gearshift. Eight holes were made in the spine, with the intent both to establish correctly placed pedicle canal tracts and to breach pedicle canals. Following cannulation, the inner aspect of the pedicles was palpated with a ball-tipped probe as carefully as possible. A consensus was reached between the investigators about whether the pedicle was intact or breached. If a breach was suspected, its direction was also recorded.

Endoscopic Pedicle Inspection
The endoscopic instrument's outer trocar was connected to normal saline irrigation in 3-L bags. The endoscopic instrument uses common endoscopy monitors available in most hospitals. No epinephrine was present in the normal saline bags.
The endoscopic instrument (with inner stylette and outer trochar together) was then placed into the pedicle tract. Saline was allowed to flow at gravity pressure. No specialized pumps or pressure bags were used. The inner stylette, which has a diameter of 3.2 mm, was removed. A 3.0-mm endoscope was then introduced into each pedicle. The pedicle wall was inspected with a 0-degree and a 30-degree endoscope. Breaches were deliberately made in the medial wall, anterior vertebral body, and lateral muscle tissue. The breaches were confirmed by palpation with a ball-tip probe. The endoscopic instrument was used to inspect the pedicle walls and confirm the breach locations.
RESULTS
All pedicles (8/8) were successfully visualized with the endoscopic instrument. The internal blood within the pedicles was cleared immediately once the endoscopic instrument was seated within the pedicle and irrigation was commenced. There were no pedicles (0/8) that could not be studied due to bleeding. Complete examination of the interior of each pedicle from posterior to anterior could be accomplished within 1 minute.
Comparison to Manual Palpation
Using both the 0-degree scope and the 30-degree scope, the medial, lateral, superior, and inferior walls and the floor were visualized on all pedicles (Figure 1, Supplemental Videos 1A and 1B; http://www.ijssurgery.com/lookup/suppl/doi:10.14444/5030/-/DC1/video1a.mp4; http://www.ijssurgery.com/lookup/suppl/doi:10.14444/5030/-/DC1/video1b.mp4). The visualized pedicle wall integrity corresponded in all cases (8/8 pedicles) to the investigators' assessment with the ball-tip probe. The 30-degree scope was used in 4 pedicles. The major difference during endoscopic visualization between a 0-degree scope and a 30-degree scope is that a more direct view of the endosteal surface is achieved with a 30-degree scope. This may be relevant when a more direct view of a breach is required when determining the significance of a breach.
Identification of Breaches
Breaches were deliberately made using the curved gearshift in the pedicles medially (n = 2), laterally (n = 2), and anteriorly (n = 2). Perforations of the pedicular walls were easily identified with the endoscope in all 6 cases. Medially, the exposed dura and epidural fat could be visualized.
Combination with Other Techniques
The endoscopic instrument was used in combination with a ball-tip probe to visualize the defect into which the ball-tip probe was subsiding.
Fluid Extravasation
There was no significant extravasation of fluid into the spinal canal. There was no thecal sac compression or nerve root compression visualized due to irrigation fluid. Once the irrigation was ceased, there was no retrograde flow of irrigation fluid from the extraosseous structures into the pedicles. There was no significant soft tissue swelling in breached cases. In 2 instances, irrigation was deliberately stopped. When irrigation was stopped, blood flow from the pedicle walls resumed immediately (Figure 8, Supplemental Videos 8A and 8B; http://www.ijssurgery.com/lookup/suppl/doi:10.14444/5030/-/DC1/video8a.mp4; http://www.ijssurgery.com/lookup/suppl/doi:10.14444/5030/-/DC1/video8b.mp4).
DISCUSSION
Safe placement of spinal instrumentation is of paramount importance in overall successful spinal surgery. These results indicate that low-pressure endoscopic inspection is technically feasible in a live animal model. Endoscopy provided valuable information about anatomical defects and guidewire placement in an efficient manner. There were no complications directly identified from the endoscopic technique.
There were no specialized anesthetic requirements from the endoscopic technique. Current techniques to verify pedicle screw placement are not universally accurate or effective. Manual pedicle palpation with a ball-tip probe has low accuracy 25,26 and the potential for inadvertent neurological injury with the probe. 27 Even stereotactic image guidance cannot entirely prevent pedicle perforations. 28 Electrical neuromonitoring adds cost and time to surgical procedures. In some circumstances, neuromonitoring can fail to identify a pedicle screw breach. 29,30 False-positive alerts can also occur with neuromonitoring, requiring additional operative time and steps. A recent systematic review concluded "There is no evidence to date that IOM [intraoperative neurological monitoring] can prevent injury to the nerve roots. Unfortunately, once a nerve root injury has taken place, changing the direction of the screw does not alter the outcome." 1 Other pressure- and electrical conduction-based techniques to verify pedicle accuracy, such as specialized piezoelectric piercer probes, can also lead to misplaced pedicle screws 31 and can result in false-negative errors. 32,33 Other emerging techniques, such as intraosseous ultrasound, [34][35][36][37][38][39] robotic guidance, or near-infrared spectroscopy, require substantial equipment and are not without false-negative errors. 40,41 To our knowledge, our study is the first to report successful endoscopic visualization of the internal aspect of a lumbar pedicle with low-flow irrigation. Endoscopy has been extensively used in other areas of surgery to minimize exposure. Current endoscopic spine techniques include foraminal discectomy or decompression techniques, but endoscopic instrumentation and fusion have not been widely performed due to technical challenges. Most previous descriptions of endoscopic pedicle screw instrumentation have focused on endoscopic soft tissue dissection, endoscopic identification of screw starting points, and endoscopic placement of rods. [21][22][23] In the aforementioned studies, there is no description of inserting the endoscope into the pedicles to inspect the wall integrity. [21][22][23] There are a few studies that describe visualization of the intraosseous anatomy of the lumbar pedicles with high-flow irrigation. 24,42,43 However, in contrast to the current technique, the authors of these studies describe high-pressure irrigation. Essentially, the endoscope is continually flushing away active bleeding in an open system. In our study, the unique design of the endoscopic instrument exploits the differing fluid properties between Newtonian and non-Newtonian fluids, thereby producing crystal-clear images so that relevant anatomy can be viewed and documented. Figure 8 illustrates laminar streaks of blood (a non-Newtonian fluid) flowing through the saline medium. Such laminar flow requires that the saline (a Newtonian fluid) within the pedicle is otherwise motionless. Thus the endoscopic instrument seals the pedicle channel, prevents fluid from escaping from the pedicle, and creates a closed system. If the saline were flowing in a high-pressure environment, it would disrupt the laminations and thus turn the image cloudy. The current endoscopic instrument, through the attached fluid column, equilibrates the systolic pressure with only a minimal volume of saline in an intact pedicle. Therefore, minimal or low-pressure irrigation is all that is necessary for successful instrumentation.
High-flow irrigation could create complications such as edema, neural element compression, or compartment syndrome. High-pressure lumbar irrigation has been demonstrated to increase cervical epidural pressure and possibly lead to intracranial hypertension. 44 In our study, clinically relevant breaches were identified. These breaches were confirmed by manual palpation under direct endoscopic visualization. In this manner, even the palpation with a ball-tip probe under direct visualization was more controlled and perhaps less likely to cause inadvertent neurological injury. There were no complications from the usage of the endoscopic technique. Furthermore, the time per pedicle was approximately 1 minute. Once the endoscopic instrument was seated within the pedicle, the entire intraosseous space was sealed. Therefore, only minimal pressure was needed to cease blood flow and, in fact, to create retrograde flow. The animal's blood pressure was normal. No abnormal hemodynamic conditions, such as severe hypotension to reduce bony bleeding, were necessary. Limitations of the current study include the use of a porcine model. However, porcine models have been used in spine surgery previously for feasibility studies. 41,[45][46][47][48][49][50][51][52][53] Additional studies by other, nonconflicted, investigators are needed. Other limitations include the small number of pedicles tested. In contrast to previous studies, threads were not tapped into the pedicles in this study 24 because many modern pedicle screw systems are now deemed "self-tapping." Therefore, the experimental conditions were designed to simulate the current surgical technique. Another limitation is that actual pedicle screws were not placed in this study. We acknowledge that there is potential that pedicles could be correctly cannulated but that errant screws could be placed due to misdirection after cannulation. However, the purpose of this study was to enhance the ability to identify a correctly cannulated tract. Despite the assistive techniques currently available, pedicle screw revision is one of the leading causes of reoperation after spine surgery, particularly in the hyperacute postoperative period. 10,11 The main advantage of using an endoscopic technique for lumbar pedicle trajectories is the direct visualization of pedicle wall integrity. This feature lends itself readily to photographic and video documentation and to the education of trainees. In a recent cadaveric study, a pedicle breach rate of 51% was reported with resident physicians attempting to cannulate thoracic pedicles. 54 The endoscopic instrument does not require any special setup, placement of needles or electrodes into the patient, alternative anesthetic techniques, monitoring equipment, special training, a change in technique, or capital expenditures. The endoscopic instrument allows the surgeon to visually identify cortical breaches before compression and neurological injury by screws occur. The endoscopic instrument does not employ any ionizing radiation. An additional advantage is that the endoscopic instrument continuously irrigates the pedicles with an antibiotic solution of the surgeon's choice, and may thereby reduce the risk of surgical site infection. Other indirect techniques, such as electrical stimulation, have been associated with false-positive and false-negative errors. Electrical stimulation, the most widely used method to check pedicle screw placement, also requires a specialized anesthetic technique involving the absence of chemical paralysis.
The endoscopic technique is versatile and similar to an open technique, which allows surgeons a smoother learning curve. Importantly, if a treating surgeon observes a breach, the added knowledge of the breach location allows real-time adjustments to the technique used to cannulate the pedicle, improving patient safety. The endoscopic technique involves no ionizing radiation. Further developments in endoscopic imaging and fusion techniques will help refine this procedure. Currently, the screw system does not permit full endoscopic placement of longitudinal connecting rods. Additionally, the technique is diagnostic only. The system does not currently incorporate endoscopic "drilling" features to cannulate a pedicle. Ultimately, we believe that endoscopically assisted posterolateral lumbar instrumentation will reduce perioperative complications, costs, and the risk of return to the operating room.
2018-10-14T17:56:41.851Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "673c377b988de4ee3883dde944dfb80f9bb1a6cd", "oa_license": null, "oa_url": "http://www.ijssurgery.com/content/ijss/12/2/241.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "09dfaa99255b22e3bcb626caa0d6edbbd136274d", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
15846083
pes2o/s2orc
v3-fos-license
Symbolic Summation and Higher Orders in Perturbation Theory Higher orders in perturbation theory require the calculation of Feynman integrals at multiple loops. We report on an approach to systematically solve Feynman integrals by means of symbolic summation and discuss the underlying algorithms. Examples such as the non-planar vertex at two loops, or integrals from the recent calculation of the three-loop QCD corrections to structure functions in deep-inelastic scattering are given.
INTRODUCTION
Symbolic summation amounts to finding a closed-form expression for a given sum or series. Systematic studies have been pioneered by Euler [1], and for specific sums, exact formulae have been known for a long time. Today, general classes of sums, for example harmonic sums, have been investigated (see e.g. Refs. [2]) and symbolic summation has further advanced through the development of algorithms suitable for computer algebra systems. Here, the possibility to obtain exact solutions by means of recursive methods has led to significant progress, for instance in the summation of rational or hypergeometric series, see e.g. Ref. [3]. In quantum field theory, higher-order corrections in perturbation theory require the evaluation of Feynman diagrams, which describe real and virtual particles in a given scattering process. In mathematical terms, Feynman diagrams are given as integrals over the loop momenta of the associated particle propagators. These integrals may depend on multiple scales and are usually divergent, thus requiring some regularization. The standard choice is dimensional regularization, i.e. an analytical continuation of the dimensions of space-time from 4 to D, which keeps underlying gauge symmetries manifest. Analytical expressions for Feynman integrals in D dimensions may lead to transcendental or generalized hypergeometric functions, which have a series representation through nested sums with symbolic arguments. The main computational task is then to obtain the Laurent series upon expansion of the relevant functions in the small parameter ǫ = (D − 4)/2.
ALGORITHMS
The basic recursive definition of nested sums is given by [4]

S(n; m_1, \ldots, m_k; x_1, \ldots, x_k) = \sum_{j=1}^{n} \frac{x_1^j}{j^{m_1}} \, S(j; m_2, \ldots, m_k; x_2, \ldots, x_k) ,   (1)

where generally all |x_i| ≤ 1. The sum of all m_i is called the weight of the sum, while the index k denotes the depth. This definition actually includes as special cases the classical polylogarithms, Nielsen functions, multiple and harmonic polylogarithms [5,6,7] in their series representations. For all x_i = 1, the above definition reduces to harmonic sums [1,8,9,10] and, if additionally the upper summation boundary n → ∞, one recovers the (multiple) zeta values associated with Riemann's zeta function [2]. As an important property, the S-sums in Eq. (1) obey the well-known algebra of multiplication. Specifically, any product S(n; m_1, ..., m_k; x_1, ..., x_k) · S(n; m_1', ..., m_l'; x_1', ..., x_l') of two S-sums with the same upper summation limit can be expressed again as a sum of single nested sums, (2) hence in a canonical form, which is an important feature for practical applications. The underlying algebraic structure in Eq. (2) is a Hopf algebra, being realized as a quasi-shuffle algebra here, see e.g. Refs. [4,11,12,13]. The algorithm can be implemented very efficiently on a computer, see e.g. Refs. [10,14]. For the manipulation of the S-sums, we classify certain types of transcendental sums. All sums in these classes can be solved recursively, i.e. they can be expressed in canonical form. The underlying algorithms realize a creative telescoping.
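The recursion in Eq. (1) and the product algebra of Eq. (2) are easy to check numerically. Below is a minimal Python sketch, assuming exact rational arithmetic suffices for illustration; it is a hypothetical toy, not the FORM/Summer implementation referenced later. The assert verifies the depth-one instance of the quasi-shuffle product, S(n;m1;x1) S(n;m2;x2) = S(n;m1,m2;x1,x2) + S(n;m2,m1;x2,x1) − S(n;m1+m2;x1 x2).

```python
from fractions import Fraction

def S(n, ms, xs):
    """Nested sum S(n; m1,...,mk; x1,...,xk) via the recursion of Eq. (1);
    the empty (depth-zero) sum is taken to be 1."""
    if not ms:
        return Fraction(1)
    return sum(Fraction(xs[0])**j / j**ms[0] * S(j, ms[1:], xs[1:])
               for j in range(1, n + 1))

# Harmonic sums are the special case x_i = 1; S(n; 2; 1) -> zeta(2) = pi^2/6:
print(float(S(1000, (2,), (1,))))      # ~1.6439

# Depth-one quasi-shuffle product (an instance of Eq. (2)):
n = 40
lhs = S(n, (2,), (1,)) * S(n, (3,), (1,))
rhs = S(n, (2, 3), (1, 1)) + S(n, (3, 2), (1, 1)) - S(n, (5,), (1,))
assert lhs == rhs                      # exact identity in rational arithmetic
```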
They either reduce successively the depth or the weight of the inner sum, so that eventually the inner nestings vanish. Finally, the results can be written in the basis of Eq. (1) (as S-sums with upper summation limit n) or as multiple polylogarithms (which are S-sums to infinity). Besides the quasi-shuffle algebra of multiplication in Eq. (2), the procedure relies on algebraic manipulations, such as partial fractioning of denominators, shifts of the summation ranges and synchronization of summation boundaries of the individual sums. Specifically, we consider convolutions,

\sum_{i=1}^{n-1} \frac{x_1^i}{i^{m_1}} S(i; \ldots) \, \frac{x_2^{n-i}}{(n-i)^{m_2}} S(n-i; \ldots) ,   (3)

conjugations,

\sum_{i=1}^{n} \binom{n}{i} (-1)^i \frac{x^i}{i^{m}} S(i; \ldots) ,   (4)

and binomial convolutions,

\sum_{i=1}^{n-1} \binom{n}{i} (-1)^i \frac{x_1^i}{i^{m_1}} S(i; \ldots) \, \frac{x_2^{n-i}}{(n-i)^{m_2}} S(n-i; \ldots) .   (5)

In all cases, the upper summation boundary should be consistent with the defining range of the binomials and the S-sums.
APPLICATIONS
In perturbation theory, one way to classify Feynman integrals is according to the number of scales, i.e. the number of non-vanishing scalar products of external momenta or particle masses. According to this criterion, analytical expressions in D dimensions for the Laurent series in ǫ either lead to transcendental numbers like (multiple) zeta values or (multiple) polylogarithms. Other classification criteria are, of course, the topology of a Feynman integral (number of loops and external legs).
One-scale problems
Prominent examples of one-scale problems are massless two-point functions [15,16,17]. In particular, the massless two-loop self-energy T1 has not only been of practical importance from a phenomenological perspective, but has also received quite some interest from number theorists. For arbitrary powers ν_i of the propagators, it is given by a two-loop integral over the five massless propagators of the topology; graphically, it is displayed in Fig. 1. Here the interesting question has been which types of (transcendental) numbers appear in the ǫ-expansion of this integral. For powers of the propagators of the form ν_i = 1 + ǫ, i = 1, ..., 5, it was known from explicit calculations up to the ǫ^9-term (heavily relying on symmetry properties) that multiple zeta values occur [15]. However, it was unclear whether this suffices to all orders in ǫ. Eventually, by deriving a double sum of (generalized) hypergeometric type, it was proven that multiple zeta values are indeed sufficient [16]. Currently, with the help of symbolic summation the ǫ-expansion is known to the ǫ^13-term [18]. The depth is limited by the fact that harmonic sums in infinity are expressed in a basis of transcendental numbers only up to weight 16. Another example of a one-scale problem that received attention recently [19,20] is the non-planar vertex at two loops, which enters in calculations of the quark and gluon form factors [19,21] in QCD. Here the basic integral V_NO (displayed in Fig. 2) is a two-loop integral over massless propagators built from the momenta p_i, where p_5 = p_4 − p_7, p_6 = p_1 − q, p_7 = p_2 − p_1, p_8 = p_2 − p_3 and q^2 = (p_3 − p_4)^2. For general powers of the propagators, V_NO can be written as a double sum over Gamma functions, and if all ν_i = 1, an expression in terms of hypergeometric functions 3F2 and 4F3 has been given in Ref. [20]. After expansion, the sum can be solved in terms of the Riemann zeta function to any order in ǫ using the algorithms for harmonic sums [10,14], coded, as all our symbolic manipulations, in [22]. We find the expansion of V_NO(1, 1, 1, 1, 1, 1) in terms of Riemann zeta values to order ǫ^5, where we have taken out the usual MS-scheme factor. This completes the section of examples with one scale.
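As a small illustration of the statement that harmonic sums at infinity reduce to (multiple) zeta values, the following standalone sketch checks Euler's classical weight-3 identity, sum_{j>=1} H_j / j^2 = 2 ζ(3), which in the notation of Eq. (1) is S(∞; 2,1; 1,1). The example is ours and is not taken from Refs. [15,16,18].

```python
# Numerical check of Euler's identity sum_{j>=1} H_j / j^2 = 2*zeta(3),
# i.e. the S-sum S(inf; 2,1; 1,1) in the notation of Eq. (1).
N = 200000
H, s21 = 0.0, 0.0
for j in range(1, N + 1):
    H += 1.0 / j              # harmonic number H_j
    s21 += H / j**2           # partial sum of S(N; 2,1; 1,1)

zeta3 = sum(1.0 / k**3 for k in range(1, N + 1))
print(s21, 2 * zeta3)         # both ~2.404; slowly convergent, ~4-digit agreement
```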
Two-scale problems
Nice examples of two-scale problems are provided by the recent calculation of the three-loop corrections in Quantum Chromodynamics (QCD) to the structure functions of deep-inelastic scattering [23,24,25]. Here, the two scales are the virtuality of the exchanged gauge boson Q^2 = −q^2 and the scalar product of the boson's and nucleon's momenta, 2p · q, both combining to Bjorken's dimensionless variable x = Q^2/(2p · q). The Feynman integrals under consideration can be expressed in nested sums and solved with the help of symbolic summation as follows. Imagine a mapping of a given integral I(x) depending on x, 0 ≤ x ≤ 1, to the space of discrete variables I(N), N ∈ N, which is accomplished by means of an integral transformation, e.g. a Mellin transformation. Then one can obtain difference equations for the Feynman integral I(N), which may be written as [26]

a_0(N) \, I(N) + a_1(N) \, I(N-1) + \ldots = G(N) .   (10)

If the difference equation is of first order,

a_0(N) \, I(N) + a_1(N) \, I(N-1) = G(N) ,   (11)

then its solution will be

I(N) = \prod_{i=1}^{N} \left( -\frac{a_1(i)}{a_0(i)} \right) I(0) + \sum_{i=1}^{N} \frac{G(i)}{a_0(i)} \prod_{j=i+1}^{N} \left( -\frac{a_1(j)}{a_0(j)} \right) .   (12)

In the case that the functions a_i can be factorized in linear polynomials of the type N + m + n ǫ, with m, n being integers and N being symbolic, the products can be written as combinations of Gamma functions. In the presence of parametric dependence on ǫ the Gamma functions should be expanded around ǫ = 0, leading to factorials and harmonic sums. If the function G(N) is expressed as a Laurent series in ǫ with the coefficients being combinations of harmonic sums in N + m and powers of N + m, m being a fixed integer, the sum in Eq. (12) can be done and I(N) will be a combination of harmonic sums in N + k and powers of N + k with k being a fixed integer. Eq. (10) is an example of a recursion for Feynman integrals with dependence on symbolic parameters. Solutions such as Eq. (12) allow for an efficient implementation in computer algebra systems like [22], resulting in a largely automatic build-up of nested sums. For the calculation of the QCD corrections to structure functions mentioned above, a systematic evaluation of nested sums was required for all integrals occurring in approximately 10,000 Feynman diagrams. Because the expressions are of excessive size at intermediate stages, this task was well suited for the computer algebra system and the Summer package [10] for nested sums.
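To make Eqs. (10)-(12) concrete, here is a toy Python sketch with made-up coefficient functions a_0, a_1 and inhomogeneity G (not those of any actual Feynman integral), checking that the forward recursion of Eq. (11) and the closed-form solution of Eq. (12) agree exactly.

```python
from fractions import Fraction

def a0(N): return Fraction(N + 1)      # illustrative linear polynomials
def a1(N): return Fraction(-N)
def G(N):  return Fraction(1, N)

I0 = Fraction(1)                       # boundary value I(0)

def I_rec(N):
    """Forward recursion of Eq. (11): I(N) = (G(N) - a1(N) I(N-1)) / a0(N)."""
    I = I0
    for k in range(1, N + 1):
        I = (G(k) - a1(k) * I) / a0(k)
    return I

def P(a, b):
    """P(a,b) = prod_{j=a..b} (-a1(j)/a0(j)); the empty product is 1."""
    out = Fraction(1)
    for j in range(a, b + 1):
        out *= -a1(j) / a0(j)
    return out

def I_closed(N):
    """Closed form of Eq. (12)."""
    return P(1, N) * I0 + sum(G(i) / a0(i) * P(i + 1, N) for i in range(1, N + 1))

assert I_rec(20) == I_closed(20)       # exact agreement in rational arithmetic
print(float(I_rec(20)))
```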
Multi-scale problems
Multi-scale problems arise in the calculation of cross sections with more kinematical invariants, like e.g. jet cross sections. The methods and algorithms for generalized sums have already been used in full-fledged QCD calculations, for instance in the evaluation of higher-order corrections to e+e− → 3 jets [27]. In general, Eqs. (3)-(5) may also be used to expand higher transcendental functions in a small parameter around integer values. Starting from the series representations of, e.g., the first or the second Appell function, we see that Eqs. (3)-(5) apply directly. It should also be stressed at this point that, although the definition of the S-sums in Eq. (1) is very general, the specific algorithms for convolution, conjugation etc. are subject to more restrictive assumptions. In particular, the algorithms underlying the evaluation of Eqs. (3)-(5) do rely on the fact that the modulus of the summation index in the argument of the S-sums or in the denominators is always one. This changes if, for instance, hypergeometric functions JF(J−1) (or more generally Gamma functions) are expanded around half-integer values. Such a situation occurs for example in the calculation of massive higher loop integrals in Bhabha scattering [28]. Some extensions of the summation algorithms for expansion around rational values, leading e.g. to binomial sums and inverse binomial sums, are discussed in Refs. [29,30].
CONCLUSION
Symbolic summation has advanced to an important method for the calculation of higher order corrections in perturbative quantum field theory. The field has seen significant progress during the past years and we have given various examples from complete calculations, e.g. the recent evaluation of the third-order contributions in perturbative QCD to the structure functions of deep-inelastic scattering [23,24,25]. These cutting-edge calculations show that the method of symbolic summation provides very powerful means for the practical computation of Feynman diagrams. In closing, we note that all practical applications do heavily rely on computer algebra implementations of the algorithms discussed here. In the symbolic manipulation program [22], which is a fast and efficient computer algebra system to handle large expressions, harmonic sums can be manipulated with the Summer package [10]. For the S-sums of Eq. (1) there exists an extension, the XSummer package [14], which implements the algorithms of Eqs. (3)-(5). As an alternative, within the GiNaC framework [31] the package nestedsums [32] provides similar functionalities. Very recently, also Ref. [33] appeared, which limits itself to the problem of expanding hypergeometric functions JF(J−1) around integer parameters to arbitrary order and provides an implementation in Mathematica. We believe all these packages may also be useful for a larger community.
2014-10-01T00:00:00.000Z
2005-09-26T00:00:00.000
{ "year": 2005, "sha1": "0feb1fb9642d08a6b7f8b3fcdd583a93b2109aeb", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math-ph/0509058", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0feb1fb9642d08a6b7f8b3fcdd583a93b2109aeb", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
224973447
pes2o/s2orc
v3-fos-license
Benchmark-Driven Investments in Emerging Market Bond Markets: Taking Stock This paper reviews the role of benchmark-driven investments in EM local bond markets. We provide an overview of how key EM bond benchmark indices are constructed, how they affect the behavior of investment funds, and the likely implications for capital flows and policy-making. Several methods are presented suggesting that the amount of assets benchmarked against widely followed EM local-currency bond indices has risen fivefold since the mid-2000s to around $300 billion. Our review suggests that the benefits of index membership may be tempered by portfolio outflow risks for some countries. This is because benchmark-driven investments may increase the importance of external factors at the expense of domestic factors, raising the risks of outflows unrelated to recipient country fundamentals. Some countries may be disproportionately exposed to these risks, reflecting the way the indices are constructed.
I. INTRODUCTION
Benchmark-driven investments (BDI) have received growing attention in the recent literature on portfolio flows to emerging markets. An investment is benchmark-driven to the extent that its portfolio allocation across countries is guided by the country weights in a benchmark index (IMF 2019, Arslanalp and Tsuda 2015). A number of influential studies have found evidence that investment benchmarks are increasingly shaping portfolio allocation dynamics in emerging market (EM) countries. For example, Raddatz and others (2017) find that 70 percent of country allocations of investment mutual funds are influenced by benchmark indices. The growing role of benchmark-driven investments appears to reflect the confluence of several factors and may be contributing to more synchronized portfolio flows. These factors include rising assets under management of passive funds, an increasing number of active funds "hugging" the index, and a growing number of countries represented in the benchmark indices. Together, these factors may have contributed to more correlated portfolio flows to countries included in benchmark indices. For example, portfolio flows to EM local bond markets have traditionally been much more synchronized for countries in a benchmark index (the J.P. Morgan GBI-EM Global) than for those that were not in the index (Figure 1). A burgeoning literature suggests that the growing role of benchmark-driven investments entails both benefits and risks for portfolio flows to EMs. On the upside, inclusion in major benchmark indices provides countries with access to a larger and more diverse pool of external financing (Sienaert 2012). On the downside, benchmark-driven investments may raise the sensitivity of portfolio flows to global factors and, more generally, to factors common to EMs included in benchmark indices (IMF 2019, Arslanalp and Tsuda 2015). This is because benchmark-driven investments are investments in EMs as an asset class, focusing mainly on factors that affect EMs as a group, rather than on country-specific developments. As a result, benchmark-driven portfolio flows may be more sensitive to common factors and therefore more correlated across countries. The study by Raddatz and others (2017) empirically demonstrates that benchmark indices affect asset allocations, capital flows, and asset returns of international mutual funds in a statistically significant and economically important way.
The study shows that the use of benchmarks generates pricing and flow distortions that represent deviations from the theoretical predictions of standard models (such as the CAPM). Related research shows that the use of benchmarks can give rise to correlated behavior on the part of even "actively" managed funds (Miyajima and Shim 2014) and that EMs with higher shares of financing from investment funds are more sensitive to external/push factors (Cerutti and others 2015). Building on this literature, this paper takes stock of the rise of benchmark-driven investments in EM bond markets. We focus particularly on local currency government bonds, the single largest segment of EM bond markets. First, we provide an overview of how key EM bond benchmark indices are constructed, how they affect the behavior of investment funds, and the likely implications for capital flow dynamics. Second, we provide up-to-date estimates of the size of benchmark-driven investments in local EM bond markets, estimated at around $300 billion as of end-2019, before the COVID-19 shock in early 2020 (Figure 2). Third, we provide empirical results demonstrating the heightened sensitivity of benchmark-driven investments to external factors, which leads to an elevated correlation of such flows across countries. During the COVID-19 pandemic, the combined effect of this heightened sensitivity and the growing assets under management appears to have contributed to the record outflows of foreign portfolio capital from emerging markets (IMF 2020) (Figure 3). In this episode, portfolio outflows from local currency bond markets were quite strongly correlated with the weight of the country in the GBI-EM benchmark index, suggesting that benchmark-driven investors may have played a significant role in driving the reversal of portfolio flows (Figure 4). Overall, the picture that emerges from our review is that benchmark-driven investments are changing the landscape for EM bond markets, with important implications for capital flow dynamics. Different investor types exhibit fundamentally different behaviors in deciding their portfolio allocations, and different countries exhibit widely different investor bases. As a result, the same kinds of external and domestic shocks can prompt different capital flow responses in different countries, in turn shaping the way such shocks are transmitted to the domestic economy.
Figure note: 1/ Red crosses indicate estimates based on country-level event studies as discussed in this paper, building on Arslanalp and Tsuda (2015).
The rest of this paper is organized as follows. Section 2 explains key concepts for benchmark-driven investments and their implications for capital flows. Section 3 provides country-level estimates of benchmark-driven investments through regression analysis and event studies. Section 4 discusses the effects of benchmark-driven investments on capital flow dynamics. Section 5 provides concluding remarks.
II. WHAT ARE BENCHMARK-DRIVEN INVESTMENTS AND WHAT ARE THE IMPLICATIONS FOR CAPITAL FLOWS?
Benchmark-driven investments come about mainly in the context of investment funds, most of which use benchmark indices to guide their portfolio allocation. Investment funds vary in the degree to which they are benchmark-driven in that some follow the country weights of their benchmark more closely than others.
Passive investment funds explicitly aim to replicate the performance of their benchmark, while active funds deviate from their benchmarks to varying degrees as fund managers try to outperform their respective benchmarks (Figure 5). By contrast, unconstrained funds choose portfolio allocations irrespective of benchmark indices. It is worth noting that the ultimate investors of benchmark-driven funds can be retail or institutional investors, though retail investors generally have only limited access to unconstrained funds.
A. What are Key EM Bond Benchmarks and How are They Constructed?
Among the multitude of investment funds focused on emerging market local bonds, most are benchmarked to a small number of indices. The most widely followed benchmark indices for emerging market bonds are the J.P. Morgan Emerging Market Bond Index (EMBIG; for dollar-denominated bonds) and the J.P. Morgan Government Bond Index-Emerging Markets (GBI-EM; for local currency bonds), both of which come in several variants. Benchmark-driven investments discussed in this paper primarily refer to investment funds benchmarked to these two indices (as discussed in greater detail in Annex I). The number of countries represented in benchmark indices has grown substantially over the years, especially among emerging markets. Since 2007, the number of countries in the EMBIG has doubled to more than 70 with the inclusion of many countries that have issued in international bond markets for the first time. In addition, the growing size and liquidity of local bond markets in many emerging markets have allowed the number of countries in the GBI-EM to increase from 12 to 18. By contrast, although many EMs have earned investment grade status from rating agencies, the number of countries with local currency debt represented in global investment grade bond indices has been relatively stable, given the more demanding investability criteria of these indices (Annex I). Due to the way indices are constructed, some countries are disproportionately exposed to benchmark-driven investors. This reflects both index inclusion rules and discretionary choices of index providers. Different inclusion criteria for each index lead to varying compositions of investor types, exposing issuers to diverse portfolio flow dynamics. Inclusion criteria help determine the universe of investors attracted to a given country (Annex II). In particular:
• Global bond benchmarks: Investors tracking global bond benchmarks are less likely to react to risks specific to EMs, given that these countries represent only a small fraction of the total index and hence make a small contribution to their overall returns. However, EMs that are part of global bond benchmarks have a larger share of rating-sensitive investors, given that these indices use ratings as criteria for index inclusion/exclusion. As a result, they can face large selling from benchmark-driven investors in the event of a loss of investment grade status (e.g., Pemex and South Africa in April 2020, Petrobras in September 2015). 2
• EM bond benchmarks: Contrary to global bond benchmarks, the most widely used EM bond benchmarks are not rating sensitive and bonds remain eligible as long as the liquidity criteria are satisfied. This may have a stabilizing effect on portfolio dynamics given the procyclical nature of credit ratings, which can aggravate crisis episodes (Ferri, Liu, and Stiglitz 2003).
An important feature of the most commonly followed version of EM benchmarks is a weighting method that reduces the weight of larger issuers and redistributes the excess to smaller countries. This can create distortions in asset allocations and portfolio flows. For local currency government bonds, for example, these "diversified" indices limit the maximum country weight to 10 percent, which leads to more concentrated positions of benchmark-driven investors in some smaller issuers. For example, Brazil's weight is capped 8 percentage points lower than it would be under the market capitalization weights used in global benchmarks (Figure 5). On the other hand, smaller issuers such as Colombia, Hungary and Peru see an increase in their weights by 1 to 2 percentage points. As the index is tracked by an estimated $300 billion (see next section), a 2 percentage point higher weight would mean $6 billion in additional benchmark-driven investments due to index rules, which can be substantial for smaller countries. Index inclusion decisions by index providers can also manifest at the security level, depending on whether certain securities meet liquidity criteria. One recent example of a large liquidity-related weight change is Malaysia's weight reduction in the GBI-EM GD from 10 percent in February 2016 to 6 percent in August 2017. Part of this reduction was due to measures taken by the authorities in December 2016 that, in the view of J.P. Morgan, impacted the liquidity of Malaysian government debt securities due to changes in the non-deliverable forward (NDF) market (J.P. Morgan, 2017). 3 Such a weight reduction, unlike reductions driven by market value, would naturally lead to a reduction of benchmark-driven investors, who may need to mechanically rebalance their positions. Benchmark effects can also affect countries due to the inclusion of new countries in the benchmark index. Index inclusion decisions can lead to substantial rebalancing of portfolios and can alter the risk characteristics of the asset class. For example, J.P. Morgan recently included China in the GBI-EM Global index (Annex II). Over the transition period, in addition to boosting flows to the Chinese bond market, this will also lead to an index weight reduction for other countries and consequently to some rebalancing by benchmark-driven funds.
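A simplified sketch of such a capping scheme is given below: weights above the cap are clipped and the excess is redistributed pro rata over the remaining below-cap countries, repeating until no weight exceeds 10 percent. The actual J.P. Morgan diversification methodology is proprietary, so both the scheme and the market-capitalization weights used here are illustrative assumptions only.

```python
# Illustrative capped-weight scheme (assumes the cap is feasible, i.e. the
# number of countries times the cap is at least the total weight).
def cap_weights(raw, cap=0.10, tol=1e-12):
    w = dict(raw)
    while True:
        over = [c for c, x in w.items() if x > cap + tol]
        if not over:
            return w
        excess = sum(w[c] - cap for c in over)
        for c in over:
            w[c] = cap                       # clip to the cap
        under = [c for c, x in w.items() if x < cap - tol]
        total_under = sum(w[c] for c in under)
        for c in under:                      # redistribute the excess pro rata
            w[c] += excess * w[c] / total_under

raw = {"Brazil": 0.20, "Mexico": 0.14, "South Africa": 0.10, "Indonesia": 0.10,
       "Poland": 0.09, "Thailand": 0.09, "Russia": 0.08, "Colombia": 0.06,
       "Hungary": 0.05, "Peru": 0.04, "Czech Republic": 0.03, "Romania": 0.02}
print({c: round(x, 3) for c, x in cap_weights(raw).items()})
# Brazil falls 10 pp to the cap, while e.g. Colombia and Hungary gain 2-3 pp.
```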
We find that the active share has been declining steadily since 2008 among EM local currency bond funds tracked by the EPFR (Figure 7, panel 2).This finding is corroborated using other metrics, such as the average tracking error-the difference between the return of a fund and its benchmark-of EM local bond funds (IMF, 2019). The trend that active funds are becoming more passive may partly be explained by underperformance of active funds over the last decade as well as the trend towards low-cost funds by retail investors. 1/ The active share of a fund is defined as the sum of the absolute value of deviations of the fund's country weights from those of the benchmark (Cremers and Petajisto, 2009). For this analysis, we use the EPFR Global database of funds to calculate the average country level allocations of all bond funds benchmarked to JP Morgan's GBI-EM Index. The difference between this country level allocation and the benchmark weights are a measure of how closely EM local currency bond funds follow their benchmark index. C. What are Potential Implications for Capital Flows? Previous work has shown that the role of benchmark-driven investments results in a higher degree of similarity in cross-country asset allocations and capital flows, especially when comparing emerging market investors. For example, Raddatz and others (2017) find that benchmarks can have significant effects on international investments and affect capital flows through both direct and indirect channels. In particular, the authors find that benchmarks explain, on average, around 70 percent of country allocations even after controlling for macroeconomic, industry, and country-specific effects. They define the "benchmark effect" as the channels through which "prominent international equity and bond market indices affect asset allocations and capital flows across countries," differentiating it from the role domestic and external factors play in country allocations. Similarly, Cerutti and others (2019) find that EMs relying more on global mutual funds are more sensitive to external factors in terms of their gross equity and bond inflows. Recipient market liquidity and inclusion in global benchmark indices also increase sensitivities. They also find little robust evidence that institutional and macroeconomic fundamentals dampen sensitivities. At the same time, in a multiple event study, Pandolfi and Williams (2020) show that index inclusion events lead to significant declines in government bond yields and to appreciation of the domestic currencies. Relatedly, studies have shown that mutual funds can propagate shocks through several channels, including (i) directly via their holdings (Broner, Gelos, and Reinhart, 2006), (ii) indirectly through overlapping ownership of emerging and advanced economies (Jotikasthira and others, 2012); and (iii) via fire sales (Coval and Stafford, 2007). Miyajima and Shim (2014) specifically explore the use of benchmark indices in EMs. They find that the use of benchmarks can give rise to correlated behavior on the part of even "actively" managed funds. They argue that managers of those funds tend to be evaluated by whether the returns of their investments match or exceed those of a particular index. As a result, although active managers do not replicate the portfolio weights of the benchmark, the career risk of short-term underperformance against their peers can induce them to form similar portfolios or to "hug" their benchmarks, increasing correlation of asset managers' portfolio choices. 
C. What are Potential Implications for Capital Flows?
Previous work has shown that the role of benchmark-driven investments results in a higher degree of similarity in cross-country asset allocations and capital flows, especially when comparing emerging market investors. For example, Raddatz and others (2017) find that benchmarks can have significant effects on international investments and affect capital flows through both direct and indirect channels. In particular, the authors find that benchmarks explain, on average, around 70 percent of country allocations even after controlling for macroeconomic, industry, and country-specific effects. They define the "benchmark effect" as the channels through which "prominent international equity and bond market indices affect asset allocations and capital flows across countries," differentiating it from the role domestic and external factors play in country allocations. Similarly, Cerutti and others (2019) find that EMs relying more on global mutual funds are more sensitive to external factors in terms of their gross equity and bond inflows. Recipient market liquidity and inclusion in global benchmark indices also increase sensitivities. They also find little robust evidence that institutional and macroeconomic fundamentals dampen sensitivities. At the same time, in a multiple event study, Pandolfi and Williams (2020) show that index inclusion events lead to significant declines in government bond yields and to appreciation of the domestic currencies. Relatedly, studies have shown that mutual funds can propagate shocks through several channels, including (i) directly via their holdings (Broner, Gelos, and Reinhart, 2006); (ii) indirectly through overlapping ownership of emerging and advanced economies (Jotikasthira and others, 2012); and (iii) via fire sales (Coval and Stafford, 2007). Miyajima and Shim (2014) specifically explore the use of benchmark indices in EMs. They find that the use of benchmarks can give rise to correlated behavior on the part of even "actively" managed funds. They argue that managers of those funds tend to be evaluated by whether the returns of their investments match or exceed those of a particular index. As a result, although active managers do not replicate the portfolio weights of the benchmark, the career risk of short-term underperformance against their peers can induce them to form similar portfolios or to "hug" their benchmarks, increasing the correlation of asset managers' portfolio choices. For policymakers, two contrasting implications from these effects are worth highlighting. On the one hand, a rising share of benchmark-driven investments can be a source of vulnerability because it increases the country's exposure to external shocks, as capital flows become more sensitive to external conditions (Miyajima and Shim 2014, Raddatz and others 2017). On the other hand, a rising share of benchmark-driven investments can be a source of resilience because it reduces the country's exposure to domestic shocks, as capital flows become less sensitive to deteriorating country conditions/fundamentals. Figure 8 illustrates these two effects based on a stylized financial accelerator framework (Bernanke and others 1994). Severe financial distress typically arises through some combination of domestic and external shocks, which set off a vicious cycle of deteriorating economic and financial conditions (Kamin and others, 2001). In this framework, a low share of benchmark-driven investments means that foreign portfolio flows are highly responsive to domestic factors (i.e., feedback loop A is relatively strong), but not so responsive to external developments (i.e., feedback loop C is relatively weak). An example would be Argentina, which has only a small weight in EM local bond indices and most of whose local debt is owned by unconstrained institutional investors. Conversely, a high share of benchmark-driven investors means that portfolio flows are less responsive to domestic factors (i.e., feedback loop A is relatively weak), but highly responsive to external developments (i.e., feedback loop C is strong). An example would be Colombia, which has a relatively high weight in the EM local bond indices and whose foreign investor base has been relatively stable despite several domestic shocks.
III. HOW LARGE ARE BENCHMARK-DRIVEN INVESTMENTS IN EM BOND MARKETS?
The past decade has seen a remarkable rise in the importance of asset managers for portfolio flows to emerging markets and a consequent increase in the importance of various benchmarks. Over that period, foreign investors doubled their holdings of EM government debt to more than $1.5 trillion. More than 60 percent of this increase came from foreign asset managers (Figure 9, panel 1). In this section, we discuss the magnitude of foreign bond holdings of benchmark-driven investors, particularly in the local currency bond market. A useful concept for gauging the size of benchmark-driven investments is to think of them as a pool of money that is allocated purely according to the index weights. This concept has the advantage that it can be used to calculate the dollar amount of a portfolio shift corresponding to a change in the benchmark index, which is highly relevant from a policy perspective because it helps assess and predict shifts in the availability of external financing. Specifically, based on this concept, a change in the weights of the underlying benchmark would trigger a reallocation of funds equivalent to the percentage change in the index weight times the dollar amount of benchmark-driven investments. For example, if the weight of country A in the index were reduced from 10 to 9 percent (say, due to the inclusion of another country), and assuming a pool of benchmark-driven investments of $100 billion, one would expect a reallocation of $1 billion away from country A.
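The back-of-envelope rule above translates directly into a one-line calculation; the sketch below simply restates the arithmetic of the country-A example.

```python
def reallocation(pool_usd, w_old, w_new):
    """Flow triggered by an index weight change, for a benchmark-driven pool."""
    return pool_usd * (w_new - w_old)

print(reallocation(100e9, 0.10, 0.09) / 1e9)   # -1.0, i.e. $1 billion out of country A
```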
In practice, there is much uncertainty about the holdings of benchmark-driven investors. One reason is that funds follow benchmark indices to varying degrees. Another reason is that data providers like EPFR can only report assets under management (AUM) of those mutual funds and ETFs that reveal their holdings. Moreover, these vehicles are mostly used by retail investors, whereas most AUM at major asset managers come from institutional investors (e.g., pension funds, endowments). These institutional investors may be invested in similar strategies as the mutual funds, but details on their portfolio holdings at the country and security level are very limited. Some information about the magnitude of benchmark-driven investments can be gained from surveys of index providers on the amount of funds tracking their index, but these surveys also have shortcomings. For example, J.P. Morgan provides a monthly investor survey on AUM tracking its EM bond indices (Figure 9, panel 2). However, the survey likely does not fully capture AUM that are invested with mixed mandates and use a combination of benchmarks (e.g., EM hard and EM local currency or EM and DM mandates). Additionally, several less popular benchmarks by other providers exist that have similar index construction but are not captured by the survey. As of Q3 2019, the J.P. Morgan survey put AUM of local bond funds tracking its indices at close to $230 billion. We use two alternative approaches to estimate the size of benchmark-driven investments. The first is a regression-based analysis, while the second is based on event studies focused on key index inclusion events in recent years. The two approaches are complementary and result in broadly consistent estimates of the size of benchmark-driven investments, suggesting that these stood around $300 billion at end-2019, somewhat above survey-based estimates. 4
A. Regression Analysis
We obtain regression-based estimates of benchmark-driven investments in EM local bond markets using the approach described in Arslanalp and Tsuda (2015). The approach is based on Ballston and Melin (2013). It assumes that the entire pool of capital invested in EM local markets can be divided into two pools: one benchmarked to the GBI-EM Global Diversified (including only the countries in the index) and the other benchmarked to all EMs (including those outside the index) based on the size/market capitalization of their government bond markets (unconstrained). By looking at the amount allocated to individual markets, one could then estimate the relative proportions of these two pools over time by solving for a_t and b_t in the following equations:

F_{i,t} = a_t w_{i,t} + b_t W_{i,t} + ε_{i,t} ,   (1)

subject to:

a_t + b_t = Σ_i F_{i,t} ,   (2)

where a_t is the pool of benchmark-driven investments at time t, b_t is the pool of unconstrained investors at time t, w_{i,t} is the weight of country i in the J.P. Morgan GBI-EM Global Diversified index at time t, W_{i,t} is the weight of country i's bond market at time t based on market capitalization, F_{i,t} is the nominal amount of foreign holdings of country i's bonds at time t, in U.S. dollars, and ε_{i,t} is the extent to which portfolio managers are over-/underweight country i at time t. The estimation is conducted using a constrained least squares (CLS) approach, given that a_t and b_t should add up to total foreign holdings for each month. The sample covers the period from January 2010 to June 2020 and includes 17 countries (Table 1).
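For a single month, the constrained least squares problem of Eqs. (1)-(2) collapses to a one-parameter regression once the constraint b_t = T − a_t is substituted in. The sketch below illustrates this with made-up holdings and weights; it is a hypothetical illustration, not the paper's estimation code.

```python
import numpy as np

def split_pools(F, w, W):
    """Solve F_i = a*w_i + b*W_i + e_i subject to a + b = sum_i F_i.
    Substituting b = T - a gives (F_i - T*W_i) = a*(w_i - W_i) + e_i."""
    F, w, W = map(np.asarray, (F, w, W))
    T = F.sum()                      # total foreign holdings this month
    y = F - T * W                    # move the constrained part to the LHS
    x = w - W
    a = float(x @ y / (x @ x))       # OLS slope = benchmark-driven pool
    return a, T - a                  # (benchmark-driven, unconstrained)

# Hypothetical data: 5 markets, holdings in $bn, index weights w, market-cap weights W
F = [60, 45, 30, 20, 15]
w = [0.10, 0.10, 0.08, 0.05, 0.00]   # an off-benchmark country has w = 0
W = [0.12, 0.07, 0.05, 0.03, 0.06]
a, b = split_pools(F, w, W)
print(f"benchmark-driven: ${a:.0f}bn, unconstrained: ${b:.0f}bn")
```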
Of these 17 countries, 13 are "benchmark" countries (i.e., part of the GBI-EM Global Diversified index for most of the sample period), while 4 are "off-benchmark" (the Czech Republic, India, Israel, and Korea). 5 The mix of countries provides useful heterogeneity for the estimation of a_t and b_t. The relevant data for the estimation (total outstanding and foreign holdings of local-currency government debt securities) come from national data sources, as discussed in Arslanalp and Tsuda (2014). Figure 10 provides a summary of the data as of end-2019. Figure 11 provides the country weights (w_{i,t} and W_{i,t}) as of end-2019.
Figure 11. Emerging Market Local-Currency Government Debt Markets: Country Weights, End-2019 1/
The results of the regression analysis for a_t and b_t are summarized in Figure 12. The results suggest that the pool of benchmark-driven investments in EM local currency debt markets was $330 billion at end-2019. The detailed results for 2014-19 are presented in Table 2.
Figure note: 1/ The benchmark-driven investor base is estimated based on the Ballston and Melin (2013) approach. "All countries" includes those listed in Table 1.
Estimates of benchmark-driven investments also vary significantly across countries (Figure 13, panel 2; Figure 14). As a result, the same kinds of shocks can prompt different capital flow responses in different countries. For instance, Turkey has a high share of benchmark-driven investments as a proportion of total local currency foreign holdings, but it is relatively small as a percent of GDP. This is in contrast to Hungary, where the systemic importance of such investors is much higher.
Figure 14. Country Level Estimates of Benchmark-Driven Investments. Sources: Bloomberg Finance L.P.; JP Morgan; and IMF staff estimates.
B. Event Studies
In this section, we present the results of two event studies that provide supporting evidence on the size of benchmark-driven investments. The approach follows Arslanalp and Tsuda (2015). Overall, the event studies suggest that the pool of benchmark-driven investments was around $370 billion during the third quarter of 2017 and $340 billion during the second quarter of 2018, broadly in line with the regression results presented earlier. The event studies are based on episodes in which a country's weight in the J.P. Morgan GBI-EM index rises due to either a country or a security inclusion event. When J.P. Morgan adds to the index a new country (country inclusion) or a new set of debt securities for a country already in the index (security inclusion), the country's weight in the index rises. In either case, to track the index, benchmark-driven investors would need to allocate more capital to the country. In contrast, this should not affect unconstrained investors as, by definition, they are not guided by index weights. As a result, such events can create portfolio inflows by benchmark-driven investors, but not unconstrained investors, due to the technical (not fundamental) nature of the event. Arslanalp and Tsuda (2015) examined three security inclusion events (Colombia, Peru, Romania). In this paper, we examine two country inclusion events (Dominican Republic and Uruguay) that took place since 2015. 7 A country's weight in the index can change due to valuation effects or inclusion events. For example, Russia's weight in the GBI-EM Global Diversified fell rapidly from 10 percent to 4.7 percent between March 2014 and end-March 2015, mainly due to the sharp drop in the value of the ruble during this period.
This highlights the importance of separating valuation effects from exogenous changes in country weights. For that, we use the approach of Raddatz and others (2017) to separate the change in a country's index weight (w_it) into two components: (i) the buy-and-hold component, which captures the valuation effects; and (ii) the exogenous component, which captures inclusion events:

w_{i,t+1} − w_it = [w_{i,t+1} − w_it (R_it/R_bt)] + [w_it (R_it/R_bt − 1)] ,   (3)

where the first bracket is the exogenous component and the second bracket is the buy-and-hold component, w_it is the weight of country i in the J.P. Morgan GBI-EM Global Diversified index at time t, and R_it and R_bt are the total gross returns of country i's bonds and the benchmark, respectively, from time t to t+1. Moreover, as benchmark-driven investments should only respond to the exogenous part of the country weight change, the size of the benchmark-driven investor base can be estimated as follows:

B_t = f_it / [w_{i,t+1} − w_it (R_it/R_bt)] ,   (4)

where B_t is the benchmark-driven investor base at time t in U.S. dollars, and f_it is the net foreign purchases of country i's government bonds between t and t+1 in U.S. dollars. Below, we estimate the benchmark-driven investor base using two country inclusion events that took place during 2017-18. Specifically, we estimate B_t in Equation 4 for each event.
Uruguay
Uruguay's inclusion in the index raised its weight over July-September 2017. To track the index, a benchmark-driven investor would need to allocate capital to Uruguay during this period. Indeed, net foreign inflows into Uruguay's government bonds were exceptionally large when the country was included in the index: $1.1 billion over July-September 2017, the highest quarterly figure recorded in official balance of payments statistics between 2000-17. Based on Equation 4, this would imply a benchmark-driven investor base of around $370 billion at the time of the event (Table 3), slightly above the regression results presented in the previous section.
Dominican Republic
The corresponding estimate based on the Dominican Republic's inclusion in 2018 implies a benchmark-driven investor base of around $340 billion (Table 4), fully in line with the regression results presented in the previous section.
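A minimal sketch of Eqs. (3)-(4). The $1.1 billion net purchase figure echoes the Uruguay episode described above, but the 0.3 percentage point exogenous weight change and the flat returns are assumptions made for illustration.

```python
def exogenous_weight_change(w_t, w_t1, R_i, R_b):
    """Eq. (3): weight change net of the buy-and-hold (valuation) component."""
    buy_and_hold = w_t * (R_i / R_b - 1.0)
    return (w_t1 - w_t) - buy_and_hold          # = w_t1 - w_t * R_i / R_b

def benchmark_pool(net_purchases_usd, w_t, w_t1, R_i, R_b):
    """Eq. (4): implied benchmark-driven investor base B_t."""
    return net_purchases_usd / exogenous_weight_change(w_t, w_t1, R_i, R_b)

# Hypothetical inclusion event: index weight rises from 0 to 0.3 percent with
# flat returns, drawing in $1.1 billion of net foreign purchases.
B = benchmark_pool(1.1e9, 0.000, 0.003, 1.00, 1.00)
print(f"implied benchmark-driven pool: ${B / 1e9:.0f} billion")   # ~$367 billion
```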
IV. HOW SENSITIVE ARE BENCHMARK-DRIVEN INVESTMENTS TO GLOBAL FACTORS?
In this section, we examine the sensitivity of benchmark-driven investments to global factors, focusing on the dynamics of portfolio flows to emerging markets. It is worth highlighting that the literature has generally relied on data reported by fund managers for the purpose of analyzing the effect of benchmarks on portfolio flows. In this literature, flows into investment funds ("fund flows") are often used synonymously with (portfolio) capital flows, even though they are quite different conceptually (Koepke, 2019). Koepke and Paetzold (2020) argue that studies using fund flows data as a proxy for portfolio flows are likely to overstate the importance of external drivers, precisely because fund flows are subject to benchmark effects. In the context of financial intermediation by investment funds, it is useful to distinguish three levels of decision-making that can give rise to a portfolio flow to a country: 8
(1) Decisions by ultimate investors to purchase/sell shares of an investment fund (prompting the fund managers to buy/liquidate securities). Ultimate investors typically treat emerging markets as an asset class, focusing on factors that affect emerging markets as a group, rather than on country-specific factors (Bush and others 2019). Such external developments include interest rates in advanced markets or global risk appetite.
(2) Decisions by the fund manager to change the portfolio allocation relative to the relevant benchmark index. This second level of decision-making is where country-specific factors play an important role (Bush and others 2019). The rise of benchmark-driven investments means that allocation decisions at the fund manager level are becoming less important, diminishing the role of pull factors.
(3) Decisions by the index provider to change the country weights in the index. This could be due to the inclusion/exclusion of a country's securities in the index that reduces/increases the weights of other countries in the index (triggered, for example, by a change in a country's credit rating). Such changes may result in portfolio flows that are typically unrelated to traditional push or pull factors (Raddatz and others 2017).
Our empirical approach is adapted from Koepke (2019) and is consistent with the empirical literature, which emphasizes the importance of both external "push" and domestic "pull" factors. We use monthly data from January 2010 to December 2018. The data source for the analysis is EPFR's fund flows into EM-dedicated bond funds as a proxy for benchmark-driven bond flows to EMs. For comparison, the model is also estimated with overall bond flows to EMs, based on data compiled by the IIF and consistent with official balance of payments statistics from national authorities. 9 The estimated equation is as follows:

Flows_t = β_0 + β_1 Flows_{t−1} + β_2 Risk_t + β_3 UST_t + β_4 Pull_t + ε_t ,

where Flows_t is the EPFR Global flows into EM-dedicated bond funds as a proxy for benchmark-driven bond flows to EMs, β_0 is a constant term, β_1 is the coefficient for the lagged dependent variable, Risk_t is a proxy for global risk aversion, measured by the U.S. corporate BBB spread over U.S. Treasury securities, and UST_t is the 10-year U.S. Treasury yield. 10 Finally, Pull_t is a pull variable, namely the emerging market (EM) economic surprise index compiled by Citigroup. The results suggest that benchmark-driven flows are highly sensitive to global factors (Table 5). Flows driven by emerging market benchmarks are about three to five times more sensitive to global risk factors than the balance of payments measures of portfolio flows. For example, a one standard deviation increase in the VIX on average reduces invested assets of benchmark-driven EM investors by 2 percent, compared with ½ percent for total portfolio investment (Figure 15, panel 1). Similarly, a one standard deviation increase in U.S. 10-year Treasury yields reduces invested assets by 1½ percent, compared with about ¼ percent for total portfolio investment. Furthermore, the sensitivity of benchmark-driven flows to external factors has increased in recent years (Figure 15, panel 2). 11 Finally, a breakdown of EPFR data into ETFs and mutual funds suggests that ETFs are significantly more sensitive to interest rate and risk aversion shocks than mutual funds (Figure 15, panel 3). This is consistent with studies suggesting that ETFs amplify the global financial cycle in emerging markets (e.g., Converse, Levy-Yeyati, and Williams, 2020).
Figure 15. Sensitivity of Flows to External Factors. Sources: Bloomberg L.P.; EPFR; Haver Analytics; IIF; IMF staff calculations.
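A self-contained sketch of the push-pull regression above, run on simulated placeholder series; the variable names mirror the text, and no EPFR, Bloomberg, or Citigroup data are used.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 108                                  # monthly, Jan 2010 - Dec 2018
risk = rng.normal(size=T)                # BBB spread proxy (standardized)
ust10 = rng.normal(size=T)               # 10-year UST yield (standardized)
pull = rng.normal(size=T)                # EM economic surprise index
flows = np.zeros(T)
for t in range(1, T):                    # simulate a persistent flow series
    flows[t] = 0.4 * flows[t-1] - 0.5 * risk[t] - 0.3 * ust10[t] \
               + 0.2 * pull[t] + rng.normal()

# OLS: Flows_t = b0 + b1*Flows_{t-1} + b2*Risk_t + b3*UST_t + b4*Pull_t + e_t
X = np.column_stack([np.ones(T - 1), flows[:-1], risk[1:], ust10[1:], pull[1:]])
y = flows[1:]
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(dict(zip(["const", "lag", "risk", "ust10", "pull"], beta.round(2))))
```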
An analysis of 10-year local bond yields for Chile, Colombia, and the Czech Republic over the last five years suggests that local bond yields become more correlated after country (or security) inclusion events. Specifically, we find that the 3-month correlation of the country yields with the overall J.P. Morgan GBI-EM yield increases significantly, from 16 percent to 53 percent, after inclusion events (Figure 16). 12 We also calculate the variation explained by the first principal component for our sample countries, which increases from 64 percent to 78 percent post inclusion. These results suggest that the importance of common factors rises as a result of a country's inclusion in a benchmark index.

V. CONCLUSION

In this paper, we take stock of the rise of benchmark-driven investments in emerging markets. We show that assets under management of benchmark-driven investment vehicles have increased rapidly over the past decade. As of end-2019, the size of benchmark-driven investments in EM local currency sovereign bonds is estimated at around $300 billion, about 40 percent of the foreign investor base. The changing investor landscape entails both benefits and risks for EMs. On the upside, inclusion in major benchmark indices provides countries with access to a larger and more diverse pool of external financing. Moreover, capital flows become less sensitive to domestic economic developments, which may reduce outflows in response to domestic shocks. On the downside, growing assets under management of passive and weakly active investment funds boost the role of external drivers, making emerging markets more susceptible to swings in global financial conditions. Empirical results show that benchmark-driven flows are about three to five times more sensitive to global risk factors than total portfolio flows. These benefits and risks are likely to increase as the transition towards passive investing continues through the rise of index funds and ETFs, and as mutual funds follow benchmarks more closely to cut costs and increase transparency in order to remain competitive. Some of the risks arising from the rise of benchmark-driven investments are exacerbated by the way indices are constructed. As a result of index construction, some countries, often smaller and medium-sized EMs, are disproportionately exposed to benchmark-driven investments. Countries with high shares of benchmark-driven investments in local bond markets include, for example, Colombia, Hungary, and Peru. With the importance of benchmark-driven portfolio flows increasing, a close dialogue is needed between index providers, the investment community, and regulators. Enhanced transparency by index providers, such as on eligibility criteria for index inclusion and advance communication of forthcoming index changes, can help promote greater consistency and lower flow volatility. Issuers should strive for index inclusion where prudent and avoid introducing fragmentation and concentration risks through premature or partial inclusion of debt instruments in international bond indices. These findings have important implications for policymakers. As the amounts of passive and benchmark-driven investments rise, index membership may become not only a benefit but also a source of vulnerability for some emerging markets. Authorities should monitor risks related to the foreign ownership of local currency bonds, especially when a large share of these bonds is held by benchmark-driven investors.
They should also consider the potential effects of policy actions on index eligibility, for example from the imposition of capital flow management measures. Given the increased sensitivity of benchmark-driven investments to external factors, countries should reduce external vulnerabilities and strengthen buffers by reducing excessive external liabilities and reliance on short-term debt, while maintaining adequate fiscal buffers and foreign exchange reserves.

ANNEX I. EMERGING MARKET AND GLOBAL BOND INDICES

The J.P. Morgan GBI-EM index is the most widely used index for investing in EM local government bonds. There are also global bond indices by Bloomberg-Barclays and FTSE that include several EMs. However, the weight of EMs still remains relatively small in the global bond indices, less than one percent at the country level and four percent at the aggregate level, as of April 2020.

Major EM bond indices

The J.P. Morgan Government Bond Index-Emerging Markets (GBI-EM) index was launched in 2005. There are three versions of the index (GBI-EM Broad, GBI-EM Global, and GBI-EM), and each version has a diversified overlay. The diversified version places a 10 percent cap on each country to limit concentration risk. According to J.P. Morgan surveys, the GBI-EM Global Diversified is the most popular version used by investors, accounting for more than 90 percent of assets benchmarked to all six versions of the index. The main entry requirement for the index is market accessibility. There are no minimum-rating requirements or explicit market size limits. Bills and inflation-indexed bonds are not eligible for the index (only fixed-rate nominal bonds). As of end-April 2020, 19 countries were included in the GBI-EM Global Diversified index: Brazil, Chile, China, Colombia, Czech Republic, Dominican Republic, Hungary, Indonesia, Malaysia, Mexico, Peru, Philippines, Poland, Romania, Russia, South Africa, Thailand, Turkey, and Uruguay.

The Bloomberg-Barclays Emerging Markets Local Currency Government Index was launched in 2008. The main entry requirements are market accessibility and a minimum market capitalization of $5 billion. There are no minimum-rating requirements. Nominal fixed-rate bonds and bills are eligible for inclusion, but not inflation-indexed bonds. The index has a large overlap with the J.P. Morgan GBI-EM Global index, covering broadly the same countries, including two additional countries (Israel and Korea) and excluding two countries (Dominican Republic and Uruguay), as of end-April 2020.

The FTSE Emerging Markets Government Bond Index (EMGBI) was launched in 2013. The main entry requirements are market accessibility, a minimum market capitalization of $10 billion, and minimum rating requirements of C by Standard & Poor's and Ca by Moody's. Bills and inflation-indexed bonds are not eligible for the index (only fixed-rate nominal bonds). This index also has a large overlap with the J.P. Morgan GBI-EM Global index. As of end-April 2020, the index included all but three countries in the GBI-EM Global index (Czech Republic, Dominican Republic, and Uruguay), all with small weights in the GBI-EM Global index.

Major global bond indices

The Bloomberg-Barclays Global Aggregate Index (Global AGG) tracks fixed-rate and investment-grade bonds of both developed and emerging markets, with a market capitalization of about $60 trillion, as of April 2020. The index was created in 1992 and historical data are available from January 1987.
The Treasury sector of the Global Aggregate index tracks central government bonds issued by 37 countries in 24 currency markets.
The Kähler-Ricci flow through singularities

We prove the existence and uniqueness of the weak Kähler-Ricci flow on projective varieties with log terminal singularities. It is also shown that the weak Kähler-Ricci flow can be uniquely continued through divisorial contractions and flips if they exist. We then propose an analytic version of the Minimal Model Program with Ricci flow.

Introduction

It has been the subject of intensive study over the last few decades to understand the existence of canonical Kähler metrics of Einstein type on a compact Kähler manifold, following Yau's solution to the Calabi conjecture (cf. [Y2], [A], [Y3], [T1], [T2]). The Ricci flow (cf. [Ha], [Ch]) provides a canonical deformation of Kähler metrics toward such canonical metrics. Cao [C] gives an alternative proof of the existence of Kähler-Einstein metrics on a compact Kähler manifold with numerically trivial or ample canonical bundle by the Kähler-Ricci flow. However, most projective manifolds do not have a numerically definite or trivial canonical bundle. It is a natural question to ask if there exist any well-defined canonical metrics on these manifolds or on varieties canonically associated to them. A projective variety is minimal if its canonical bundle is nef (numerically effective), and many results have been obtained on the Kähler-Ricci flow on minimal varieties. Tsuji [Ts] applies the Kähler-Ricci flow and proves the existence of a canonical singular Kähler-Einstein metric on a minimal projective manifold of general type. It is the first attempt to relate the Kähler-Ricci flow and canonical metrics to the Minimal Model Program. Since then, many interesting results have been achieved in this direction. The long time existence of the Kähler-Ricci flow on a minimal projective manifold with any initial Kähler metric is established in [TiZha]. The regularity problem of the canonical singular Kähler-Einstein metrics on minimal projective manifolds of general type is intensively studied in [EGZ1] and [Z1] independently. If the minimal projective manifold has positive Kodaira dimension and is not of general type, it admits an Iitaka fibration over its canonical model. The authors define on the canonical model a new family of generalized Kähler-Einstein metrics twisted by a canonical form of Weil-Petersson type from the fibration structure ([SoT1], [SoT2]). It is also proved by the authors that the normalized Kähler-Ricci flow converges to such a canonical metric if the canonical bundle is semi-ample ([SoT1], [SoT2]). If K_X is not nef, the unnormalized Kähler-Ricci flow (1.1) must become singular at a certain time T_0 > 0. At time T_0, either the flow develops singularities on a subvariety of X, or X admits a Fano fibration and the flow is expected to collapse along the fibres. In the first case, the subvariety where the singularities appear is exactly where K_X is negative. The flow then might perform an analytic/geometric surgery equivalent to an algebraic surgery such as a divisorial contraction or a flip, and replace X by a new projective variety X′. Hopefully, the flow can be continued on X′, which usually has mild singularities. The main goal of the paper is to define the Kähler-Ricci flow on singular varieties such as X′ and to construct analytic surgeries for the Kähler-Ricci flow. If the second case occurs and the flow (1.1) collapses onto a new projective variety X′′, we expect the flow can be continued on the base X′′.
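For reference, the unnormalized Kähler-Ricci flow referred to as (1.1) throughout is, as written out explicitly later in the text,

\[
\frac{\partial \omega}{\partial t} = -\mathrm{Ric}(\omega), \qquad \omega(0,\cdot) = \omega_0 \in [H]. \tag{1.1}
\]

At the level of cohomology classes this is the linear ODE \(\tfrac{d}{dt}[\omega(t)] = [K_X]\), so \([\omega(t)] = [H] + t[K_X]\); this is why the first singular time in the theorems below is governed by how long the class H + tK_X remains nef.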
Heuristically, we can repeat the above procedures until either the flow exists for all time or it collapses to a point. If the flow exists for all time, it should converge, after normalization, to a generalized Kähler-Einstein metric on its canonical model or a Ricci-flat metric on its minimal model, if we assume the abundance conjecture. Eventually, we arrive at the final case when X is Fano; it is conjectured by the second named author that the unnormalized Kähler-Ricci flow becomes extinct in finite time after surgery if and only if X is birationally equivalent to a Fano variety [T3]. The conjecture is proved by the first named author for smooth solutions of the flow [So]. In general, the varieties obtained from divisorial contractions and flips have mild singularities. Also, we expect that the analytic surgeries performed by the Kähler-Ricci flow coincide with the algebraic surgeries given by divisorial contractions and flips. Therefore we cannot avoid singularities if the Kähler-Ricci flow is indeed to be continued through surgeries. We then must define the Kähler-Ricci flow on projective varieties with singularities. We confine ourselves to the category of singularities considered in the Minimal Model Program, because such singularities are rather mild and they do not get worse after divisorial contractions or flips are performed. The precise definition is given in Section 2.3 for a Q-factorial projective variety with log terminal singularities. Roughly speaking, if X is such a normal projective variety and π : X̃ → X is a resolution of singularities, then the pullback of any smooth volume form on X is integrable on the nonsingular model X̃. Our first theorem proves the existence and uniqueness for the Kähler-Ricci flow on projective varieties with log terminal singularities. Furthermore, it establishes a smoothing property for the Kähler-Ricci flow if the initial data is not smooth.

Theorem A.1 Let X be a Q-factorial projective variety with log terminal singularities and H an ample Q-divisor on X. Let T_0 = sup{t > 0 | H + tK_X is nef}. If ω_0 ∈ K_{H,p}(X) for some p > 1, then there exists a unique solution ω of the unnormalized weak Kähler-Ricci flow (1.1) starting with ω_0 for t ∈ [0, T_0). Furthermore, if Ω is a smooth volume form on X, then for any T ∈ (0, T_0), there exists C > 0 such that on [0, T] × X,

e^{−C/t} Ω ≤ ω^n ≤ e^{C/t} Ω.  (1.2)

The definitions are given in Section 4.1 for K_{H,p}(X) (Definition 4.2) and the weak Kähler-Ricci flow (Definition 4.3). Theorem A.1 shows that the Kähler-Ricci flow can start with a Kähler current which admits bounded local potential and an L^p Monge-Ampère mass for some p > 1. It gives the short time existence for the weak Kähler-Ricci flow. Furthermore, it smoothes out the initial current in the sense that the flow becomes smooth on the nonsingular part of X once t > 0, and the evolving metrics always admit bounded local potentials for any t ∈ (0, T_0). In particular, if X is nonsingular, the flow becomes the usual Kähler-Ricci flow with smooth solutions on (0, T_0) × X. Moreover, T_0 is exactly the first singular time for the unnormalized weak Kähler-Ricci flow. It is not clear how to define metrics on a singular variety X with reasonable regularity and curvature conditions in general. One natural choice is the restriction of the Fubini-Study metric ω_FS for some projective embedding of X, if X is normal and projective. It is indeed a smooth metric on X; however, even the scalar curvature of ω_FS might blow up near the singularities of X.
More seriously, (ω_FS)^n might not be a smooth volume form on X in general, although it is a smooth non-negative (n, n)-form on X (see Section 4.3 for more detailed discussions). Theorem A.1 shows that the volume form of the corresponding solutions of the weak Kähler-Ricci flow becomes equivalent to a smooth volume form immediately for t > 0. We speculate that the weak Kähler-Ricci flow produces metrics on X with reasonably good geometric properties. For example, given a normal projective orbifold X embedded in some projective space CP^N, the Fubini-Study metric ω_FS is in general not a smooth orbifold Kähler metric on X. If we start the Kähler-Ricci flow with ω_FS, the evolving metrics immediately become smooth orbifold Kähler metrics on X. We also remark that the assumption of Q-factoriality can be weakened. In fact, the theorems still hold with slight modification if both the initial divisor H and K_X are Q-Cartier. The following theorem shows that the Kähler-Ricci flow can be defined on smooth projective varieties even if the initial class is not Kähler.

Theorem A.2 Let X be a non-singular projective variety and H a big and semi-ample Q-divisor on X. Suppose that T_0 = sup{t > 0 | H + tK_X is nef} > 0. If ω_0 ∈ K_{H,p}(X) for some p > 1, then there exists a unique solution ω of the unnormalized weak Kähler-Ricci flow (1.1) for t ∈ [0, T_0).

The scalar curvature S is defined on a Zariski open set of X away from the exceptional locus of H, and Theorem A.2 shows that for each t ∈ (0, T_0) the scalar curvature is uniformly bounded. Theorem A.2 immediately implies the following corollary: at each t ∈ (0, T_0), the evolving metric has bounded scalar curvature on a projective variety which admits a crepant resolution.

Corollary A.3 Let X be a Q-factorial projective variety with crepant singularities, H an ample Q-divisor on X and T_0 = sup{t > 0 | H + tK_X is nef}. If ω_0 ∈ K_{H,p}(X) for some p > 1, then the unnormalized weak Kähler-Ricci flow (1.1) admits a unique solution ω for t ∈ [0, T_0), and for each t ∈ (0, T_0) the scalar curvature of ω(t, ·) is bounded on X.

From now on, we always assume that X is a Q-factorial projective variety with log terminal singularities and that H is an ample Q-divisor on X. Let T_0 = sup{t > 0 | H + tK_X is nef} be the first singular time of the unnormalized weak Kähler-Ricci flow (1.1) for t ∈ [0, T_0) starting with ω_0 ∈ K_{H,p}(X) for some p > 1. Theorem A.1 gives the short time existence of the weak unnormalized Kähler-Ricci flow, and the first singular time T_0 is exactly when the Kähler class of the evolving metrics stops being nef. If T_0 < ∞ and the limiting Kähler class is big, there is a contraction morphism π : X → Y uniquely associated to the limiting divisor H + T_0 K_X. Let NE(X) be the closure of the convex cone that consists of the classes of effective curves on X. If the morphism π contracts exactly one extremal ray of NE(X), the recent results of [BCHM] and [HM] show that either π contracts a divisor or there exists a unique flip associated to π (see Definition 5.4 for a flip). Since the weak unnormalized Kähler-Ricci flow cannot be continued on X at the singular time T_0, we have to replace X by another variety X′ and continue the flow on X′. Our next main result relates the finite time singularities of the unnormalized Kähler-Ricci flow (1.1) to divisorial contractions and flips in the Minimal Model Program.

Theorem B.1 Let ω be the unique solution of the unnormalized weak Kähler-Ricci flow (1.1) for t ∈ [0, T_0) starting with ω_0 ∈ K_{H,p}(X) for some p > 1. Suppose that H + T_0 K_X is big and that the morphism π : X → Y induced by the semi-ample divisor H_{T_0} = H + T_0 K_X contracts exactly one extremal ray of NE(X).
Then one of the following holds.

(i) If π is a divisorial contraction, the limit of ω(t, ·) as t → T_0 descends to an initial Kähler current ω_{Y,0} on Y, and the unnormalized weak Kähler-Ricci flow (1.1) can be continued on Y with the initial Kähler current ω_{Y,0}.

(ii) If π is a small contraction admitting a flip π⁺ : X⁺ → Y, the limit of ω(t, ·) as t → T_0 induces an initial Kähler current ω_{X⁺,0} on X⁺. Furthermore, ω_{X⁺,0} is smooth outside the singularities of X⁺ and where the flip is performed, and the unnormalized weak Kähler-Ricci flow (1.1) can be continued on X⁺ with the initial Kähler current ω_{X⁺,0}.

Here Exc(π) denotes the exceptional locus of the morphism π and X_reg denotes the nonsingular part of X. The current (π^{−1})_* ω_{X⁺,0} is defined by pulling back the local potentials of ω_{X⁺,0}; it is well-defined because the local potential of ω_{X⁺,0} can be chosen to be constant along each connected fibre of π⁺, and in particular (π^{−1})_* ω_{X⁺,0} ∈ K_{H_{T_0},p}(X). In summary, we have the following corollary.

Corollary B.2 The unnormalized Kähler-Ricci flow can be continued through divisorial contractions and flips.

The Minimal Model Program is successful in dimension three by Mori's work, and recent works (cf. [BCHM], [Si]) have led to a proof of the finite generation of canonical rings. The deformation of the Kähler classes along the unnormalized Kähler-Ricci flow is in line with the Minimal Model Program with Scaling (MMP with scaling) proposed in [BCHM]. It is also proved in [BCHM] that MMP with scaling terminates after finitely many divisorial contractions and flips if the variety X is of general type. A good initial divisor H means that there are finitely many singular times, and the contraction morphism at each singular time contracts only one extremal ray, if the unnormalized Kähler-Ricci flow (1.1) starts with H. We refer the readers to Section 5.1 for the precise definition (Definition 5.3). In particular, good initial divisors always exist if dim X = 2; in that case, if kod(X) ≥ 0, the normalized Kähler-Ricci flow with a good initial divisor converges to the canonical model or the minimal model of X coupled with a generalized Kähler-Einstein metric. It is possible that a general ample Q-divisor H on X is a good initial divisor whenever MMP with scaling terminates for X.

Theorem C.1 Let X be a projective Q-factorial variety of general type with log terminal singularities. If H is a good initial divisor on X, then the normalized weak Kähler-Ricci flow starting with any initial Kähler current in K_{H,p}(X) for some p > 1 exists for t ∈ [0, ∞), and it replaces X by its minimal model X_min after finitely many surgeries. Furthermore, the normalized Kähler-Ricci flow converges in distribution to the unique Kähler-Einstein metric ω_KE on its canonical model X_can.

Theorem C.1 gives the general philosophy of the analytic Minimal Model Program with Ricci Flow. The Kähler-Ricci flow deforms a given projective variety X to its minimal model X_min in finite time after finitely many metric surgeries. Then X_min is deformed to the canonical model X_can, coupled with a generalized Kähler-Einstein metric, by the flow after normalization. We also remark that the flow converges in the sense of distributions globally, and in the C∞-topology away from the singularities of X_min and the exceptional locus of the pluricanonical system. Certainly, it is desirable that the convergence should be in the sense of Gromov-Hausdorff. We also remark that when X is a nonsingular minimal model of general type, the convergence of the normalized Kähler-Ricci flow is proved in [Ts] and [TiZha].

The organization of the paper is as follows. In Section 2, we set up the basic notations for degenerate complex Monge-Ampère equations and algebraic singularities in the minimal model theory.
In Section 3, we solve a special family of degenerate parabolic Monge-Ampère equations on projective manifolds. In Section 4, we apply the results in Section 3 to prove Theorem A.1, Theorem A.2 and Corollary A.3 for the short time existence of the weak Kähler-Ricci flow. In Section 5, Theorem B.1 and Corollary B.2 are proved for the weak Kähler-Ricci flow through singularities. We also prove Theorem C.1 for long time existence and convergence. Finally, in Section 6, we propose an analytic Minimal Model Program with Ricci Flow.

Kodaira dimension and canonical measures

Let X be an n-dimensional compact complex projective manifold and L → X a holomorphic line bundle over X. Let N(L) be the semi-group defined by N(L) = {m ∈ N | H^0(X, L^m) ≠ 0}. Given any m ∈ N(L), the linear system |L^m| = PH^0(X, L^m) induces a rational map Φ_m : X ⇢ CP^{d_m}, where d_m = dim H^0(X, L^m) − 1; the Iitaka dimension κ(X, L) is defined to be the maximum of dim Φ_m(X) over m ∈ N(L), and κ(X, L) = −∞ if N(L) is empty.

Definition 2.2 Let X be a projective manifold and K_X the canonical line bundle over X. Then the Kodaira dimension kod(X) of X is defined to be kod(X) = κ(X, K_X).

The Kodaira dimension is a birational invariant of a projective variety, and the Kodaira dimension of a singular variety is equal to that of its smooth model.

Definition 2.3 Let L → X be a holomorphic line bundle over a compact projective manifold X. L is called nef if L · C ≥ 0 for any curve C on X, and L is called semi-ample if L^m is globally generated for some m > 0.

For any m ∈ N such that L^m is globally generated, the linear system |L^m| induces a holomorphic map Φ_m : X → CP^{d_m} defined by any basis of H^0(X, L^m). Let Y_m = Φ_m(X), so that Φ_m can be considered as a morphism from X onto Y_m. The following theorem is well-known (cf. [La, U]).

Theorem 2.1 Let L → X be a semi-ample line bundle over an algebraic manifold X. Then there is an algebraic fibre space Φ_∞ : X → Y such that Φ_m = Φ_∞ and Y_m = Y for any sufficiently large integer m with L^m being globally generated, where Y is a normal projective variety. Furthermore, there exists an ample line bundle A on Y such that L^m = (Φ_∞)^* A.

If L is semi-ample, the graded ring R(X, L) = ⊕_{m≥0} H^0(X, L^m) is finitely generated, and so it is the coordinate ring of Y. Let X be an n-dimensional projective manifold. It is recently proved in [BCHM] and [Si] independently that the canonical ring R(X, K_X) is finitely generated if X is of general type. Then the canonical ring induces a rational map from X to its unique canonical model X_can. The following theorem is proved in [EGZ1] when X is of general type (also see [Ts], [TiZha] for minimal models of general type) and in [SoT2] when X admits an Iitaka fibration over X_can.

Theorem 2.2 Let X be an n-dimensional projective manifold with R(X, K_X) finitely generated.

1. If kod(X) = n, then there exists a unique Kähler current ω_KE ∈ [K_{X_can}] with bounded local potential satisfying the Kähler-Einstein equation Ric(ω_KE) = −ω_KE.

2. If 0 < kod(X) < n, then X admits a rational fibration over X_can whose general fibre has Kodaira dimension 0, and there exists a unique Kähler current ω_can ∈ [K_{X_can} + L_{X/X_can}] such that Ric(ω_can) = −ω_can + ω_WP, where L_{X/X_can} is the relative dualizing sheaf and ω_WP is a canonical current of Weil-Petersson type induced from the Calabi-Yau fibration.

The closed current ω_WP is exactly the pullback of the Weil-Petersson metric on the moduli space of Calabi-Yau varieties associated to the fibres if X is a smooth minimal model and K_X is semi-ample. When X is not minimal, the general fibre is not necessarily a Calabi-Yau variety. One can still define ω_WP as an L²-metric on the deformation space for varieties of Kodaira dimension 0. We refer the readers to the precise definition in [SoT2].
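As a quick illustration of Definition 2.2 above (a standard example, not taken from the paper): for a smooth projective curve C of genus g,

\[
\mathrm{kod}(C) = \kappa(C, K_C) =
\begin{cases}
-\infty, & g = 0 \ (\deg K_C = -2,\ \text{no pluricanonical sections}),\\
0, & g = 1 \ (K_C \ \text{trivial}),\\
1, & g \ge 2 \ (K_C \ \text{ample}),
\end{cases}
\]

matching the Fano (g = 0), Calabi-Yau (g = 1), and general type (g ≥ 2) trichotomy that drives the behavior of the flow.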
If kod(X) = 0, then H^0(X, mK_X) is nontrivial for some m > 0, so there exists a holomorphic volume form Ω = (η ⊗ η̄)^{1/m} for some holomorphic section η ∈ H^0(X, mK_X). It is proved in [SoT2] that for any ample divisor H on X, there exists a Kähler current ω_CY ∈ [H] with bounded local potential such that (ω_CY)^n = cΩ for some positive constant c > 0. Therefore Ric(ω_CY) = 0 outside the stable base locus of the pluricanonical system of X. The existence of singular Ricci-flat Kähler metrics is proved in [EGZ1] on singular Calabi-Yau varieties. Such metrics are the unique canonical metrics on projective varieties of non-negative Kodaira dimension, and the generalized Kähler-Einstein equations can be viewed as an analytic version of the adjunction formula. They are candidates for the limiting metrics of the Kähler-Ricci flow.

Definition 2.4 Let X be an n-dimensional Kähler manifold and ω a closed semi-positive (1, 1)-current on X. ω is called a Kähler current if it is big. If ω is a Kähler current with bounded local potential on X, the corresponding volume current ω^n is uniquely well-defined by the standard pluripotential theory.

Definition 2.5 Let ω be a Kähler current with bounded local potential on X. A quasi-plurisubharmonic function associated to ω is an upper semi-continuous function ϕ : X → [−∞, ∞) such that ω + √−1∂∂ϕ ≥ 0. We denote by PSH(X, ω) the set of all quasi-plurisubharmonic functions associated to ω on X.

In [Ko1], Kolodziej proves the fundamental theorem on the existence of continuous solutions to the Monge-Ampère equation (ω + √−1∂∂ϕ)^n = F ω^n, where ω is a Kähler form and F ∈ L^p(X, ω^n) for some p > 1 is non-negative. Its generalization was independently carried out in [Zh] and [EyGuZe1]. They prove that there is a bounded solution when ω is semi-positive and big. A detailed proof for the continuity of the solution was given in [DZ] (also see [Zh] for an earlier sketch of proof). These generalizations are summarized in the following.

Theorem 2.3 Let X be an n-dimensional Kähler manifold and let ω be a Kähler current with bounded local potential. Then there exists a unique solution ϕ ∈ PSH(X, ω) ∩ L^∞(X) solving the Monge-Ampère equation (ω + √−1∂∂ϕ)^n = F Ω, where Ω > 0 is a smooth volume form on X, F ∈ L^p(X, Ω) for some p > 1, and ∫_X F Ω = ∫_X ω^n.

In [Ko2], Kolodziej proves the stability result for solutions of the complex Monge-Ampère equations for Kähler classes. It is later improved by Dinew and Zhang [DZ] (also see [DP] for more general cases) for big and semi-ample classes. The following is a version of their result.

Theorem 2.4 Let X be an n-dimensional compact Kähler manifold. Suppose L → X is a semi-ample line bundle and ω ∈ c_1(L) is a smooth Kähler current. Let Ω be a smooth volume form on X. For any non-negative functions f and g ∈ L^p(X, Ω) for some p > 1 with ∫_X f Ω = ∫_X g Ω, there exist ϕ and ψ ∈ PSH(X, ω) ∩ L^∞(X) solving (ω + √−1∂∂ϕ)^n = f Ω and (ω + √−1∂∂ψ)^n = g Ω. Then for any ǫ > 0, there exist C > 0 and an exponent γ > 0, depending on ǫ, p, ||f||_{L^p(X,Ω)} and ||g||_{L^p(X,Ω)}, such that

||ϕ − ψ||_{L^∞(X)} ≤ C ||f − g||^γ_{L^p(X,Ω)}.  (2.4)

Theorem 2.4 can be generalized to the case where the right hand side of the Monge-Ampère equations contains terms such as e^ϕ, by Kolodziej's argument in [Ko2]. Theorem 2.4 also holds uniformly for certain families of ω, such as ω + ǫχ with a fixed Kähler metric χ and ǫ ∈ [0, 1]. Also, sharper exponents are obtained in [DZ] and [DP].

Singularities

We will have to study the behavior of the Kähler-Ricci flow on normal projective varieties with singularities, because the original smooth manifold might be replaced by varieties with mild singularities through surgery along the flow.
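Returning for a moment to the Ricci-flat case above, a one-line check (a standard computation, not verbatim from the paper) that (ω_CY)^n = cΩ forces Ric(ω_CY) = 0 off the base locus:

\[
\mathrm{Ric}(\omega_{CY}) \;=\; -\sqrt{-1}\,\partial\bar{\partial}\,\log \omega_{CY}^{\,n}
\;=\; -\sqrt{-1}\,\partial\bar{\partial}\,\log\!\big(c\,|\eta|^{2/m}\big) \;=\; 0
\]

wherever the pluricanonical section η does not vanish, since log|η|² is pluriharmonic there; this is exactly why Ric(ω_CY) = 0 holds outside the stable base locus.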
The pluripotential theory on normal varieties has been extensively studied (cf. [FN]). Let X be a normal variety. A function f on X is continuous (or smooth) if f can be extended to a continuous (or smooth) function in a local embedding from X to C^N. A plurisubharmonic function is an upper semi-continuous function ϕ : X → [−∞, ∞) which locally extends to a plurisubharmonic function in a local embedding from X to C^N. It is known that any bounded plurisubharmonic function on X_reg, the nonsingular part of X, can be uniquely extended to a plurisubharmonic function on X. Let X be a normal projective variety and ω a semi-positive closed (1, 1)-current on X. We let PSH(X, ω) be the set of all upper semi-continuous functions ϕ : X → [−∞, ∞) such that ω + √−1∂∂ϕ ≥ 0. In this paper, we confine our discussions to projective varieties with mild singularities which are allowed in the Minimal Model Program in algebraic geometry.

Definition 2.6 [KMM] Let X be a normal projective variety such that K_X is a Q-Cartier divisor. Let π : X̃ → X be a resolution and {E_i}_{i=1}^p the irreducible components of the exceptional locus Exc(π) of π. Then there exists a unique collection a_i ∈ Q such that K_X̃ = π^*K_X + Σ_{i=1}^p a_i E_i. Then X is said to have

• terminal singularities if a_i > 0, for all i.
• canonical singularities if a_i ≥ 0, for all i.
• log terminal singularities if a_i > −1, for all i.
• log canonical singularities if a_i ≥ −1, for all i.

Terminal, canonical and log terminal singularities are always rational, while log canonical singularities are not necessarily rational. We can always assume that the resolution π is good enough that the exceptional locus is a simple normal crossing divisor. Kodaira's lemma states that for any big and nef divisor H on X, there always exists an effective divisor E such that H − ǫE is ample for any sufficiently small ǫ > 0. Let π : X̃ → X be a birational morphism between two projective varieties and Exc(π) the exceptional locus of π, where π fails to be an isomorphism. The following proposition is a special case of Kodaira's lemma, where the support of E exactly coincides with Exc(π) (see [D]).

Proposition 2.1 If X is normal and Q-factorial, then for any ample Q-divisor H on X, there exists an effective divisor E on X̃ whose support is Exc(π) such that π^*H − ǫE is ample for sufficiently small ǫ > 0.

It is also well-known that Q-factoriality is preserved after divisorial contractions and flips in the Minimal Model Program. Q-factoriality is a necessary condition in our discussion because we need the canonical divisor to be Q-Cartier in order to define a volume form appropriately.

3 Monge-Ampère flows

3.1 Monge-Ampère flows with rough initial data

In this section, we will prove the smoothing property of the Kähler-Ricci flow with rough initial data. We will assume that X is an n-dimensional Kähler manifold. The following proposition shows that any element in PSH_p(X, ω, Ω) for p > 1 can be uniformly approximated by smooth quasi-plurisubharmonic functions.

Proposition 3.1 Let ϕ_0 ∈ PSH_p(X, ω_0, Ω) for some p > 1. Then there exists a sequence ϕ_{0,j} ∈ PSH(X, ω_0) ∩ C^∞(X) such that lim_{j→∞} ||ϕ_{0,j} − ϕ_0||_{L^∞(X)} = 0.

Proof Recall that C^∞(X) is dense in L^p(X). Therefore there exists a sequence of positive functions {F_j} ⊂ C^∞(X) such that ∫_X F_j Ω = ∫_X F Ω and lim_{j→∞} ||F_j − F||_{L^p(X)} = 0. We then consider the solutions of the Monge-Ampère equations (ω_0 + √−1∂∂ϕ_{0,j})^n = F_j Ω. Since F_j ∈ C^∞(X) and F_j > 0, ϕ_{0,j} ∈ PSH(X, ω_0) ∩ C^∞(X). Without loss of generality, we can assume sup_X ϕ_{0,j} = sup_X ϕ_0. By the stability theorem of Kolodziej [Ko2] (Theorem 2.4), we have ||ϕ_{0,j} − ϕ_0||_{L^∞(X)} ≤ C ||F_j − F||^γ_{L^p(X)} for some γ > 0, where C only depends on ||F_j||_{L^p(X)} and ||F||_{L^p(X)}. The proposition follows easily.
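To illustrate Definition 2.6 above with a standard example (not from the paper): for the A_1 surface singularity X = C²/{±1}, the minimal resolution π : X̃ → X has a single exceptional curve E with E² = −2, and

\[
K_{\tilde{X}} = \pi^* K_X + 0\cdot E,
\]

so X is canonical (hence log terminal) but not terminal. More generally, for the quotient C²/Z_k with the diagonal action (z, w) ↦ (ζz, ζw), the exceptional curve of the minimal resolution has E² = −k and, by adjunction, discrepancy a = −1 + 2/k > −1, so these quotient singularities are always log terminal.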
2 Now we have a sequence of smooth Kähler forms and ω t = ω 0 +tχ. By simple calculation, one can show that the unnormalized Kähler-Ricci flow with the initial Kähler metric ω 0,j is equivalent to the following Monge-Ampère flow to be the first time when the Kähler class stops being positive along the Kähler-Ricci flow. It is well-known that T 0 > 0 and by the result of [TiZha], the Monge-Ampère flow exists for [0, T 0 ). The following lemma shows that the Monge-Ampére flows starting with ϕ 0,j approximate the same flow starting with ϕ 0 . Lemma 3.1 For any 0 < T < T 0 , there exists C > 0 such that for t ∈ [0, T ], Proof Applying the maximum principle to ϕ j , we can show that there exists C > 0 such that Let ψ j,k = ϕ j − ϕ k . Then ψ j,k satisfies the following equation (3.6) By the maximum principle, We also can bound the volume form along the Monge-Ampère flow, even though the initial volume form is only in L p (X). Lemma 3.2 For any (3.7) Proof Let ∆ j be the Laplacian operator associated to the Kähler form By the maximum principle, H + is uniformly bounded from above for t ∈ [0, T ]. Then H − (t, ·) tends to ∞ uniformly as t → 0 + and there exist constants C 1 , C 2 and C 3 > 0 such that Then at the minimal point of H − , the maximum principle gives ω n j ≥ C 4 t n Ω. It easily follows that H − is uniformly bounded from below for t ∈ [0, T ]. Since ϕ j is uniformly bounded for t ∈ [0, T ], the lemma is proved. 2 The following smoothing lemma shows that the approximating metrics become uniformly bounded immediately along the Monge-Ampère flow. Lemma 3.3 For any 0 < T < T 0 , there exists C > 0 such that for t ∈ (0, T ], (3.8) Proof This is a parabolic Schwarz lemma similar to [Y1], [LiYa]. Straightforward computation from [SoT1] shows that for any t ∈ [0, T ], there exist uniform constants C 1 and C 2 > 0 such that and so H is uniformly bounded from above at (t 0 , z 0 ). Since H(0, ·) = 0 and ϕ 0,j is both uniformly bounded, H is uniformly bounded for t ∈ [0, T ] and so we prove the lemma. 2 Let g (j) be the Kähler metric associated to ω j and ∇ (j) be the gradient operator associated to the Kähler form ω j . As in [Y2], set Lemma 3.4 For any T < T 0 , there exist constants λ > 0 and C > 0 such that for t ∈ [0, T ], Proof Since all the second order terms are bounded by certain power of e 1 t . By the computation in [PSS], there exist sufficiently large α and β > 0 such that By choosing sufficiently large A and β > α , we have for sufficiently large A > 0. By the maximum principle and Lemma 3. Proposition 3.2 For any 0 < ǫ < T < T 0 and k ≥ 0, there exists C ǫ,T,k > 0 such that, For any ǫ > 0, there exists J > 0 such for any j > J, Combining the above estimates together, for any t ∈ [0, δ] and z ∈ X This completes the proof of the lemma. 2 Now we are ready to show the existence and uniqueness for the Monge-Ampère flow starting with ϕ 0 ∈ P SH p (X, ω 0 , Ω). Proposition 3.3 ϕ is the unique solution of the following Monge-Ampère equation Proof It suffices to prove the uniqueness. Suppose there exists another solution (3.13) By the maximum principle, max X ψ(t, ·) is decreasing and min X ψ(t, ·) is increasing on (0, T 0 ). Since both of max X ψ(t, ·) and min X ψ(t, ·) are continuous on [0, . The proposition follows easily. 2 With the above preparations, we can show the smoothing property for the Kähler-Ricci flow with rough initial data. In particular, ω(t, ·) converges in the sense of distribution to ω ′ 0 as t → 0. 
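Before the proof, it may help to record the standard scalar reduction underlying the Monge-Ampère flows of this section (a sketch, with χ = √−1∂∂ log Ω as in the text). Writing ω_t = ω_0 + tχ and ω = ω_t + √−1∂∂ϕ, the flow (1.1) is equivalent to the Monge-Ampère flow

\[
\frac{\partial \varphi}{\partial t} \;=\; \log \frac{(\omega_t + \sqrt{-1}\,\partial\bar{\partial}\varphi)^n}{\Omega},
\qquad \varphi(0,\cdot) = \varphi_0;
\]

indeed, applying √−1∂∂ to both sides gives ∂ω/∂t = χ + √−1∂∂ log(ω^n/Ω) = χ + (−Ric(ω) − χ) = −Ric(ω), since Ric(ω) = −√−1∂∂ log ω^n and √−1∂∂ log Ω = χ.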
Proof The unnormalized Kähler-Ricci flow ∂ω (3.14) for some smooth function f (t) on (0, T 0 ). We can assume that φ(0, ·) = ϕ 0 by subtracting a constant because φ(t, ·) converges to a continuous φ 0 (·) in C 0 (X) as t → 0, and φ 0 differs from ϕ 0 by a constant. Then we consider the function ψ = φ − ϕ, with ψ(0, ·) = 0. By the same argument as that in the proof of Proposition 3.3 we can show that for 0 < t 1 < t 2 < T 0 , and by letting t Theorem 3.1 shows that the Kähler-Ricci flow smooths out the initial semi-positive closed (1, 1)-current ω with bounded local potential and ω n ∈ L p (X) for some p > 1. It improves a result of Chen-Tian-Zhang [CTZ] (also see [CT] and [CD]), where p > 3. We remark that the condition that p > 1 is essential for later estimates and geometric applications. Monge-Ampère flows with degenerate initial data In this section, we will investigate a family of Monge-Ampère flows with singular data on a smooth projective variety. The existence and uniqueness for the solutions will be proved. We start with two conditions prescribing the singularity and degeneracy of the data that will be considered along certain Monge-Ampère flows. These conditions arise naturally in the geometric setting in later discussions. Condition A. Let X be an n-dimensional projective manifold. Let L 1 → X be a big and semiample line bundle over X and L 2 → X be a line bundle such that [L 1 + ǫL 2 ] is still semi-ample for ǫ > 0 sufficiently small. Let ω 0 ∈ c 1 (L 1 ) be a smooth semi-positive closed (1, 1)-form on X and χ ∈ c 1 (L 2 ) a smooth closed (1, 1)-form. Let ω t = ω 0 + tχ. We assume that ω 0 at worst vanishes along a projective subvariety of X to a finite order, that is, there exists an effective divisor E 0 on X such that for any fixed Kähler metric ϑ, Such an ω 0 always exists. For example, let m be sufficiently large such that (L 1 ) m is globally generated and let {S (m) j } dm j=0 be a basis of H 0 (X, (L 1 ) m ). We can then let where E i and F j are irreducible components of E with simple normal crossings. In addition, we assume a i ≥ 0 and 0 < b j < 1. Let Ω be a semi-positive (n, n)-form on X such that X Ω > 0 and where S E and S F are the multi-valued holomorphic defining sections of E and F , h E and h F are smooth hermitian metrics on the line bundles associated to E and F . Note that the condition b j ∈ (0, 1) makes Ω an integrable (n, n)-form on X. Furthermore, Ω Θ is in L p (X, Θ) for some p > 1. Since L 1 is big and semi-ample, by Kodaira's lamma, there exists an effective Q-divisorẼ such that [L 1 ]− ǫ[Ẽ] is ample for any sufficiently small rational ǫ > 0. Without loss of generality, we can always assume that the support ofẼ contains E 0 , E and F , i.e., Let SẼ be the defining section ofẼ and hẼ a smooth hermitian metric on the line bundle associated to [Ẽ] such that for sufficiently small ǫ > 0, We can also scale hẼ and assume |SẼ| 2 hẼ ≤ 1 on X. Let ω t = ω 0 + tχ. We consider the following Monge-Ampère flow with the initial data ϕ 0 ∈ P SH(X, ω 0 ) ∩ C ∞ (X). (3.16) The equation (3.16) is not only degenerate in the sense that [ω t ] is not neccesarily Kähler but also that Ω has zeros and poles along E and F . The goal of the following discussion is to prove the existence and uniqueness of the solution for the Monge-Ampère flow (3.16) with appropriate assumptions. Let The following theorem is the main result of this section. Theorem 3.2 Let X be an n-dimensional projective manifold. Suppose Condition A and Condition B are satisfied. 
Then for any ϕ 0 ∈ P SH(X, (3.17) Remark 3.1 For any fixed T ∈ (0, T 0 ), we can assume that ω t ≥ ǫω 0 for all t ∈ [0, T ], where ǫ > 0 is sufficiently small and it depends on T . Recall that L 1 + tL 2 is semi-ample and big for all t ∈ [0, T 0 ). We fix T ′ ∈ (T, T 0 ). Then and [ω 0 + T 0 χ] is semi-positive and big. Then there exists φ ∈ P SH(X, It is easy to check that ω ′ t ≥ T 0 −t T 0 ω 0 and Condition A and Condition B are still satisfied for ω ′ 0 and Ω ′ . From now on, we fix T ∈ (0, T 0 ) and assume without loss of generality from the previous remark that ω t is bounded from below by ǫω 0 for sufficiently small ǫ > 0. In order to prove Theorem 3.2, we have to perturb equation (3.17) in order to obtain smooth approximating solutions. Let ω t,s = ω 0 + tχ + sϑ and be the perturbations of ω t and Ω for s, w, r ∈ [0, 1]. In particular, ω t,0 = ω t and Ω 0,0 = Ω. Since ϑ is Kähler, ω t,s is Kähler for s > 0 and t ∈ [0, T 0 ). Then we consider the following well-defined family of Monge-Ampère flows with the fixed initial data ϕ 0 . (3.18) The standard argument gives the following lemma as [ω s,t ] stays positive on [0, T 0 ). 3.19) Proof There exists a constant C > 0 such that Since F w,r has at worst poles alongẼ and the vanishing order of |S F | 2 h F is strictly less than 2, by choosing p − 1 > 0 sufficiently small, X (F w,r ) p ϑ n is uniformly bounded from above. 2 We can apply the results for degenerate complex Monge-Ampère equations as F w,r is uniformly bounded in L p (X) for some p > 1. Lemma 3.8 For any 0 < T < T 0 , there exists C > 0 such that for all s, w, r ∈ (0, 1], (3.20) It is easy to see that α s,w,r (t) is uniformly bounded for t ∈ [0, T ]. Notice that By the maximum prinicple, X ϕ s,w,r Ω w,r ≤ C for a uniform constant C that depends on T . It follows that sup X ϕ s,w,r is uniformly bounded above. Now it suffices to obtain a uniform lower bound for ϕ s,w,r . Let θ = δω 0 , where δ > 0 is sufficiently small such that 2θ ≤ ω t for all t ∈ [0, T ]. By the choice of hẼ, for any sufficiently small ǫ > 0. We consider the following family of Monge-Ampère equations for w, r ∈ [0, 1]. with the normalization conditions [θ] n = C w,r X Ω w,r and sup X φ = 0. By letting ǫ → 0, we have The uniform bound for φ w,r gives the uniform lower bound for ϕ s,w,r . Combined with the upper bound for ϕ s,w,r , we have completed the proof. 2 Lemma 3.9 For any T ∈ (0, T 0 ), there exist C, α > 0 such that for all t ∈ [0, T ] and s, w, r ∈ (0, 1] Proof Let ∆ s,w,r be the Laplace operator with respect to the Kähler metric ω s,w,r . Notice that ∂ ∂tφ s,w,r = ∆ s,w,rφs,w,r + tr ωs,w,r (χ). is uniformly bounded from above for A > 0 sufficiently large. Then for A sufficiently large, we have By the maximum principle, H + is uniformly bounded above and so there exist C 1 and C 2 such thatφ s,w,r ≤ C 1 + C 2 log |SẼ| 2 hẼ . To estimate the lower bound ofφ s,w,r , we define for sufficiently large A. Then straightforward calculation shows that there exist constants C 3 , Then a similar argument by the maximum principle gives the lower bound for H − andφ s,w,r . 2 Lemma 3.10 For any T ∈ (0, T 0 ), there exist C, α > 0 such that for all t ∈ [0, T ] and s, r, w ∈ (0, 1] |tr ϑ (ω s,w,r )| ≤ C|SẼ| −2α hẼ . Proposition 3.4 For any T ∈ (0, T 0 ), K ⊂⊂ X \Ẽ and k > 0, there exists C k,K,T such that Proof The proof follows from standard Schauder's estimates. 2 Our goal is to construct a solution by the approximating solutions ϕ s,w,r . Lemma 3.11 The following monotonicity conditions hold for ϕ s,w,r . 
Proof The proof is a straightforward application of the maximum principle. 2 . Then ϕ s,w ∈ P SH(X, ω t,s ) ∩ L ∞ (X) ∩ C ∞ (X \Ẽ) and ϕ s,w,r converges to ϕ s,w on X \Ẽ locally in C ∞ -topology by estimates from Lemma 3.8 and Proposition 3.4. The following monotonicity also holds and follows easily from the above results. Furthermore, for any The following corollary is then immediate. Corollary 3.1 ϕ satisfyies the following Monge-Ampère flow (3.23) In order to prove the uniqueness of the solution of the Monge-Ampère flow (3.23), we consider a family of new Monge-Ampère flows with one more parameter. Lemma 3.14 For any T ∈ (0, T 0 ), there exist C and δ 0 > 0 such that for s, w, r ∈ (0, 1], s,w,r be the Laplace operator with respect to ω s,w,r = −ϕ 0 when t = 0 and It is easy to see that ∂ ∂δ ϕ (δ) s,w,r is uniformly bounded above by the maximum principle. By the similar argument as before, ϕ (δ) s,w,r is bounded in L ∞ (X) uniformly in s, w, r, δ and t ∈ [0, T ]. 2 Now we are able to prove our main result for the existence and uniqueness of the Monge-Ampère solution. Suppose there is another solution ϕ ′ satisfying the Monge-Ampère flow (3.17) such that First, we show that ϕ ′ ≤ ϕ. Monge-Ampère flows with rough and degenerate initial data In this section, we will generalize Theorem 3.2 for Monge-Ampère flows with rough initial data. The main result will be applied to the Kähler-Ricci flow on singular projective varieties with surgery. Let X be an n-dimensional projective manifold. Let L 1 and L 2 be two holomorphic line bundles on X satisfying Condition A along with ω 0 ∈ c 1 (L 1 ) and χ ∈ c 1 (L 2 ) being smooth closed (1, 1)-forms. Let Ω be a non-negative (n, n)-form on X satisfying Condition B. Let for p > 0 and T 0 = sup{t > 0 | L 1 + tL 2 is semi-ample}. Since L 1 is big and semi-ample, we denote Exc(L 1 ) be the exceptional locus for the linear system |mL 1 | for sufficiently large m. Without loss of generality, we can assume thatẼ as defined in Section 3.2 contains Exc(L 1 ). Lemma 3.20 There exists C > 0 such that for s, w, r ∈ (0, 1] and δ ∈ [−δ 0 , δ 0 ], ||ϕ (δ) s,w,r (t, ·)|| L ∞ ([0,T ]×X) ≤ C. (3.39) Proof It can be proved by the same argument as that in the proof of Lemma 3.8. 2 Lemma 3.21 There exists C > 0 such that on [0, T ]×X, for all s, w,r ∈ (0, 1] and δ ∈ [−δ 0 , δ 0 ], − C ≤ tφ (δ) s,w,r ≤ C. (3.40) Proof The upper bound can be proved using the same argument as that in Lemma 3.2 by applying the maximum principle on In order to prove the lower bound. We consider the following family of Monge-Ampère equations [ω 0,s ] n R X Ωw,r and sup X φ s,w,r = 0. Then A s,w,r is uniformly bounded from above and below for s, w and r ∈ (0, 1]. As A s,w,r Ω w,r is uniformly bounded in L p (X, Θ) for some p > 1, φ s,w,r uniformly bounded in L ∞ (X) for s, w, r ∈ (0, 1]. Let s,w,r be the Laplace operator associated to ω (δ) s,w,r . Then there exist C 1 , C 2 and C 3 > 0 such that Applying the maximum principle, H − is uniformly bounded from below since both ϕ (δ) s,w,r and φ s,w,r are uniformly bounded in L ∞ (X). We are done. 2 We have the following volume estimate. 3. ||ϕ|| L ∞ ((0,T ]×X)) is bounded for each T < T 0 . Furthermore, for any T ∈ (0, T 0 ), there exists C > 0 such that on [0, T ] × X, (3.49) Proof It suffices to show (2) and the volume estimate. On any K ⊂⊂ X \Ẽ, For any ǫ > 0, let s be sufficiently small such that Fix such s. There exists t 0 > 0 such that The volume estimate (3.49) follows from (3.41) by letting s, w, r and δ → 0. 
□

Theorem 3.3 Let X be an n-dimensional algebraic manifold. Let L_1 and L_2 be two holomorphic line bundles on X satisfying Condition A, and let ω_0 ∈ c_1(L_1) and χ ∈ c_1(L_2) be smooth closed (1, 1)-forms. Let Ω be an (n, n)-form on X satisfying Condition B. Let T_0 = sup{t > 0 | L_1 + tL_2 is semi-ample}. Then for any ϕ_0 ∈ PSH_p(X, ω_0, Ω) for some p > 1, there exists a unique solution ϕ of the Monge-Ampère flow (3.17) for t ∈ [0, T_0).

4.1 Notations

Let X be a Q-factorial projective variety with at worst log terminal singularities. We denote the singular set of X by X_sing and let X_reg = X \ X_sing. Let π : X̃ → X be the resolution of singularities and K_X̃ = π^*K_X + Σ_i a_i E_i, where the E_i are the irreducible components of the exceptional locus Exc(π) of π. Since X is log terminal, a_i > −1. Since K_X is a Q-Cartier divisor, there exists a positive m ∈ Z such that mK_X is Cartier.

Definition 4.1 Ω is said to be a smooth volume form on X if Ω is a smooth (n, n)-form on X such that for any z ∈ X, there exists an open neighborhood U of z such that Ω|_U = f_U (α ∧ ᾱ)^{1/m}, where f_U is a smooth positive function on U and α is a local generator of mK_X on U.

Let D be an ample divisor on X̃ and h_D a hermitian metric with positive curvature equipped on the line bundle associated to D. Let ι : X → CP^N be any embedding of X into a projective space and ω_0 the pullback of a smooth Kähler metric from CP^N in a multiple of the Kähler class O(1). Then ω_0 is a smooth Kähler metric on X. Since [π^*ω_0] is the pullback of an ample class on CP^N, it is a big and semi-ample divisor on X̃. By Kodaira's lemma, there exists an effective divisor Ẽ on X̃ such that [π^*ω_0] − ǫ[Ẽ] is ample for any ǫ > 0 sufficiently small. Furthermore, since X is Q-factorial, we can assume by Proposition 2.1 that the support of Ẽ is contained in the exceptional locus of π. There exists a hermitian metric h_Ẽ equipped on the line bundle associated to Ẽ such that for sufficiently small ǫ > 0, π^*ω_0 − ǫRic(h_Ẽ) > 0. Let S_D and S_Ẽ be the defining sections of D and Ẽ.

Definition 4.2 Let X be a Q-factorial projective variety with log terminal singularities and H a big and semi-ample Q-divisor on X. Let ω_0 ∈ [H] be a smooth closed (1, 1)-form and Ω a smooth volume form on X. We define, for p ∈ (0, ∞],

K_{H,p}(X) = { ω_0 + √−1∂∂ϕ | ϕ ∈ PSH(X, ω_0) ∩ L^∞(X), (ω_0 + √−1∂∂ϕ)^n = FΩ for some F ∈ L^p(X, Ω) }.

The definition of K_{H,p}(X) does not depend on the choice of the smooth closed (1, 1)-form ω_0 ∈ [H] and the smooth volume form Ω. We define the following notion of the weak Kähler-Ricci flow on projective varieties with singularities.

Definition 4.3 Let X be a Q-factorial projective variety with log terminal singularities and ω_0 ∈ [H] a closed semi-positive (1, 1)-current on X associated to a big and semi-ample Q-divisor H on X. Suppose that T_0 = sup{t > 0 | H + tK_X is nef} > 0. A family of closed positive (1, 1)-currents ω(t, ·) on X for t ∈ [0, T_0) is called a solution of the unnormalized weak Kähler-Ricci flow if the following conditions hold: ω(t, ·) has bounded local potentials for each t ∈ (0, T_0); ω restricts to a smooth solution of ∂ω/∂t = −Ric(ω) on (0, T_0) × (X \ D) for an analytic subset D ⊂ X containing X_sing; and

ω(0, ·) = ω_0, on X.  (4.3)

In particular, when H is ample, T_0 is always positive and X \ D = X_reg. We would like to prove the existence and uniqueness of the weak Kähler-Ricci flow on singular varieties if the initial metric satisfies certain regularity conditions. The following theorems are well-known as the rationality theorem and base-point-free theorem in the Minimal Model Program (see [KMM], [D]).

Theorem 4.1 Let X be a projective manifold such that K_X is not nef. Let H be an ample Q-divisor and let λ = max{t ∈ R | H + tK_X is nef}. (4.4) Then λ ∈ Q.

Theorem 4.2 Let X be a projective manifold. Let D be a nef Q-divisor such that aD − K_X is nef and big for some a > 0. Then D is semi-ample.
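A standard example (not from the paper) illustrating the rationality theorem and the role of T_0: take X = CP² and H = L the hyperplane class, so K_X = −3L. Then

\[
H + tK_X = (1 - 3t)\,L,
\]

which is nef exactly for t ≤ 1/3, so T_0 = 1/3 ∈ Q; along the flow the class [ω(t)] = (1 − 3t)[ω_FS] shrinks to zero at t = T_0, which is the Fano (collapsing) case described in the introduction.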
Existence and uniqueness of the weak Kähler-Ricci flow

Let X be a Q-factorial projective variety with log terminal singularities. Let H be an ample Q-divisor on X, ω_0 ∈ [H] a smooth Kähler metric, Ω a smooth volume form on X, and χ = √−1∂∂ log Ω. Consider the ordinary differential equation for the Kähler class defined by the unnormalized Ricci flow on X,

d/dt [ω(t, ·)] = [K_X], [ω(0, ·)] = [H],  (4.5)

whose solution is [ω(t, ·)] = [H + tK_X]. Heuristically, if the Kähler-Ricci flow exists for t ∈ [0, T), the unnormalized Kähler-Ricci flow should be equivalent to the Monge-Ampère flow

∂ϕ/∂t = log ((ω_t + √−1∂∂ϕ)^n / Ω), ϕ(0, ·) = ϕ_0, where ω_t = ω_0 + tχ.  (4.6)

Let π : X̃ → X be the resolution of singularities as defined in Section 4.1. In order to define the Monge-Ampère flow on X, one might want to lift the flow to the nonsingular model X̃ of X. However, ω_0 is not necessarily Kähler on X̃, and Ω in general vanishes or blows up along the exceptional divisor of π on X̃ unless the resolution π is crepant; hence the lifted flow is degenerate near the exceptional locus. So we have to perturb the Monge-Ampère flow (4.6) and obtain uniform estimates so that the flow can be pushed down to X. Let T_0 = sup{t > 0 | H + tK_X is nef}. Then for any t ∈ [0, T_0), [ω_t] is ample, and T_0 > 0 is a rational number or T_0 = ∞ by the rationality theorem 4.1. The base-point-free theorem 4.2 implies the following important proposition.

Theorem 4.3 Let ϕ_0 ∈ PSH_p(X, ω_0, Ω) for some p > 1. Then the Monge-Ampère flow (4.6), lifted to X̃, admits a unique solution ϕ̃ which is smooth on (0, T_0) × (X̃ \ Ẽ), and for each t ∈ [0, T_0) the function ϕ̃(t, ·) descends to a function in PSH(X, ω_t) on X.

Proof [π^*ω_0] corresponds to a big and semi-ample divisor on X̃, and [π^*ω_0] − ǫ[π^*χ] is also big and semi-ample for sufficiently small ǫ > 0. The adjunction formula gives K_X̃ = π^*K_X + Σ_i a_i E_i + Σ_j b_j F_j, where E_i and F_j are irreducible components of the exceptional locus with a_i ≥ 0 and b_j > −1. Note that π^*Ω vanishes only on each E_i, to order a_i, and π^*Ω has poles along those F_j corresponding to the b_j. Then π^*ω_0, π^*χ and π^*Ω satisfy Condition A and Condition B. Furthermore, π^*ϕ_0 ∈ PSH_p(X̃, π^*ω_0, π^*Ω), and so the assumptions in Theorem 3.3 are satisfied. The first part of the theorem is then an immediate corollary of Theorem 3.3. The singular set Ẽ can be chosen to be contained in the exceptional locus Exc(π) of π, since X is Q-factorial. Also, ϕ̃ must be constant along each component of Exc(π), as [π^*ω_t] is trivial along each component of the exceptional divisors. So it descends to a function in PSH(X, ω_t) on X. □

Theorem 4.4 Let X be a Q-factorial projective variety with log terminal singularities and H an ample Q-divisor on X. Let T_0 = sup{t > 0 | H + tK_X is nef}. If ω_0 ∈ K_{H,p}(X) for some p > 1, then there exists a unique solution ω of the unnormalized weak Kähler-Ricci flow for t ∈ [0, T_0). Furthermore, if Ω is a smooth volume form on X, then for any T ∈ (0, T_0), there exists a constant C > 0 such that on [0, T] × X,

e^{−C/t} Ω ≤ ω^n ≤ e^{C/t} Ω.  (4.8)

Proof It suffices to prove the uniqueness, as the existence and the volume estimate follow easily from Theorem 4.3 and Theorem 3.3. Since X is Q-factorial, π^*[ω_0] − ǫ[Exc(π)] is ample for ǫ > 0 sufficiently small. So we can choose Ẽ to be contained in Exc(π). Hence F descends to X_reg and √−1∂∂F = 0 on X_reg. For each t ∈ (0, T_0), F is smooth on X_reg; therefore F is constant on each curve in X which does not intersect X_sing. On the other hand, for any two generic points z and w on X, there exists a curve C joining z and w without intersecting X_sing, since codim(X_sing) ≥ 2. So F(z) = F(w), as F is constant on C. Then F is constant on X_reg, since F is continuous on X_reg. By modifying ϕ by a function only of t, ϕ would satisfy the Monge-Ampère flow (4.7).
The theorem follows from the uniqueness of the solution ϕ. The volume estimate also follows from Theorem 3.3. □

We immediately have the following long time existence result, generalizing the case of nonsingular minimal models due to Tian-Zhang [TiZha].

Corollary 4.1 Let X be a minimal model with log terminal singularities and H an ample Q-divisor on X. Then T_0 = sup{t > 0 | H + tK_X is nef} = ∞, and the unnormalized weak Kähler-Ricci flow starting with ω_0 ∈ K_{H,p}(X) for some p > 1 exists for t ∈ [0, ∞).

Kähler-Ricci flow on projective varieties with orbifold or crepant singularities

Given a normal projective variety X, very little is known about how to construct "good" Kähler metrics on X with reasonable curvature conditions. In general, the restriction of Fubini-Study metrics ω_FS on X from ambient projective spaces behaves badly near the singularities of X; even the scalar curvature of ω_FS can blow up. In particular, ω^n is not necessarily equivalent to a smooth volume form on X. For example, let X be a surface containing a curve C with self-intersection number −2 and Y the surface obtained from X by contracting C. Then Y has an isolated orbifold singularity. Let ω be a smooth Kähler metric and Ω a smooth volume form on Y. Then ω^n/Ω = 0 at the orbifold singularity. This suggests that one should look at the category of smooth orbifold Kähler metrics on Y instead of smooth Kähler metrics from ambient spaces. As it turns out, the Kähler-Ricci flow produces Kähler currents whose Monge-Ampère mass is equivalent to a smooth volume form on singular varieties, by Theorem 4.4. It is desirable that the Kähler-Ricci flow indeed improves the regularity of the initial data. In the case when X has orbifold or crepant singularities, we show that at least the scalar curvature of the Kähler currents is bounded. In particular, if X has only orbifold singularities, the Kähler-Ricci flow immediately smoothes out the initial Kähler current.

Theorem 4.5 Let X be a Q-factorial projective normal variety with orbifold singularities. Let H be an ample Q-divisor on X and T_0 = sup{t > 0 | H + tK_X is nef}. If ω_0 ∈ K_{H,p}(X) for some p > 1, then there exists a unique solution ω of the unnormalized weak Kähler-Ricci flow for t ∈ [0, T_0). Furthermore, ω(t, ·) is a smooth orbifold Kähler metric on X for all t > 0, and so the weak Kähler-Ricci flow becomes the smooth Kähler-Ricci flow on X immediately when t > 0.

Proof X is automatically log terminal under the assumptions in the theorem if it only admits orbifold singularities. The theorem can be proved by the same argument as in Theorem 3.1. We leave the details for the readers as an exercise. □

Theorem 4.5 can also be applied to the Kähler-Ricci flow on projective manifolds whose initial class is not Kähler.

Theorem 4.6 Let X be a smooth projective variety. Let H be a big and semi-ample Q-divisor on X. Suppose that T_0 = sup{t > 0 | H + tK_X is nef} > 0. If ω_0 ∈ K_{H,p}(X) for some p > 1, then there exists a unique solution ω of the unnormalized weak Kähler-Ricci flow for t ∈ [0, T_0).

Claim 1 For any 0 < t_0 < T < T_0, there exist A and B > 0 such that a gradient estimate holds for all s ∈ (0, 1] on [t_0, T] × X, where ∇_s is the gradient operator associated to ω̃_s.

Claim 2 For any 0 < t_0 < T < T_0, there exists C > 0 such that for all s ∈ (0, 1] the estimate (4.14) holds on [t_0, T] × X; in particular, there exists C > 0 such that the resulting bound is uniform in s. Straightforward calculations establish the evolution inequalities used below. Notice that ∂ϕ_s/∂t is uniformly bounded on [t_0, T] for any 0 < t_0 < T < T_0 and s ∈ (0, 1]. Then a similar argument to the proof of Theorem 5.1 can be applied.
Namely, one can apply the maximum principle for (t − t 0 )H and (t 0 )K, where If we choose A > 0 sufficiently large, Hence (t − t 0 )H is uniformly bounded on [t 0 , T ] × X for any s ∈ (0, 1]. If we choose A and B > 0 sufficiently large, Hence (t − t 0 )K is uniformly bounded on [t 0 , T ] × X for any s ∈ (0, 1]. Here we make use of Claim 1 that T trω s (χ) = trω s (ω ′ 0 + T χ) − trω s (ω ′ 0 ) is uniformly bounded on [t 0 , T ] × X uniformly for s ∈ (0, 1]. Also the term The theorem is then proved by letting s → 0. 2 Now we shall prove the two claims in the proof of Theorem 4.6. Proof of Claim 1 Without loss of generality, we let π : X → CP Nm be the morphism induced by mH and mω ′ 0 is the pullback of the Fubini-Study metric on CP Nm if m is sufficiently large. Notice that for t ∈ [t 0 , T ], H + T K X is still semi-ample and big, so m ′ (H + T K) induces a morphism π ′ : X → CP N m ′ . We can again assume that ω 0 + T χ is the pullback of the Fubini-Study metric on CP N m ′ . The curvature of ω ′ 0 on CP Nm and the curvature of ω ′ 0 +T χ on CP N m ′ are both bounded. Then it becomes a straightforward calculation from [SoT1]. 2 Proof of Claim 2 This can be proved by the parabolic Schwarz lemma from [SoT1]. We apply the maximum principle for t log trω s (ω ′ 0 ) − Aϕ s and t log trω s (ω ′ 0 + T χ) − Aϕ s for sufficiently large A so that both terms are uniformly bounded on [0, T ] × X uniformly for s ∈ (0, 1]. The claim then easily follows. 2 Theorem 4.6 shows that the Kähler-Ricci flow can be defined even if the initial smooth Kähler current is in the semi-ample cone of divisors. The following theorem is an immediate corollary of Theorem 4.6. Theorem 4.7 Let X be a Q-factorial projective variety with crepant singularities. Let H be an ample Q-divisor on X and If ω 0 ∈ K H,p (X) for some p > 1, then there exists a unique solution ω of the unnormalized weak Kähler-Ricci flow for t ∈ [0, T 0 ). Furthermore, for any t ∈ (0, T 0 ), there exists C(t) > 0 such that the scalar curvature S(ω(t, ·)) is bounded by C(t) ||S(ω(t, ·))|| L ∞ (X) ≤ C(t). (4.18) Proof Let Ω be a smooth volume form on X. Let π :X → X be a crepant resolution of X. Then π * Ω is again a smooth volume form onX. Then we can apply Theorem 4.6. We remark that it might be interesting to remove the dependence on the t for the scalar curvature bound if T 0 = ∞. 2 It is shown in [Z2] that the scalar curvature is uniformly bounded along the normalized Kähler-Ricci flow on smooth manifolds of general type. On the other hand, the scalar will in general blow up if the Kähler-Ricci flow develops finite time singularities (see [Z3]). Minimal Model Program with Scaling Definition 5.1 Let X be a projective variety and N 1 (X) Z the group of numerically equivalent 1-cycles (two 1-cycles are numerically equivalent if they have the same intersection number with every Cartier divisor). Let N 1 (X) R = N 1 (X) Z ⊗ Z R. We denote by N E(X) the set of classes of effective 1-cycles. N E(X) is convex and we let N E(X) be the closure of N E(X) in the Euclidean topology. A special case of the Minimal Model Program is proposed in [BCHM] and plays an important role for the termination of flips. We briefly explain the Minimal Model Program with Scaling below. Definition 5.2 (MMP with scaling) 1. We start with a pair (X, H), where X is a projective Q-factorial variety X with log terminal singularities and H is a big and semi-ample Q-divisor on X. 3. 
Otherwise, there is an extremal ray R of the cone of curves NE(X) on which K X is negative and λ 0 H + K X is zero. So there exists a contraction π : X → Y of R.

• If π is a divisorial contraction, we replace X by Y and let H Y be the strict transformation of λ 0 H + K X by π. Then we return to 1. with (Y, H Y ).

• If π is a small contraction, we replace X by its flip X + and let H X + be the strict transformation of λ 0 H + K X by the flip. Then we return to 1. with (X + , H X + ).

• If dim Y < dim X, then X is a Mori fibre space, i.e., the fibres of π are Fano. Then we stop.

The following theorem is proved in [BCHM].

Theorem 5.1 If X is of general type, the Minimal Model Program with Scaling terminates in finitely many steps.

In general, the contraction of the extremal ray might not be the same as the contraction induced by the semi-ample divisor λ 0 H + K X . We define the following special ample divisors so that at each step, there is only one extremal ray contracted by the morphism induced by λ 0 H + K X .

Definition 5.3 Let X be a projective Q-factorial variety with log terminal singularities. An ample Q-divisor H on X is called a good initial divisor if the following conditions are satisfied.

1. Let X 0 = X and H 0 = H. The MMP with scaling terminates in finitely many steps by replacing (X 0 , H 0 ) by (X 1 , H 1 ), ..., (X m , H m ) until X m+1 is a minimal model or X m is a Mori fibre space.

2. Let λ i be the nef threshold for each pair (X i , H i ) for i = 1, ..., m. Then the contraction induced by the semi-ample divisor λ i H i + K X i contracts exactly one extremal ray.

It might be possible that good initial divisors are generic if MMP with scaling holds for any pair (X, H). As will be seen, good initial divisors simplify the analysis of the surgeries along the Kähler-Ricci flow, though such an assumption is not necessary. We will explain this in detail in Section 5.5.

Now we relate the Kähler-Ricci flow to MMP with scaling. Consider the unnormalized Kähler-Ricci flow ∂ω/∂t = −Ric(ω) on X with the initial Kähler current ω 0 ∈ [H] for an ample divisor H on X. Let T 0 = sup{t ≥ 0 | H + tK X is nef}. By the rationality theorem 4.1, T 0 = ∞ or T 0 is a positive rational number. In particular, if X is a minimal model, then T 0 = ∞. In fact, T 0 = 1/λ 0 is the inverse of the nef threshold. The following theorem is a natural generalization of the long time existence theorem of Tian and Zhang [TiZha] for the Kähler-Ricci flow on smooth minimal models.

Theorem 5.2 Let X be an n-dimensional Q-factorial projective variety with log terminal singularities and nef K X . For any ample Q-divisor H on X and ω 0 ∈ K H,p (X) with p > 1, the unnormalized weak Kähler-Ricci flow starting with ω 0 exists for t ∈ [0, ∞).

Suppose that X is not minimal, so that T 0 < ∞. Then H + T 0 K X is nef, and the weak Kähler-Ricci flow exists uniquely for t ∈ [0, T 0 ). By Kawamata's base point free theorem, H + T 0 K X is semi-ample, and hence the ring R(X, H + T 0 K X ) = ⊕ ∞ m=0 H 0 (X, m(H + T 0 K X )) is finitely generated.

• If H + T 0 K X is big, then R(X, H + T 0 K X ) induces a birational morphism π : X → Y . For a generic ample divisor H, the morphism π contracts exactly one extremal ray of NE(X). We discuss the following two cases according to the size of the exceptional locus of π.

1. π is a divisorial contraction, that is, the exceptional locus Exc(π) is a divisor whose image under π has codimension at least two. In this case, Y is still Q-factorial and has at worst log terminal singularities.
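The relation T 0 = 1/λ 0 quoted above is a one-line computation; here is a sketch, using the nef threshold λ 0 = inf{λ > 0 | λH + K X is nef} from Definition 5.2:

```latex
% For t > 0, scaling by the positive factor 1/t preserves nefness, so
%   H + tK_X \ \text{nef} \iff \tfrac{1}{t}H + K_X \ \text{nef} \iff \tfrac{1}{t} \ge \lambda_0 .
% Hence
\[
  T_0 \;=\; \sup\{\, t > 0 \mid H + tK_X \ \text{nef} \,\} \;=\; \frac{1}{\lambda_0},
\]
% with the convention T_0 = \infty when \lambda_0 = 0, i.e. when K_X is nef.
```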
2. π is a small contraction, that is, the exceptional locus Exc(π) has codimension at least two. In this case, Y has rather bad singularities and K Y is no longer a Q-Cartier divisor. The solution to such a small contraction is to replace X by a birationally equivalent variety with singularities milder than those of Y .

Definition 5.4 (see [KMM]) Let π : X → Y be a small contraction such that −K X is π-ample. A variety X + together with a proper birational morphism π + : X + → Y is called a flip of π if π + is also a small contraction and K X + is π + -ample. Here X + is again Q-factorial and has at worst log terminal singularities.

• If H + T 0 K X is not big, then the Kodaira dimension satisfies 0 ≤ κ = kod(H + T 0 K X ) < n, and X is a Mori fibre space admitting a Fano fibration over a normal variety Y of dimension κ. In particular, Y is Q-factorial and has log terminal singularities.

We will discuss in the following sections the behavior of the Kähler-Ricci flow at the singular time T 0 according to the above situations.

Estimates

In this section, we assume that T 0 < ∞ and H + T 0 K X is big. Let Ω be a smooth volume form on X and χ = √−1∂∂ log Ω ∈ [K X ]. Consider the Monge-Ampère flow (5.2) associated to the unnormalized Kähler-Ricci flow on X with the initial Kähler form ω 0 , where p > 1, ω t = ω 0 + tχ and ω = ω t + √−1∂∂ϕ. Since H + T 0 K X is big and semi-ample, the linear system |m(H + T 0 K X )| for sufficiently large m induces a morphism π : X → Y ⊂ CP N m . Let ω Y be the pullback of a multiple of the Fubini-Study metric form on CP N m with ω Y ∈ [H + T 0 K X ]. There exists a resolution of singularities µ : X̃ → X of X and of the exceptional locus of π, with π ◦ µ : X̃ → Y , satisfying the following conditions. There exists an effective divisor E Y on X̃ such that µ * ω Y − ǫ[E Y ] is Kähler for any sufficiently small ǫ > 0, and the support of E Y coincides with the exceptional locus of µ. Let S E Y be the defining section of the line bundle associated to [E Y ] and h E Y the hermitian metric such that, for any sufficiently small ǫ > 0, µ * ω Y − ǫ Ric(h E Y ) > 0. Let Exc(π) be the exceptional locus of π. Then we have the following uniform estimates: there exists C > 0 such that ||ϕ(t, ·)|| L ∞ (X) ≤ C on [0, T 0 ). Furthermore, for any K ⊂⊂ X \ Exc(π) and k ≥ 0, there exists C K,k > 0 such that ||ϕ(t, ·)|| C k (K) ≤ C K,k .

Proof We lift the Monge-Ampère flow (5.2) to X̃. The proof of the L ∞ -estimate proceeds in the same way as in the proof of Lemma 3.8, since [ω t ] is big and semi-ample for all t ∈ [0, T 0 ]. The C 2 -estimate on X̃ follows the same argument as in Lemma 3.10, which is valid on X̃ \ E Y . Since the support of µ(E Y ) is contained in Exc(π), the C 2 -estimate holds on X reg \ Exc(π). We leave the details to the readers as an exercise. □

Theorem 5.4 There exists C > 0 such that on [0, T 0 ) × X,

t ∂ϕ/∂t ≤ C.

Proof Set H = t ∂ϕ/∂t − ϕ + ǫ log |S E Y | 2 h E Y . Let ∆ be the Laplace operator with respect to the pullback of ω. Then H is smooth outside E Y . As H| t=0 = −ϕ 0 + ǫ log |S E Y | 2 h E Y is bounded from above and, for each t ∈ (0, T 0 ), the maximum of H can only be achieved on X̃ \ E Y , H ≤ H| t=0 is uniformly bounded above; by letting ǫ → 0, there exists C > 0 such that t ∂ϕ/∂t ≤ C. We are done. □

Corollary 5.1 We consider the unique solution ω of the unnormalized weak Kähler-Ricci flow on X starting with an initial current in K H,p (X) for some p > 1. If H + T 0 K X is big, then ω(t, ·) converges to a Kähler current ω T 0 ∈ K H+T 0 K X ,∞ (X) in the C ∞ (X reg \ Exc(π))-topology. That is, there exists C > 0 such that (ω T 0 ) n /Ω ≤ C for a fixed smooth volume form Ω on X.

By Corollary 5.1, H is trivial over the fibres, and so ω Y is trivial when restricted to each fibre. Let ω T 0 = ω Y + √−1∂∂ϕ T 0 ; then ϕ T 0 must be constant on each fibre, as the fibres of π are connected.
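For the reader's convenience, here is the standard computation showing why the potential flow above is equivalent to the unnormalized Kähler-Ricci flow; this is a sketch, with the flow written in the standard normalization (which we assume to be the content of the display (5.2)) and with the usual local convention Ric(ω) = −√−1∂∂ log ω n :

```latex
% Monge-Ampere form of the unnormalized flow:
%   \frac{\partial \varphi}{\partial t}
%     = \log \frac{(\omega_t + \sqrt{-1}\partial\bar\partial \varphi)^n}{\Omega},
%   \qquad \omega_t = \omega_0 + t\chi, \quad \chi = \sqrt{-1}\partial\bar\partial \log \Omega .
% Differentiating \omega = \omega_t + \sqrt{-1}\partial\bar\partial\varphi in t gives
\[
  \frac{\partial \omega}{\partial t}
   = \chi + \sqrt{-1}\partial\bar\partial \log \frac{\omega^n}{\Omega}
   = \chi - \mathrm{Ric}(\omega) - \chi
   = -\mathrm{Ric}(\omega),
\]
% using Ric(\omega) = -\sqrt{-1}\partial\bar\partial \log \omega^n and
% \sqrt{-1}\partial\bar\partial \log \Omega = \chi.
```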
Therefore ϕ T 0 descends onto Y with ϕ T 0 ∈ PSH(Y, ω Y ) ∩ L ∞ (Y ). So the limiting Kähler current ω T 0 descends onto Y as a semi-positive closed (1, 1)-current.

Extending Kähler-Ricci flow through singularities by divisorial contractions

In this section, we will prove that the weak Kähler-Ricci flow can be continued through divisorial contractions. We assume that π : X → Y is a divisorial contraction and the fibres of π are connected. It is well known that Y is again a Q-factorial projective variety with at worst log terminal singularities if X is.

Proposition 5.1 Let Ω Y be a smooth volume form on Y and H Y = π * (H + T 0 K X ). Then ω Y,0 = (π −1 ) * ω T 0 ∈ K H Y ,p (Y ) for some p > 1.

Proof Obviously, ω T 0 has bounded local potential, and the restriction of ω T 0 has constant local potential along each fibre of π. So ω T 0 descends to Y , and (π −1 ) * ω T 0 is well-defined and admits bounded local potential on Y . Let F = (ω T 0 ) n /Ω Y . It suffices to show F ∈ L p (Y, Ω Y ) for some p > 1. There exists C > 0 such that F ≤ C Ω/Ω Y . Since Ω/Ω Y has at worst poles, ∫ Y F p Ω Y < ∞ for p − 1 > 0 sufficiently small. □

Theorem 5.5 Let X be a Q-factorial projective variety with log terminal singularities and H be an ample Q-divisor on X. Let T 0 = sup{t > 0 | H + tK X is nef} be the first singular time. Suppose that the semi-ample divisor H + T 0 K X induces a divisorial contraction π : X → Y . Then ω(t, ·) converges to the Kähler current ω T 0 as t → T 0 , which descends to ω Y,0 = (π −1 ) * ω T 0 on Y . Furthermore, the unnormalized weak Kähler-Ricci flow can be continued on Y with the initial Kähler current ω Y,0 .

Proof Since H Y is the strict transformation of H by π and ω Y,0 admits bounded local potential, ω Y,0 ∈ K H Y ,p ′ (Y ) for some p ′ > 1 by Proposition 5.1. Then the Kähler-Ricci flow can start with ω Y,0 on Y uniquely, as H Y is ample. □

Extending Kähler-Ricci flow through singularities by flips

In this section, we will prove that the weak Kähler-Ricci flow can be continued through flips. We assume that π : X → Y is a small contraction and that there exists a flip π̄ = (π + ) −1 ◦ π : X ⇢ X + . Then X + is Q-factorial and has at worst log terminal singularities.

Proposition 5.2 The limiting Kähler current ω T 0 descends onto Y and can then be pulled back to X + by π + . Furthermore, there exists C > 0 such that

((π + ) * ω T 0 ) n / (π + ) * Ω ≤ C. (5.8)

□

Theorem 5.6 Let X be a Q-factorial projective variety with log terminal singularities and H be an ample Q-divisor on X. Let T 0 = sup{t > 0 | H + tK X is nef} be the first singular time. Suppose that the semi-ample divisor H + T 0 K X induces a small contraction π : X → Y and there exists a flip

π̄ = (π + ) −1 ◦ π : X ⇢ X + . (5.10)

Let ω be the unique solution of the unnormalized weak Kähler-Ricci flow for t ∈ [0, T 0 ) starting with ω 0 ∈ K H,p (X) for some p > 1. Then there exists ω X + ,0 ∈ K H X + ,p ′ (X + ) such that ω(t, ·) converges to (π̄ −1 ) * ω X + ,0 in the C ∞ (X reg \ Exc(π))-topology, where H X + is the strict transformation of H by π̄. Furthermore, ω X + ,0 is smooth outside the singularities of X + and the locus where the flip is performed, and the unnormalized weak Kähler-Ricci flow can be continued on X + with the initial Kähler current ω X + ,0 .

Proof Since H X + is the strict transformation of H by π̄ and ω X + ,0 admits bounded local potential, ω X + ,0 ∈ K H X + ,p ′ (X + ) for some p ′ > 1 by Proposition 5.2. Then the Kähler-Ricci flow can start with ω X + ,0 on X + uniquely, as H X + is big and semi-ample and H X + + ǫK X + is ample for sufficiently small ǫ > 0. □

Long time existence assuming MMP

As proved in Sections 5.3 and 5.4, the Kähler-Ricci flow can flow through divisorial contractions and flips. If the exceptional loci of the contracted extremal rays do not meet each other, Theorems 5.5 and 5.6 still hold.
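The key analytic input behind both continuation results is the L p control of the pushed-forward volume density from Proposition 5.1. For convenience, the integrability step sketched there reads as follows (a sketch; the pointwise bound F ≤ C Ω/Ω Y is the one stated in the proof above):

```latex
% With F = \omega_{T_0}^n / \Omega_Y on Y and F \le C\,\Omega/\Omega_Y:
\[
  \int_Y F^{p}\,\Omega_Y
  \;\le\; C^{p} \int_Y \Bigl(\frac{\Omega}{\Omega_Y}\Bigr)^{p}\,\Omega_Y
  \;=\; C^{p} \int_Y \Bigl(\frac{\Omega}{\Omega_Y}\Bigr)^{p-1}\,\Omega
  \;<\; \infty
\]
% for p - 1 > 0 sufficiently small, since \Omega/\Omega_Y has poles of at most
% finite order along the image of the exceptional locus.
```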
However, at the singular time T 0 , the morphism π : X → Y induced by the semi-ample divisor H + T 0 K X might contract more than one extremal ray. It simplifies the analysis to assume the existence of a good initial divisor as in Definition 5.3, so as to avoid complicated contractions.

Theorem 5.7 Let X be a Q-factorial projective variety with log terminal singularities. If there exists a good initial divisor H on X, then either X does not admit a minimal model or the unnormalized weak Kähler-Ricci flow has long time existence for any Kähler current ω 0 ∈ K H,p (X) with p > 1, after finitely many surgeries through divisorial contractions and flips.

Proof Assume X admits a minimal model and let X 0 = X and H 0 = H. Since H is a good initial divisor, by MMP with scaling, at each singular time the morphism induced by the semi-ample divisor is always a divisorial contraction or a flipping contraction. More precisely, suppose the Kähler-Ricci flow performs surgeries and replaces (X 0 , H 0 ) by a finite sequence of (X i , H i ) at each singular time T i , i = 1, ..., m, and X m+1 is a minimal model of X. If λ i is the nef threshold for (X i , H i ) as in Definition 5.2, i = 1, ..., m, then λ i > 0. At T i , the morphism induced by the semi-ample divisor H i + T i K X i contracts exactly one extremal ray, and so it must be a divisorial contraction or a flip. By Theorem 5.5 and Theorem 5.6, the Kähler-Ricci flow with the pair (X i , H i ) is replaced by the one with the pair (X i+1 , H i+1 ), with H i+1 being the strict transform of H i + T i K X i , until X is finally replaced by its minimal model X m+1 , and the Kähler-Ricci flow exists for all time afterwards by Theorem 4.4, as K X m+1 is nef. □

If H is not a good initial divisor, the surgery at the finite singular time could be complicated; a detailed speculation is given in Section 6.2.

Convergence on projective varieties of general type

Let X be a minimal model of general type with log terminal singularities, so K X is big and nef. Let H be an ample Q-divisor on X and ω 0 ∈ K H,p (X). We consider the normalized Kähler-Ricci flow on X:

∂ω/∂t = −Ric(ω) − ω, ω(0, ·) = ω 0 . (5.11)

Let Ω be a smooth volume form on X and χ = √−1∂∂ log Ω ∈ c 1 (K X ). Then the Kähler-Ricci flow (5.11) is equivalent to the following Monge-Ampère flow:

∂ϕ/∂t = log ((ω t + √−1∂∂ϕ) n / Ω) − ϕ, (5.12)

where ω t = e −t ω 0 + (1 − e −t )χ. Note that K X is semi-ample, as X is a minimal model and the abundance conjecture holds for general type. The linear system |mK X | for sufficiently large m > 0 induces a morphism π : X → X can , where X can is the canonical model of X. Without loss of generality, we can always assume that χ ≥ 0 and χ is big. Furthermore, we can assume ω 0 ≥ ǫχ for sufficiently small ǫ > 0, since H is ample. The long time existence is guaranteed by Theorem 4.4, since K X is nef and T 0 = ∞.

Lemma 5.1 There exists C > 0 such that ||ϕ(t, ·)|| L ∞ (X) ≤ C for all t ∈ [0, ∞).

Proof Let X̃ be a nonsingular model of X. Without loss of generality, we can consider the Monge-Ampère flow (5.12) on X̃ by pullback, together with the smooth approximations of the Monge-Ampère flow discussed in Section 3.2. Let ψ ǫ = ϕ s,w,r − φ s,w,r − ǫ log |S Ẽ | h Ẽ , where Ẽ is a divisor whose support contains the exceptional locus of the resolution of X̃ over X and χ − ǫ Ric(h Ẽ ) > 0 for sufficiently small ǫ > 0. Then a similar argument by the maximum principle as in Section 3.3 shows that ψ ǫ is uniformly bounded from below for all t ∈ [0, ∞) and for all sufficiently small ǫ > 0. Then, by letting ǫ → 0, there exists C > 0 such that for t ∈ [0, ∞) and s, w, r ∈ (0, 1], ϕ s,w,r ≥ −C.
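As a quick consistency check (a sketch that verifies the reference form ω t against the normalized flow (5.11) at the level of cohomology classes):

```latex
% The normalized flow \partial_t \omega = -\mathrm{Ric}(\omega) - \omega gives,
% in cohomology, the linear ODE  \frac{d}{dt}[\omega] = c_1(K_X) - [\omega].
% The stated reference form solves it:
\[
  \frac{d}{dt}\,\omega_t
  = -e^{-t}\omega_0 + e^{-t}\chi
  = \chi - \bigl(e^{-t}\omega_0 + (1 - e^{-t})\chi\bigr)
  = \chi - \omega_t ,
\]
% with [\chi] = c_1(K_X), and \omega_t \to \chi \in c_1(K_X) as t \to \infty.
```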
Therefore ϕ is uniformly bounded from below for all t ∈ [0, ∞) by its definition. The uniform upper bound of ϕ can be obtained by a similar argument. □

Lemma 5.2 Let X • = X reg \ Exc(π). For any K ⊂⊂ X • , t 0 > 0 and k > 0, there exists C K,k,t 0 > 0 such that for t ∈ [t 0 , ∞), ||ϕ(t, ·)|| C k (K,ω 0 ) ≤ C K,k,t 0 . (5.16)

Proof We can assume that X is nonsingular, at the cost of ω 0 and Ω being degenerate. We first have to show that tr ω 0 (ω) is uniformly bounded on K. This is achieved by arguments similar to those for Lemma 3.10. Then the higher order estimates follow by standard arguments. □

Let H = e t ∂ϕ/∂t − Aϕ + ǫ log |S Ẽ | 2 h Ẽ − Ant, where A > 0 is sufficiently large such that Aω 0 ≥ χ and ǫ > 0 is chosen to be sufficiently small. Then there exists C > 0, for all sufficiently small ǫ > 0, such that

Since the maximum can only be achieved on X • and H| t=0 is bounded from above, by the maximum principle there exists C > 0 such that on [t 0 , ∞) × X, H ≤ C(t + 1).

Proof Let ϕ ∞ be the limit of ϕ as t → ∞. By Corollary 5.2, ∂ϕ/∂t converges to 0, and so ϕ ∞ must satisfy equation (5.20). The uniqueness of ϕ ∞ follows from the uniqueness of the solution to equation (5.20), as ϕ ∞ ∈ PSH(X, χ) ∩ L ∞ (X). □

The Kähler current χ + √−1∂∂ϕ ∞ is exactly the pullback of the unique Kähler-Einstein metric ω KE on the canonical model X can of X in Theorem 2.2. The following theorem then follows from Proposition 5.4.

Theorem 5.8 Let X be a minimal model of general type with log terminal singularities. For any ample Q-divisor H on X, the normalized weak Kähler-Ricci flow converges to the unique Kähler-Einstein metric ω KE on the canonical model X can for any initial Kähler current in K H,p (X) with p > 1.

We have the following general theorem, combining Theorem 5.7 and Theorem 5.8, if the general type variety is not minimal.

Theorem 5.9 Let X be a projective Q-factorial variety of general type with log terminal singularities. If there exists a good initial divisor H on X, then the normalized weak Kähler-Ricci flow starting with any initial Kähler current in K H,p (X) with p > 1 exists for t ∈ [0, ∞) and replaces X by its minimal model X min after finitely many surgeries. Furthermore, the normalized Kähler-Ricci flow converges to the unique Kähler-Einstein metric ω KE on its canonical model X can .

Analytic Minimal Model Program with Ricci Flow

In this section, we lay out the program relating the Kähler-Ricci flow and the classification of projective varieties, following [SoT1] and [T3]. The new insight is that the Ricci flow is very likely to deform a given projective variety to its minimal model and eventually to its canonical model, coupled with a canonical metric of Einstein type, in the sense of Gromov-Hausdorff. We will start the discussion with the case of projective surfaces.

Results on surfaces

A smooth projective surface is minimal if it does not contain any (−1)-curve. Let X 0 be a projective surface of non-negative Kodaira dimension. If X 0 is not minimal, then the unnormalized Kähler-Ricci flow starting with any Kähler metric ω 0 in the class of an ample divisor H 0 has a smooth solution until the first singular time T 0 = sup{t > 0 | H 0 + tK X 0 is nef}. The limiting semi-ample divisor H 0 + T 0 K X 0 induces a morphism π 0 : X 0 → X 1 by contracting finitely many (−1)-curves. X 1 is smooth, and there exists an ample Q-divisor H 1 on X 1 such that H 0 + T 0 K X 0 = π * 0 H 1 . Then, by Theorem 5.5, the unnormalized Kähler-Ricci flow can be continued through the contraction π 0 at time T 0 .
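Given that ∂ϕ/∂t → 0 along the normalized Monge-Ampère flow (5.12), the stationary equation that ϕ ∞ should satisfy is presumably the following degenerate Monge-Ampère equation (a sketch; we assume the standard normalization to be the content of (5.20)):

```latex
% Setting \partial\varphi/\partial t = 0 in (5.12), with \omega_t \to \chi as t \to \infty:
\[
  (\chi + \sqrt{-1}\partial\bar\partial \varphi_\infty)^n \;=\; e^{\varphi_\infty}\,\Omega ,
  \qquad \varphi_\infty \in \mathrm{PSH}(X,\chi) \cap L^\infty(X).
\]
% Taking -\sqrt{-1}\partial\bar\partial\log of both sides shows that
% \omega_\infty = \chi + \sqrt{-1}\partial\bar\partial\varphi_\infty satisfies
% Ric(\omega_\infty) = -\omega_\infty on the regular part, i.e. it is
% Kaehler-Einstein of negative type, matching \omega_{KE} on X_can.
```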
Since there are finitely many (−1)-curves on X, the unnormalized Kähler-Ricci flow will arrive at a minimal surface X min , or it collapses a CP 1 fibration in finite time, after repeating the same surgery finitely many times. It is still a largely open question whether the Kähler-Ricci flow converges to the new surface in the sense of Gromov-Hausdorff at each surgery. The only confirmed case is the Kähler-Ricci flow on CP 2 blown up at one point. More precisely, it is shown in [SW] that the unnormalized Kähler-Ricci flow on CP 2 blown up at one point converges to CP 2 in the sense of Gromov-Hausdorff if the initial Kähler class is appropriately chosen and the initial Kähler metric satisfies the Calabi symmetry. Then the flow can be continued on CP 2 and eventually will be contracted to a point in finite time. This shows that the Kähler-Ricci flow deforms the non-minimal surface to a minimal surface in the sense of Gromov-Hausdorff. Similar behavior is also shown in [SW] for higher-dimensional analogues of the Hirzebruch surfaces. This leads us to propose, in the following section, a conjectural program for general projective varieties.

After getting rid of all (−1)-curves, we can focus on the minimal surfaces, divided into ten classes by the Enriques-Kodaira classification.

If kod(X min ) = 2, X min is a minimal surface of general type and its canonical model X can is an orbifold surface obtained by contracting all the (−2)-curves on X min . It is shown in [TiZha] that the normalized Kähler-Ricci flow ∂ω/∂t = −Ric(ω) − ω converges in the sense of distributions to the pullback of the orbifold Kähler-Einstein metric on the canonical model X can .

If kod(X min ) = 1, X min is a minimal elliptic fibration over its canonical model X can . It is shown in [SoT1] that the normalized Kähler-Ricci flow ∂ω/∂t = −Ric(ω) − ω converges in the sense of distributions to the pullback of the generalized Kähler-Einstein metric on the canonical model X can .

If kod(X min ) = 0, K X min is numerically trivial. Yau's solution to the Calabi conjecture shows that there always exists a Ricci-flat Kähler metric in any given Kähler class on X min . In particular, it is shown in [C] that the unnormalized Kähler-Ricci flow converges in the sense of distributions to the unique Ricci-flat metric in the initial Kähler class.

If X is Fano, then it is proved in [Pe2] and [TiZhu] that the normalized Kähler-Ricci flow ∂ω/∂t = −Ric(ω) + ω with an appropriate initial Kähler metric will converge in the sense of Gromov-Hausdorff to a Kähler-Ricci soliton after normalization. In general, the behavior of the Kähler-Ricci flow is still not completely understood for surfaces of Kodaira dimension −∞, as the flow might collapse in finite time.

Conjectures

In this section, we discuss our program in higher dimensions. Our proposal gives a new understanding of the Minimal Model Program from the viewpoint of differential geometry. We refer to it as the analytic Minimal Model Program.

Minimal Model Program with Ricci Flow

1. We start with a triple (X, H, ω), where X is a Q-factorial projective variety with log terminal singularities, H is a big and semi-ample Q-divisor on X such that H + ǫK X is ample for sufficiently small ǫ > 0, and ω ∈ K H,p (X) for some p > 1. Let T 0 = sup{t > 0 | H + tK X is nef}. Let ω(t, ·) be the unique solution of the unnormalized weak Kähler-Ricci flow for t ∈ [0, T 0 ).

Conjecture 6.1 For each t ∈ (0, T 0 ), the metric completion of X reg by ω(t, ·) is homeomorphic to X. We also conjecture that the Ricci curvature of ω(t, ·) is bounded from below.
2. If T 0 = ∞, then X is a minimal model and the Kähler-Ricci flow has long time existence. The abundance conjecture predicts that K X is semi-ample and kod(X) ≥ 0.

2.1. kod(X) = dim X, i.e., X is a minimal model of general type. The normalized Kähler-Ricci flow starting with ω converges to the unique Kähler-Einstein metric ω KE on X can in the sense of Gromov-Hausdorff as s → ∞.

We then repeat Step 1 by replacing (X, H, ω) with (X + , H X + , ω X + ), even though X + is not necessarily Q-factorial. Note that a divisorial contraction is also a general flip if we choose π + to be the identity map.

3.2 0 < dim Y < dim X. X then admits a Fano fibration over Y .

3.3 If dim Y = 0, X is Fano and ω ∈ −T 0 [K X ]. Then we have the following generalized Hamilton-Tian conjecture: the normalized Kähler-Ricci flow starting with ω converges to a Kähler-Ricci soliton (X ∞ , ω KR ) in the sense of Gromov-Hausdorff as s → ∞.

Perelman [Pe2] announced a proof of this conjecture for Kähler-Einstein manifolds. A proof is given for Fano manifolds with a Kähler-Ricci soliton by Tian-Zhu [TiZhu]. It is conjectured by Yau [Y2] that the existence of a Kähler-Einstein metric on a Fano manifold is equivalent to suitable stability in the sense of geometric invariant theory. The condition of K-stability was proposed by Tian [T1] and refined by Donaldson [Do]. The Yau-Tian-Donaldson conjecture claims that the existence of Kähler metrics with constant scalar curvature is equivalent to K-stability (possibly with some additional milder conditions on holomorphic vector fields). The Kähler-Ricci flow provides an approach to this conjecture for Kähler-Einstein metrics, and it has therefore attracted considerable current interest. We refer the readers to an incomplete list of references [PS1], [PS2], [TiZhu], [PSSW1], [PSSW2], [Sz] and [To] for some recent developments.
Recent Advances in Tissue-Engineered Cardiac Scaffolds—The Progress and Gap in Mimicking Native Myocardium Mechanical Behaviors

Heart failure is the leading cause of death in the US and worldwide. Despite modern therapy, challenges remain to rescue the damaged organ, which contains cells with a very low proliferation rate after birth. Developments in tissue engineering and regeneration offer new tools to investigate the pathology of cardiac diseases and develop therapeutic strategies for heart failure patients. Tissue-engineered cardiac scaffolds should be designed to provide structural, biochemical, mechanical, and/or electrical properties similar to native myocardium tissues. This review primarily focuses on the mechanical behaviors of cardiac scaffolds and their significance in cardiac research. Specifically, we summarize the recent development of synthetic (including hydrogel) scaffolds that have achieved various types of mechanical behavior—nonlinear elasticity, anisotropy, and viscoelasticity—all of which are characteristic of the myocardium and heart valves. For each type of mechanical behavior, we review the current fabrication methods to enable the biomimetic mechanical behavior, the advantages and limitations of the existing scaffolds, and how the mechanical environment affects biological responses and/or treatment outcomes for cardiac diseases. Lastly, we discuss the remaining challenges in this field and suggestions for future directions to improve our understanding of mechanical control over cardiac function and inspire better regenerative therapies for myocardial restoration.

Introduction

Heart failure (HF) is the leading cause of morbidity and mortality worldwide and in the US despite many breakthroughs in medicine and biotechnology [1][2][3][4]. Approximately 115 million Americans have hypertension, 100 million have obesity, 118 million have prediabetes or diabetes, and 125 million have atherosclerotic disease, all of which are well-known risk factors for the development of HF. Myocardial infarction (MI), often known as heart attack, is an acute coronary syndrome that results in the formation of non-contracting fibrotic scar tissue and the malfunction or death of cardiomyocytes. The injury is essentially irreversible because of the low regenerative potential of mammalian hearts [1,2]. According to the most recent data from the National Health and Nutrition Examination Survey, an American has an MI approximately every 40 s [3]. Moreover, hypertension, heart valve dysfunction, arrhythmia, and congenital heart diseases are other key contributors to HF. To date, neither pharmaceutical administration nor heart transplantation has been able to sufficiently restore the function of a failing heart. Thus,

Hydrogel Scaffolds

In the 1980s, hydrogel materials were pioneered as an advanced culture scaffold for fibroblasts and skeletal muscle cells, later resulting in the first myocardial muscle model system with a collagen matrix by Eschenhagen et al. in 1997 [24]. In addition, fibrin, collagen, laminin, Matrigel, and combinations of various ECM proteins have been used to develop various hydrogel systems for the functional enhancement of engineered tissues, with or without using casting molds and anchoring molecules [8,25]. A key advantage of this method is that the naturally existing ECM components promote cell growth and the development of cell-cell and cell-matrix connections [22,26].
Hydrogels have been widely applied in tissue engineering and regenerative medicine [26][27][28], drug delivery [29][30][31], soft electronics [32,33], and biosensors and actuators [34][35][36]. In general, hydrogels are elastic scaffolds with substantially lower stiffnesses than the native myocardium or heart valves [37]. To overcome the mechanical weakness, composite scaffolds have been developed by blending hydrogels and synthetic biomaterials to develop materials that more closely mimic the mechanical properties of cardiac tissues [38][39][40][41][42].

Electrospun Nanofibrous Scaffolds

Beginning with the Formhals patent, electrospinning has an almost 90-year history and numerous applications in modern industry [43]. Studies on polymer fibers in the 1990s led to the re-recognition of electrospinning and new applications in tissue engineering and drug delivery, mainly due to technological advancements allowing the resolution and moderation of nanometer-scale features [44,45]. Electrospinning is one of the most practical and versatile methods for fabricating micro/nanofibrous polymeric structures with precise control over matrix architectural features, such as fiber size, orientation, crosslinks, and fusion, and the resulting properties, including mechanical and electrical conduction behaviors [46][47][48]. Electrospinning is a widely used mode of nanofiber production because it can be employed to generate nanofibers from a wide variety of both synthetic and biologically derived polymers, polymer blends, and composites [49]. A polymer solution is ejected through a syringe at a specific flow rate onto a metal collector at a desired distance from the needle tip. A voltage is applied between the needle tip and the collector to supply an electric field to draw the polymer fibers [37,50]. The fibrous architecture and properties can be altered by a variety of parameters in the polymer solution (e.g., molecular weight, concentration, mixture of polymers); in the operation of the apparatus (voltage, distance from needle tip to collector plane, injection flow rate, and duration); and in the setup of the collector or other processing conditions (e.g., humidity) [37,51,52].

Three-Dimensional Bioprinted Scaffolds

Three-dimensional printing is the fabrication of three-dimensional objects from digital models by the layer-by-layer deposition of materials onto a surface. It has emerged as a technique for developing 3D scaffolds for tissues or organs with a programmable structure and precise control over the micro/nanostructure and the distribution of tissue components. The capability of 3D printing in micro- and nanoscale fabrications for cardiac tissue engineering was discussed in detail by Kankala et al. [53]. The mixture of cells can be achieved either through a cell seeding procedure followed by the printing of complex scaffolds or through the simultaneous delivery of biomaterials and cells to construct 3D cell-laden scaffolds [54,55]. There are three primary ways to achieve 3D bioprinting: inkjet bioprinting, laser-assisted bioprinting, and extrusion bioprinting. The advantages and disadvantages of these methods were reviewed by Xie et al. [56].
Anisotropic Tissue-Engineered Scaffolds

Most biological tissues exhibit some degree of anisotropy in their mechanical characteristics. That is, the tissue's mechanical behavior is different in different directions. This feature results in direction-dependent cellular activities such as cytoskeleton rearrangement and alignment, integrin activation, and ECM deposition. In terms of bulk mechanical behavior, tissue anisotropy varies from almost zero (isotropy) in tissues such as the liver to a high degree of anisotropy in tissues such as ligaments and tendons. Cardiac tissues, including the myocardium and heart valves, are anisotropic as well. The ventricular wall is a multi-layer tissue with complex microstructures in which cardiac muscle fibers are interconnected hierarchically within collagen fibers. The variation of the main fiber angle across the ventricular wall is responsible for the longitudinal and circumferential motion of cardiac torsion (Figure 1a) [12,57]. These characteristics result in mechanical and electrical features that are directionally dependent—a phenomenon known as cardiac anisotropy. The transmural variation in the myofiber/collagen has been confirmed by the examination of serial histology sections from the rodent and ovine myocardium [58,59]. With disease progression (such as hypertension), the fiber alignment is further altered, and the tissue becomes more anisotropic [58]. The fiber organization is essential for the organ's mechanical and electrical functions, and an alteration may lead to organ dysfunction and eventually HF. Moreover, the structure of heart valves is complex, yet well-organized, with three distinct layers (ventricularis, spongiosa, and fibrosa) that each serve a specific function (Figure 1b). The ventricularis layer, located on the ventricle side, is mostly composed of radially aligned elastin fibers. In the spongiosa—the middle layer of the native valve ECM—randomly aligned proteoglycans are present. The fibrosa layer is dominated by dense collagen fibers with circumferentially oriented structures. As a result, the valve tissues exhibit anisotropic mechanical, biochemical, and biophysical functions [12,57]. Tissue-engineered scaffolds for cardiac regeneration or studies of the biomechanical mechanism of HF must employ a similar microstructural organization. The fabrication methods to produce anisotropic scaffolds for wide applications in tissue engineering have been recently reviewed [60][61][62]. In this paper, we mainly focus on the myocardial applications.

Figure 1. Schematics of (a) myocardium showing a gradual transition of aligned cell layers from endocardium to epicardium [38] and (b) trilaminar leaflet structure of semilunar valves, illustrating the fibrosa, spongiosa, and ventricularis layers, as well as their principal constituents [63].

Methodology to Induce Anisotropy in Scaffolds

Mechanical anisotropy in a scaffold can be imparted by fiber alignment and organization. To date, the methods to generate aligned, anisotropic scaffolds can be classified into the following categories: electrospinning with a rotating collector, gap electrospinning, and 3D bioprinting. Brief descriptions of the main strategies and examples of each category are provided below.

Electrospinning Using a Rotating Collector

Electrospinning utilizing a rotating collector permits the modulation of fiber alignment through alterations in the geometry and/or rotational speed of the collector. A rotating cylinder mandrel is the most commonly used method (Figure 2A), although it does not provide the highest degree of alignment compared to other methods (Figure 2B-D). In this method, the linear speed at the surface of the rotating drum (i.e., rotating velocity) should match the solvent evaporation rate. The kinematics of the mandrel are determined by the category of processing parameters, which further influence the arrangement of nanofibers (alignment, fiber size, etc.) on the collecting surface [64,65].

Achieving fiber alignment requires the careful selection of the processing conditions when using a rotating cylinder mandrel. First, the induction of fiber alignment occurs within a narrow range of the rotational speed (e.g., between 3.0 and 10.9 m/s) [65]. When the rotating speed is lower than the take-up speed of the fiber, randomly oriented fibers are formed on the drum. When the rotating speed is too high, the depositing fiber jet breaks, and this prevents continuous fibers from being collected [66]. Secondly, within this range, an increasing rotational speed results in more aligned nanofibers. The fiber alignment typically presents a normal distribution of the fiber angles, and the degree of anisotropy is determined by the histogram profile of the fiber angles on the sheet [58,67]. This feature can be viewed as an advantage because the myofibers/collagen fibers from the histological measurement of native myocardium exhibit the same pattern (Figure 3) [68].
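To make the speed window concrete, here is a minimal sketch converting the target surface (linear) speed of a cylinder mandrel into a rotation rate; the 3.0-10.9 m/s window is the range quoted above [65], while the mandrel diameter is an assumed example value, not a parameter from the cited studies:

```python
import math

def mandrel_rpm(surface_speed_m_s: float, diameter_m: float) -> float:
    """Rotation rate (rev/min) giving the requested surface speed, v = pi * d * (rpm / 60)."""
    return 60.0 * surface_speed_m_s / (math.pi * diameter_m)

DIAMETER_M = 0.10  # assumed 10 cm mandrel, for illustration only

# Alignment window reported for fiber alignment (linear surface speed) [65]:
for v in (3.0, 10.9):
    print(f"{v:>5.1f} m/s -> {mandrel_rpm(v, DIAMETER_M):7.0f} rpm")
# Below the fiber take-up speed: random deposition; too fast: the depositing jet breaks.
```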
To further enhance the fiber alignment, some researchers have utilized a rotating disc (Figure 2B). In this setup, the thin edge of the collector concentrates the electric field, permitting the deposition of highly aligned fibers thereon. The charged jet is restricted within the edge because the electrostatic field between the sharp edge point (+) and needle (−) becomes the strongest in this location.
However, highly aligned fibers can only be formed in a relatively small region, and this severely limits the size of the scaffold that can be fabricated [69][70][71]. Like in the cylinder mandrel setup, one should note that the rotating speed not only affects the nanofiber alignment but also the fiber diameter and porosity and, ultimately, the bulk mechanical properties.

Additional modifications to the collector enable the replication of the 3D geometry of the tissue. For example, a 3D tube construct can be formed for vascular graft applications using a small-diameter rotating rod (<5 mm) (Figure 2C) [72,73]. This makes it possible to employ distinct polymers for different layers without the need for further assembly, which replicates native vessel characteristics [73]. Using a conical mandrel (Figure 2D) allows the fabrication of scaffolds with curvilinear microarchitectures that mimic heart valves [74].

Gap Electrospinning

Gap electrospinning induces aligned nanofibers by an applied electrical field. By applying a positive voltage to the polymer solution and a negative voltage to two neighboring plates separated by a gap, the fibers are deposited and stretched from one plate to the other due to the residual electrostatic repulsion between the plates (Figure 4A). Numerous alterations have been made to the basic setup to achieve variations in the microarchitecture, and these were reviewed in depth by Robinson et al. [62]. However, the maximum length of nanofiber sheets has been limited to 10 cm, because large distances inhibit the jet crossing from one side to the other [75][76][77]. To overcome this limitation, Lei et al. recently applied a negative voltage to a U-shape collector and successfully produced long aligned fibers (up to 60 cm) (Figure 4B) [78,79].

Gap electrospinning offers significant benefits in producing controllable, aligned electrospun fibers. It is cost-effective since, in most configurations, no extra equipment is required beyond a typical electrospinning device. In addition, the fiber orientation and gradient of alignment can also be adjusted.
However, there are a few drawbacks to the approach. The technology is restricted by the mesh thickness, as the residual charge increases with the mesh thickness. The rise in residual charge causes electrical repulsion and, consequently, a loss of fiber alignment [73]. Finally, since the highly aligned scaffold generally possesses low mechanical strength in the cross-fiber direction, the handling of the thin scaffold is challenging during the removal of the scaffold from the mandrel.

Figure 4. Gap electrospinning with (A) two parallel plate collectors and (B) U-shape electrode collector.

Three-Dimensional Printing

Three-dimensional printing can also be used to induce fiber alignment in anisotropic scaffolds. There are two strategies to deposit aligned fibers: (i) direct depositing into a customized pattern to achieve the complex alignment of micro/nanofibers (Figure 5A) [80,81]; and (ii) the shear-induced alignment of threadlike nanofibers or the elongated deformation of injected components along the printing direction (Figure 5B) [82,83]. Cu et al. [84] printed a variety of designs featuring different fiber widths (100, 200, 400 µm); filling densities (20, 40, 60%); fiber angles (30°, 45°, 60°); and stacking layers (2, 4, 8 layers) to create anisotropic scaffolds compatible with cardiomyocytes, as enumerated in the sketch below. They claimed that the scaffolds accurately represented the transmural fiber alignment and curvature of murine left ventricles.
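To make the size of that design space concrete, here is a minimal sketch that enumerates the reported combinations; the parameter values are taken from the text, while the variable and field names are our own illustrative labels, not identifiers from the original study:

```python
from itertools import product

# Design parameters reported for the printed scaffolds (values from the text):
fiber_widths_um = [100, 200, 400]      # printed fiber width
filling_densities_pct = [20, 40, 60]   # infill density
fiber_angles_deg = [30, 45, 60]        # inter-layer fiber angle
stacking_layers = [2, 4, 8]            # number of stacked layers

designs = [
    {"width_um": w, "density_pct": d, "angle_deg": a, "layers": n}
    for w, d, a, n in product(
        fiber_widths_um, filling_densities_pct, fiber_angles_deg, stacking_layers
    )
]

print(len(designs))   # 3 * 3 * 3 * 3 = 81 possible combinations
print(designs[0])     # {'width_um': 100, 'density_pct': 20, 'angle_deg': 30, 'layers': 2}
```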
The advantages of this method include the simultaneous control over the micro-geometry and macro-architecture (such as fiber alignment), the feasibility of achieving a high resolution (~5-50 µm) in the fiber organization, and the proper cell density within the scaffolds [85]. It is important to note that hydrogels are often used and deposited as bioinks to enhance the bioactivity of the scaffold; recent 3D bioprinted cardiac scaffolds were reviewed by Wang et al. (see Table 1 in [86]).

Advantages and Limitations of Current Anisotropic Scaffolds

The incorporation of anisotropy in tissue-engineered scaffolds not only replicates the structural features of native cardiac tissues but also allows for mechanistic studies that can improve our understanding of heart diseases. One important consideration in replicating tissue anisotropy is the fiber angle distribution. As described above, the ventricular wall exhibits a normal distribution of myofiber angles in the tissue sections, and this feature can be achieved by electrospinning with a cylindrical rotating mandrel [12]. In contrast, other approaches, including 3D bioprinting, generate uniformly aligned or grid structures of fibers that are absent in native tissues. The exact cause and consequences of the normal distribution of myofibers in a single section are not yet fully understood, but a biomimetic cardiac scaffold should consider this feature during scaffold fabrication. Moreover, multiple layers of sheets with varied main fiber angles can be produced either by electrospinning or by 3D bioprinting methods, replicating the myocardium or heart valves with layered, anisotropic characteristics. However, in native tissues, there is also a functional integration of aligned constituents across layers. The current engineering techniques have not been able to provide such in vivo bonding features between aligned layers [87].

A successful biomimetic scaffold should exhibit not only a similar elasticity but also a similar degree of anisotropy to the native tissue. We summarize the reported anisotropy of native cardiac tissues and biomimetic scaffolds in Tables 1 and 2, respectively. Because of the nonlinear elastic behavior of native tissues, we mainly adopted the tissue elastic modulus measured at low strains, which replicates the stiffness of myofibers (in the myocardium) or non-collagen components (in valves). The healthy adult myocardium has an anisotropy degree of 0.3-0.9 in the RV and 0.5-1.9 in the LV, and the fetal myocardium exhibits a higher degree of anisotropy on both sides of the ventricles (Table 1). In addition, the tissue anisotropy is enhanced or even changed (from stiffer in one direction to stiffer in the other direction) with disease progression (e.g., an anisotropy degree of 3.2-5.4 in failing RVs, Table 1). In contrast, tissue-engineered scaffolds have a wide range of anisotropy degrees, ranging from ~2 in a PEUU scaffold to ~46 in a PCL scaffold (Table 2). Except for one study, all the scaffolds presented a high degree of anisotropy (>3) that is absent in the healthy myocardium or heart valves. Thus, there is a lack of consensus on the degree of anisotropy for myocardium tissue constructs, for either healthy or diseased conditions.
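As a small illustration of the anisotropy metric used in Tables 1 and 2 (degree of anisotropy = ratio of longitudinal to circumferential Young's modulus, per the Table 1 caption), here is a sketch; the numerical values below are hypothetical placeholders, not data from the tables:

```python
def anisotropy_degree(E_longitudinal_kpa: float, E_circumferential_kpa: float) -> float:
    """Degree of anisotropy as the ratio of Young's moduli (longitudinal / circumferential)."""
    if E_circumferential_kpa <= 0:
        raise ValueError("Circumferential modulus must be positive.")
    return E_longitudinal_kpa / E_circumferential_kpa

# Hypothetical low-strain moduli (kPa), for illustration only:
print(anisotropy_degree(30.0, 20.0))   # 1.5 -> within the healthy LV range (0.5-1.9)
print(anisotropy_degree(180.0, 40.0))  # 4.5 -> within the failing-RV range (3.2-5.4)
```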
Another limitation of anisotropic scaffolds is that their Young's moduli (presented in MPa) are greater than the native myocardium's Young's moduli (presented in kPa). Lastly, inducing fiber alignment while keeping other parameters identical increases the bulk stiffness of the scaffold, and thus anisotropic scaffolds are often stiffer than isotropic ones. Therefore, it is important to keep both the elasticity and anisotropy compatible with those of the host tissues.

Table 1. Anisotropic mechanical properties of native cardiac tissues reported in the literature. The degree of anisotropy was calculated as the ratio of the Young's modulus (E) or peak stress between the longitudinal (L; main fiber/outflow tract) and circumferential (C; cross-fiber/perpendicular to outflow tract) directions. To distinguish the different orientation systems, the orientation system with the outflow tract and its perpendicular direction is labeled with L* and C*, respectively. Unless stated elsewhere, all data were obtained from healthy animals. LV: left ventricle; RV: right ventricle. (Columns: Tissue | Animal Model | Young's Modulus (kPa) | Anisotropy Degree | Ref.)

Organ-Level Impact of Substrate Anisotropy

The benefit of using or implanting an anisotropic scaffold for whole organ function has been reported previously. Mathematical modeling and in vivo studies have shown that anisotropic scaffolds, compared to isotropic ones, enhanced the functionality of a diseased heart by improving depressed LV pump function and increasing systolic function without compromising the filling (diastolic function) [103,104]. Through mathematical modeling, Sallin et al. [105] further demonstrated the significance of myocardial fiber arrangement in the ventricular wall in promoting effective cardiac pumping. When the heart is modeled as an ellipsoid with myocardial fibers oriented in the circumferential (diseased) vs. longitudinal (normal) direction with a helical fiber organization, the ejection fractions are markedly different (30% vs. 60%) and represent those of failing and normal hearts, respectively. Chang et al. fabricated a 3D dual-ventricle bioscaffold with three layers, each with distinct helical arrangements. They showed that the cardiomyocytes (CMs) exhibited appropriate alignments in this scaffold, and the entire construct achieved spatiotemporal control of excitation-contraction coupling. Additionally, their observation of an increased ejection fraction in the longitudinally aligned scaffold agreed with the results predicted from Sallin's model. In this investigation, however, the mechanical behavior of the scaffolds did not match that of the native myocardium. The collagen fibers in the natural myocardium coil tightly at small strains and uncoil to become stiffer at high strains. In contrast, this 3D scaffold did not reproduce the nanoscale structure of collagen fibers, resulting in straight, bundled fibers that were linearly elastic throughout the strain range [106,107]. We will discuss this limitation in Section 4.

Cell-Level Impact of Substrate Anisotropy

Anisotropic structures of native tissues, resulting from the aligned arrangement of ECM components or cells, play an essential role in carrying out and maximizing their direction-dependent physiological functions. Studies probing the cellular responses to an anisotropic mechanical environment have been conducted by comparing the outcomes obtained from isotropic and anisotropic scaffolds.
The first response of cells to aligned substrates is to change their shape and orientation. Cardiomyocytes cultured on (isotropic) plastic are oriented randomly. As a result, their contractile force is distributed in all directions. However, when cultured in anisotropic scaffolds, the CMs will adopt the fiber alignment and be properly positioned on the scaffold [108]. The elongated cell alignment in turn influences the contractile force as well as cell-cell and cell-matrix interactions. Aligned CMs are also more mature and exhibit a more physiological behavior than randomly distributed cells. For instance, Wanjare et al. [109] co-seeded human iPSC-derived cardiomyocytes (iCMs) and endothelial cells (iECs) onto electrospun polycaprolactone scaffolds with either a randomly oriented or parallel-aligned microfiber configuration. They showed that, in contrast to randomly oriented scaffolds, the aligned scaffolds led to iCM alignment along the microfiber direction and promoted iCM maturation by increasing the sarcomeric length and the gene expression of the myosin heavy chain adult isoform (MYH7). The maximal contraction velocity of iCMs on aligned scaffolds was significantly greater (3.8 m/s) than that on randomly oriented scaffolds (2.4 m/s). These outcomes demonstrate that anisotropic scaffolds promote CM maturation and contractility.

Other groups have examined the effect of matrix anisotropy on stem or progenitor cell function to elucidate cell mechanobiology and its regenerative potential for the heart. For instance, the role of matrix anisotropy in mesenchymal stromal cell (MSC) behavior and paracrine functions has been investigated. Matrix anisotropy has been shown to play a role in MSC morphology, differentiation fate, and other paracrine functions [110][111][112][113][114][115][116]. Recently, Nguyen-Truong et al. [97] examined the effect of RV tissue mechanics on the pro-angiogenic paracrine function of MSCs, concentrating on the combined effect of RV-like tissue stiffness and anisotropy. Using random and aligned PEUU electrospun scaffolds with the stiffness of normal RVs, they found that the MSCs cultured on the anisotropic group consistently exhibited a higher pro-angiogenic function than those cultured on the isotropic group, showing a positive influence of anisotropy on MSC paracrine function. However, this impact of anisotropy was lacking in the stiff scaffold groups resembling diseased RVs. These results highlight the importance of the synergistic effect of matrix stiffness and anisotropy in the regulation of MSC function, which may inform the mechanical conditions of MSC-based treatments for heart failure. Similarly, Allen et al. [117] investigated mouse embryonic stem cell differentiation toward CMs as regulated by substrate anisotropy. They showed that the cell alignment exhibited a gradient-based response (nonaligned, semi-aligned, and highly aligned) to substrate anisotropy and that an aligned substrate accelerated CM maturation to generate synchronous beating.

Nonlinear Elastic Tissue-Engineered Scaffolds

Like many biological tissues, cardiovascular tissues exhibit J-shaped stress-strain behavior. This feature is known as nonlinear elastic behavior. For instance, the right ventricle's passive stiffness increases nonlinearly with increased strain/load because of the recruitment of collagen fibers [91].
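A common way to capture such J-shaped behavior is a Fung-type exponential stress-strain law; the sketch below is purely illustrative (the exponential form is a standard soft-tissue model, and the coefficients are made-up values, not fits to cardiac data):

```python
import math

def fung_stress_kpa(strain: float, a_kpa: float = 2.0, b: float = 12.0) -> float:
    """Fung-type exponential law: stress = a * (exp(b * strain) - 1); J-shaped by construction."""
    return a_kpa * (math.exp(b * strain) - 1.0)

def tangent_modulus_kpa(strain: float, a_kpa: float = 2.0, b: float = 12.0) -> float:
    """Tangent modulus d(stress)/d(strain) = a * b * exp(b * strain)."""
    return a_kpa * b * math.exp(b * strain)

# The tangent modulus rises with strain, mimicking progressive collagen recruitment:
for eps in (0.05, 0.15, 0.30):
    print(f"strain {eps:.2f}: stress {fung_stress_kpa(eps):8.1f} kPa, "
          f"tangent modulus {tangent_modulus_kpa(eps):8.1f} kPa")
```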
Disease progression typically leads to CM hypertrophy and the accumulation of collagen, resulting in a leftward shift of the stress-strain curve and elevated elastic moduli in both the low- and high-strain regions [92,118]. However, this feature is absent in most biomaterials, which exhibit linear elasticity. To overcome this limitation, researchers have used a variety of approaches to tune the mechanical properties of materials.

Methodology to Induce Nonlinear Elastic Behavior in Scaffolds

Inspired by biological tissues, the fabrication of crimped, extendable fibers is the main strategy to impart nonlinear elasticity to a biomaterial. One way to induce crimped fibers is by permanently lengthening the sheet along the main-fiber direction first and then returning the sheet to its original, pre-stretch length. Meng et al. applied this method to electrospun scaffolds made with polycaprolactone (PCL), poly(lactic acid) (PLA), and poly(L-lactide-co-caprolactone) (PLCL), and they found that the mixture of the three was effective in the formation of crimped structures [119]. In the aligned PLCL scaffold, the fibrous sheet was stretched repeatedly, resulting in permanent elongation. Then, the entire sheet was repositioned into its pre-stretch shape, treated with a heated ethanol spray, and cooled down quickly to produce wavy nanofibers. This crimped fibrous structure was confirmed by SEM imaging, and the nonlinear elastic behavior was measured by uniaxial tensile mechanical tests. Interestingly, the same methodology failed to generate the crimped fiber structure in the randomly aligned PLCL scaffolds, and thus the nonlinear elastic behavior was absent in these scaffolds. However, using similar methods, Niu et al. electrospun tubular PLCL scaffolds with randomly aligned, axially aligned, and circumferentially aligned structures [120]. They reported nonlinear elastic behavior in all scaffolds. The nonlinearity of these scaffolds was compared and found to be similar to that of native blood vessels (porcine aorta ventralis). Another way to produce crimped fibers is by controlled heating and/or chemical treatment, as briefly reviewed by Szczesny et al. [121] and Zhang et al. [122]. However, these methodologies often generate scaffolds with low porosity, which results in limited crimped fibers and poor cell infiltration. To improve these aspects, Szczesny et al. electrospun a dual poly-L-lactide (PLLA)/poly(ethylene oxide) (PEO) solution and heated the sheet between two glass slides, either before or after washing the scaffolds to dissolve the PEO fibers, with or without poly(vinyl alcohol) (PVA) treatment to increase fiber bonding [121]. They found that only the wash-and-then-heat group exhibited nonlinear stress-strain behavior, whereas the PVA-treated scaffolds failed to present nonlinear elastic behavior. In addition, increased porosity has been found to promote the formation of crimped fibers. In the same study, the authors showed a potential link between porosity and the fiber crimping of the scaffold. Recently, Zhang et al. prepared nanofibrous PLCL/PEO scaffolds and found that the fiber crimping and nonlinear elastic behavior increased with an increase in mesh porosity [122]. This report was consistent with the previous finding of Szczesny et al. Finally, certain materials may exhibit nonlinear behavior and can be used to fabricate scaffolds.
For example, poly(glycerol dodecanedioate) (PGD) is a shape-memory, biodegradable elastomer that is linearly elastic at room temperature but exhibits nonlinear elasticity at body temperature. Ramaraju et al. showed that the incorporation of small intestinal submucosa (SIS) into PGD sheets induced nonlinearity in the scaffolds. The mechanical properties of PGD can be tuned with native SIS by altering the thermal curing conditions used. The nonlinear elastic behavior is thought to arise from the void spaces formed during the incorporation of the SIS sheets into the PGD, but increasing the void spaces also decreases the stiffness of the scaffolds [123].

Role of Substrate Nonlinear Elasticity in Cell Behavior

The nonlinear elasticity of matrices changes cell-matrix interactions by regulating cell adhesion, spreading, and signal transduction. Prior studies have shown that cells grown on fibrous ECM with mechanical nonlinearity can sense mechanical signals over far greater distances than cells grown on synthetic linear elastic polymeric materials [119,124,125]. Meng et al. showed that, compared with human umbilical vein endothelial cells (HUVECs) cultured on linear elastic scaffolds, the HUVECs cultured on nonlinear (aligned and crimped) PLCL scaffolds had a greater density of focal adhesions and a higher expression of focal adhesion proteins. This indicated a stronger cell-matrix interaction, which more effectively transduced mechanical signals. These cells also had an increased spreading area, thereby promoting the formation of an endothelial layer on the vascular scaffold. The cell proliferation rate on the nonlinear elastic scaffold was lower than that on the linear elastic scaffold, but this was attributed to the lower Young's modulus of the nonlinear elastic scaffold [119]. In a separate study, Zhang et al. showed that a nonlinear elastic scaffold promoted HUVEC adhesion and proliferation despite the reduced stiffness of the scaffold. These cellular responses were attributed to the rough surface, increased porosity, and increased hydrophilicity of the nonlinear elastic scaffold rather than to mechanical factors [122]. Liu et al. showed that the nonlinearity of the ECM regulated the organization of human adipose-derived stem cells (hASCs) by preparing six gels with different concentrations and critical stresses. Finally, Niu et al. cultured HUVECs on nonlinear elastic tubular scaffolds with three different fiber orientations (random, circumferential, and longitudinal alignment) [120]. They did not include linear elastic scaffolds as a control, and thus it remains unknown whether cell proliferation is altered by nonlinear elastic properties. Overall, crimped fibrous scaffolds promote cell spreading and adhesion, but their effect on cell proliferation remains unclear. Moreover, the mechanisms for the altered cell responses are mostly attributed to the matrix topography (rough surface or porous structure) or surface chemistry (hydrophilicity) of the crimped fibrous scaffolds. Whether the mechanical behavior (nonlinear elasticity) is merely a side product of the crimped fibers or directly affects the mechanotransduction of the cells is unknown. The exact role of the nonlinear elastic behavior of the substrate in the mechanical signaling pathways of cells should be investigated in future work.

Limitations of Current Nonlinear Elastic Scaffolds

Above, we summarized the current methods for nonlinear elastic scaffold fabrication and some known cellular responses to crimped fibrous scaffolds.
While it is encouraging to see the advancement of this biomimetic mechanical property in tissue-engineered scaffolds, it should be noted that the previously mentioned studies focused on applications in soft tissues such as tendons [121], ligaments [126], and blood vessels [127,128]. First, the fabrication of biomimetic scaffolds exhibiting cardiac nonlinearity remains a knowledge gap. Second, the methods for forming crimped fibers need to be improved, as both success and failure to exhibit nonlinear elasticity have been reported in randomly oriented fibrous scaffolds. For example, Meng et al. showed that crimped microstructure formation was only observed in aligned scaffolds (PLCL, PLA, and PCL) and was absent in random scaffolds [119]. However, randomly aligned tubular PLCL scaffolds fabricated by Niu et al. using a similar technique did present nonlinear elastic behavior [120]. Therefore, other factors, perhaps related to fiber orientation and bonding, may play a role in the formation of crimped fibers and should be investigated. Third, the mechanical mechanism of the 'nonlinear elastic response' of cells is still not fully understood, and most researchers have attributed the altered cell behavior to the morphological or chemical properties of the crimped fibrous microstructure of the scaffold. Moreover, prior in vitro studies have been performed in a static environment wherein the 'crimped' fibers may never be loaded enough to straighten. Future investigations of cell responses under dynamic loading conditions (e.g., from small strain to large strain) will provide a better understanding of the mechanical mechanism. This may be particularly critical for cardiovascular research, as these tissues are under constant dynamic loads, unlike many non-cardiovascular tissues.

Viscoelastic Tissue-Engineered Scaffolds

Another less investigated mechanical behavior of scaffolds is viscoelasticity. A viscoelastic material exhibits mechanical behavior that is time-dependent and strain-history-dependent. Viscoelasticity is universally present in biological tissues. Heart valves are viscoelastic [129], and recent evidence has shown that the ventricular free wall exhibits viscoelastic characteristics as well [90]. Using either uniaxial or biaxial tensile tests, hysteresis loops and/or stress relaxation curves are commonly observed in ventricular tissues [90,127,130]. Cardiac tissue viscoelasticity can be attributed to the complex composition of the tissue, which includes cardiac cells (e.g., cardiomyocytes), ECM molecules (e.g., GAGs and collagen), extracellular fluids, and the interactions between these components. In contrast to cancer research, where the importance of viscoelasticity is increasingly recognized [131], the viscoelastic properties of myocardial tissues and tissue-engineered scaffolds are seldom investigated in cardiac research. Thus, in this section, we extend our review beyond the cardiac field and discuss the methods to induce viscoelasticity in hydrogels and/or synthetic scaffolds and some known impacts of substrate viscoelasticity on cellular behavior, in the context of general biological applications. A review of techniques to characterize native or engineered tissue viscoelasticity is available in [128].

Methodology to Induce Viscoelastic Behavior in Scaffolds

Hydrogels are the most commonly used biomaterials for constructing viscoelastic substrates.
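Because the following subsections repeatedly use stress relaxation, storage and loss moduli, and the phase angle, a short notation block may help; this is the standard linear-viscoelasticity formalism, given for orientation rather than taken from any of the cited studies:

```latex
% Stress relaxation after a step strain (standard linear solid form):
\[
  G(t) = G_{\infty} + \left(G_{0} - G_{\infty}\right)e^{-t/\tau}
\]
% G_0: instantaneous modulus; G_infinity: equilibrium modulus;
% tau: relaxation time. A purely elastic material keeps G(t) = G_0.
% Under oscillation at frequency omega, the complex modulus splits into
% storage (elastic) and loss (viscous) parts:
\[
  G^{*}(\omega) = G'(\omega) + i\,G''(\omega), \qquad
  \tan\delta = \frac{G''}{G'}
\]
% The phase angle delta quantifies how viscous the response is
% (delta = 0 for a perfect elastic solid).
```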
Hydrogels can be classified based on the source of the polymers: natural ECM biopolymers (e.g., collagen or fibrin hydrogels); synthetic hydrogels (e.g., polyethylene glycol (PEG) or polyacrylamide (PAM) hydrogels); and naturally derived macromolecular hydrogels (e.g., alginate or chitosan hydrogels). Currently, the main approaches used to modulate the viscoelasticity of hydrogels include: (1) crosslinking polymers; (2) altering the polymer architecture, such as length and branching; (3) tuning the composition; and (4) altering the concentration of the polymer or polymer mixture [132]. Crosslinks in polymeric hydrogels can be physical (e.g., ionic) or chemical (e.g., covalent) and can be static or dynamic. Vining et al. generated various alginate-collagen hydrogels via combined ionic and covalent crosslinking at different densities to tune the matrix viscoelasticity. Across a narrow range of moduli (0.25 kPa, 0.5 kPa, and 2.5 kPa), the equilibrium stress relaxation of the scaffolds was similar to that of the native ECM [133,134]. This parameter was increased significantly (>3000 s) by the addition of covalent crosslinks, which indicated a weakening of the viscoelastic behavior of the scaffold. Because ionic crosslinks are weaker than covalent crosslinks and more readily allow frictional energy loss during deformation, stress relaxation is more pronounced in ionically crosslinked hydrogels. Besides physically crosslinked hydrogels, chemically crosslinked hydrogels with dynamic covalent bonds, such as hydrazone, oxime, and thioester linkages, form covalent adaptable networks that possess viscoelasticity. Morgan et al. tuned the mechanical properties of oxidized alginate hydrogels by mixing in different ratios of dihydrazide (to form hydrazone) and bishydroxylamine (to form oxime) to alter the dynamic covalent crosslinks [135]. In general, the more oxime crosslinks, the stiffer the gel (larger storage modulus). A similar trend was found in the viscosity (loss modulus or relaxation time) of the gels. The viscoelasticity can also be tuned by changing the composition of the crosslinks. Richardson et al. synthesized a range of hydrazone-crosslinked polyethylene glycol hydrogels [136]. By adjusting the ratio of alkyl-hydrazone and benzyl-hydrazone crosslinks, the average stress relaxation time of the hydrogels varied from hours (e.g., 4.01 × 10^3 s) to months (e.g., 2.78 × 10^6 s). Pauly et al. prepared agarose hydrogels containing proteoglycan-mimetic graft copolymers with various polysaccharide side chains (dextran, dextran sulfate, heparin, chondroitin sulfate, and hyaluronan) [137]. Agarose gels have a strain-rate-dependent compressive modulus. When either the highly charged polysaccharide heparin or the neutral polysaccharide dextran is added to the gel, the modulus of the hydrogel is unmodified or reduced; however, when the heparin or dextran additive is included in the form of a proteoglycan-mimetic graft copolymer, the modulus is increased. The gels also exhibit stress relaxation behaviors with multiple relaxation time constants that can be modulated by the structure and composition of the proteoglycan-mimetic additives. While hydrogels are the main type of biomaterial used for viscoelastic studies in the literature, a limited number of studies have investigated the viscoelastic properties of synthetic scaffolds. For instance, the viscoelasticity of PCL scaffolds can be tuned by blending natural or synthetic components at different ratios. Kim et al.
attempted to tune the viscoelasticity of PCL scaffolds by adding different concentrations of alginate. They showed that the fluidic viscosity of the scaffold increased with an increasing alginate weight fraction in the composites. The storage modulus (G′) of the blended scaffolds was higher than that of pure PCL scaffolds, and it increased with an increasing alginate concentration (0.1 Pa to 40 Pa at 0-30 wt% alginate) [42]. Moreover, Peter et al. reported the preparation of a wide range of viscoelastic polydimethylsiloxane (PDMS) scaffolds, in which the viscoelasticity was tuned by changing the base:crosslinker ratio of Sylgard 184 and the ratio of Sylgard 184 to Sylgard 527 [40]. Increasing the ratio of Sylgard 184 to Sylgard 527 caused decreases in the storage modulus (G′) and loss modulus (G″) of the scaffolds. The use of synthetic biomaterials can overcome the limitations of most natural-material-based hydrogels, whose achievable viscoelasticity range is relatively small and sub-physiological (i.e., lower elasticity and viscosity than native tissues). Shamsabadi et al. used the microsphere sintering technique to fabricate scaffolds for bone tissue engineering using PCL and bioactive glass (BG) 58S5Z (58S modified with 5 wt% zinc) [41]. The viscoelastic behavior of the 0% BG (PCL-only scaffold) and 5% BG samples was determined by performing compressive stress relaxation tests. The storage modulus of both samples increased with frequency. The loss modulus of the 5% BG sample was higher only at frequencies <0.4 Hz. The smaller loss modulus of the 5% BG sample at higher loading rates indicated its lower viscosity, and because of this, its storage modulus remained nearly constant in this range. Mondesert et al. fabricated fibrous scaffolds with repetitive honeycomb patterns. The relaxation of the scaffolds was tested in directions D1 and D2 at a 15% strain [138]. The scaffolds exhibited only slight relaxation in both directions, showing that the viscosity of the material did not drastically influence the mechanical behavior. Hence, the viscous behavior of these scaffolds was neglected when analyzing their mechanical properties.

Role of Substrate Viscoelasticity in Cell Behavior

Recent pioneering work has revealed new findings on the impact of substrate dynamic mechanical behavior (viscoelasticity) on various cellular behaviors, including cell morphology and spreading, migration, proliferation, differentiation, and ECM deposition.

Cell Spreading and Migration

Cell spreading is closely related to cell-matrix interactions, which affect the distribution of cell traction forces and mechanotransduction pathways and maintain the mechanical homeostasis of the cell. To examine how cell spreading is influenced by matrix viscoelasticity, Cameron et al. modulated the viscosity (loss modulus) of polyacrylamide (PAM) hydrogels while maintaining the same elasticity (storage modulus) to study the spreading of hMSCs on these hydrogels [139]. Increasing the loss modulus significantly decreased the length of the focal adhesions (FAs), which affected the spreading of the cells. The smaller size of the FAs in hMSCs on more viscous substrates showed that the FAs were less mature and more transient, indicating that the hMSCs were more motile or actively spreading.
An additional study with RGD (Arg-Gly-Asp)-coupled alginate hydrogels showed that viscoelastic hydrogels induced a larger spreading area of human MSCs than elastic hydrogels while keeping the initial modulus or ligand density constant [140]. Scaffolds with increased creep also better promoted the spreading of MSCs in 2D culture [141]. Similar findings were observed in the 3D culture of MSCs. Enhanced creep led to the increased spreading and osteogenic differentiation of MSCs in 2D culture, and increased substrate stress relaxation promoted cell spreading and proliferation in 2D culture and altered the cell morphology in 3D culture [142]. In accordance with this, the promotion of cell spreading on various viscoelastic substrates has been reported in other cell types, such as U2OS cells [140], myoblasts [143], and fibroblasts [142], in both 2D and 3D cell cultures. Moreover, substrate viscoelasticity also plays a regulatory role in cell migration, and substrates with faster stress relaxation promote the migration of cells such as myoblasts [143] and fibroblasts [142]. Both regulatory effects may be explained by focal adhesion (FA) formation and ligand clustering [128]. FA formation is probably the key mechanism through which the viscoelastic properties of the substrate affect cell behaviors [140]. For instance, promoted FA formation was observed in hydrogels with faster relaxation (i.e., more viscoelastic). Chaudhuri et al. used hyaluronic acid and collagen I to form 3D hydrogels and found that FA formation in MSCs was promoted by more viscoelastic hydrogels. The increased accumulation of β1 integrin, indicative of increased FA formation, was observed in the periphery of MSCs encapsulated in RGD-coupled, ionically crosslinked alginate hydrogels with faster stress relaxation [142].

Cell Proliferation

Viscoelastic matrices promote cell proliferation. Chaudhuri et al. showed that MSC proliferation was elevated in a PAM-alginate hydrogel with a faster relaxation rate [142]. Ryan et al. modified collagen hydrogels with insoluble elastin to induce prolonged stress relaxation (i.e., reduced viscosity), which resulted in lower proliferation and a more contractile phenotype of human smooth muscle cells (SMCs) [144]. Chao et al. seeded chondrocytes in chitosan-modified PLCL scaffolds with viscoelastic properties close to those of native bovine cartilage and observed that cell proliferation was higher than in unmodified (non-viscoelastic) scaffolds [145]. Peter et al. seeded preosteoblast cells (MC3T3-E1) on alginate-blended PCL scaffolds, and increased cell proliferation was found on the viscoelastic scaffolds compared to pure PCL (low-viscoelasticity) scaffolds [42]. Finally, Tamate et al. showed that the proliferation of HeLa cells (cancer cells) was inhibited when the viscosity of the hydrogel was diminished [146]. The above studies consistently demonstrated that substrate viscosity promotes cell proliferation in a variety of healthy and cancer cells.

Cell Differentiation

The effect of substrate viscoelasticity on cell differentiation has been studied mostly in MSCs and in applications of orthopedic tissue regeneration. For example, hydrogels with rapid stress relaxation induced greater osteogenic differentiation of MSCs [147][148][149]. Viscoelastic hydrogels have also been successfully applied to regulate cell-cell and cell-matrix interactions for the differentiation and regeneration of bone and cartilage tissues with MSC spheroids [147,150].
The improved osteogenic differentiation of MSCs on faster-relaxing (more viscoelastic) substrates has been related to mechanotransduction regulators such as the enhanced clustering of integrin ligands or stronger actomyosin contractility [142]. Li et al. prepared PAM hydrogels with varied substrate mechanics. Substrates with slower stress relaxation drove the pro-inflammatory polarization of human bone-marrow-derived monocytes and their differentiation into antigen-presenting cells, indicating an anti-inflammatory role of viscoelastic substrates [151].

ECM Deposition

ECM deposition is a key outcome in the regeneration of connective tissues, including bone and cartilage. Chondrocytes encapsulated in scaffolds with viscoelasticity similar to that of native cartilage tissue displayed greater deposition of a cartilage-like matrix composed of type II collagen and aggrecan and lower expression of type I collagen [152]. MSCs encapsulated in a viscoelastic hydrogel consisting of an interpenetrating network of alginate and fibrillar collagen type I with interferon-γ (IFN-γ)-loaded heparin-coated beads suppressed the proliferation of human T cells [153]. However, the results showed that this suppression of proliferation was independent of substrate stiffness and more dependent on the crosslinking components of the hydrogel.

Limitations of Current Viscoelastic Scaffolds

As an emerging area in tissue engineering and mechanobiology, the research into substrate viscoelasticity in cardiac applications is in its infancy. We summarized the reported viscoelastic properties of tissue-engineered scaffolds and native biological tissues in Tables 3 and 4, respectively. Although the tissue-engineered scaffolds cover a large range of viscosity (with half relaxation times ranging from 10 s to 18,000 s), their elasticity is only at the low end (with a Young's modulus <30 kPa and a storage modulus ranging from 0.04 kPa to 130 kPa). This elastic property is far below that of cardiac tissues (typically with a Young's modulus of hundreds or thousands of kPa). Future studies should match both the elastic and viscous behavior of scaffolds to better replicate the physiological viscoelastic properties of cardiac tissues. In addition, the DMA technique (to obtain the storage and loss moduli) is seldom used for the measurement of cardiac tissues (Table 4). Different viscous parameters have also been reported between the two research areas. While the half relaxation time is often provided for tissue-engineered scaffolds, the phase angle is more often obtained for native tissues. Therefore, it is difficult to compare the viscoelastic properties of tissue-engineered scaffolds with those of native cardiac tissues from the current literature. Future tissue engineering research should confirm the similarity of the viscoelastic properties of scaffolds and native tissues using measurements obtained via the same methodology. Furthermore, while the elastic properties of cardiac bioscaffolds are often reported, it remains unknown whether they are viscoelastic. We recently reported different MSC responses to varied matrix stiffness and anisotropy degrees using PEUU scaffolds mimicking healthy and diseased right ventricles. The biaxial elastic behavior was measured in the main-fiber and cross-fiber directions, and anisotropic elastic behavior was confirmed [97].
A re-examination of the two anisotropic scaffold groups that represent healthy (soft) and diseased (stiff) right ventricle elasticities was performed via stress relaxation tests. Unsurprisingly, viscoelastic behaviors were observed in these sheets. Moreover, we observed both elastic and viscous anisotropy in these scaffolds (Figure 6). Therefore, it is possible that the existing cardiac scaffolds present viscoelastic properties, although this behavior has been ignored.

Table 4. Viscoelastic properties of biological tissues reported in the literature. E refers to the elastic/Young's modulus or the initial modulus in stress relaxation. G′ is the storage modulus, G″ is the loss modulus, and W_d is the dissipated energy. The phase angle was calculated as G″/G′.

Figure 6. Viscoelastic properties of the previously reported anisotropic elastic scaffolds that mimic the stiffness of healthy (soft and anisotropic) and diseased (stiff and anisotropic) right ventricles [97]. Viscoelastic properties were measured by equibiaxial stress relaxation at the maximal strain of 15%. The elastic property was measured by the relaxation modulus (A,B), and the viscous property was measured by the dissipated energy (C,D), as described previously [90]. Results are shown as mean ± SE. The main fiber direction used was the longitudinal direction. * p < 0.05 between longitudinal (L) and circumferential (C) directions at the same relaxation time. (Panel labels: Soft & Anisotropic Scaffold; Stiff & Anisotropic Scaffold.)

Future Work

In summary, future work should focus on addressing the limitations of current scaffold fabrication techniques, such as the degree of anisotropy and the thickness limitation of hydrogel-based scaffolds. Additionally, efforts should be made to improve the repeatability and reproducibility of scaffold fabrication methods to ensure consistency across different studies or research groups and to allow for the easier comparison of results. Furthermore, there is a need for the appropriate characterization of scaffold mechanical properties and comparisons with the measurements obtained from myocardium tissues to ensure that engineered scaffolds exhibit the most important mechanical behaviors of native tissues. As biodegradation is expected in many tissue-engineered scaffolds, it is equally critical to investigate the mechanical changes in scaffolds during this process, data that are lacking in the current literature.
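As a concrete illustration of the characterization gap noted above (half relaxation times for scaffolds versus phase angles for native tissues), both metrics can be derived from one simple relaxation model. The standard-linear-solid form and parameter values below are illustrative assumptions, not the analysis performed in the cited work:

```python
import numpy as np

# Standard linear solid (Zener) model used only as an illustrative bridge
# between the two viscous metrics discussed above. Parameter values are
# hypothetical, not measurements from any cited study.
G0, G_inf, tau = 120.0, 60.0, 300.0   # kPa, kPa, s

def relaxation_modulus(t):
    """Stress-relaxation modulus G(t) after a step strain."""
    return G_inf + (G0 - G_inf) * np.exp(-t / tau)

# Half relaxation time: time for G(t) to fall halfway from G0 to G_inf.
t_half = tau * np.log(2.0)

def dynamic_moduli(omega):
    """Storage (G') and loss (G'') moduli of the same model under oscillation."""
    wt2 = (omega * tau) ** 2
    g_storage = G_inf + (G0 - G_inf) * wt2 / (1.0 + wt2)
    g_loss = (G0 - G_inf) * (omega * tau) / (1.0 + wt2)
    return g_storage, g_loss

# Loss tangent (the G''/G' ratio used as "phase angle" in Table 4) at 1 Hz.
gs, gl = dynamic_moduli(2.0 * np.pi * 1.0)
print(f"t_half = {t_half:.0f} s, G''/G' at 1 Hz = {gl / gs:.4f}")
```

Fitting measured curves to such a shared model form would let scaffold and tissue studies report directly comparable parameters, in line with the future work proposed above.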
Overall, continued efforts to improve scaffold design and fabrication techniques will enable the better investigation of the pathology of cardiac diseases and the development of patient-specific treatments for different types of HF, translating the research from bench to bedside.

Conclusions

Heart failure remains a major cause of morbidity and mortality worldwide, and tissue engineering offers promising therapeutic strategies for cardiac regeneration. The inclusion of biomimetic mechanical properties in cardiac scaffolds, such as anisotropy, nonlinear elasticity, and viscoelasticity, is crucial for promoting cell functions and myocardium tissue regeneration. This review summarized recent advances in cardiac scaffolds that achieved these mechanical properties, as well as the advantages and limitations of each method. The biological responses to tissue-specific mechanical environments were also discussed. In summary, this review highlighted the importance of considering mechanical properties in myocardium tissue engineering and regeneration. By developing biomimetic scaffolds, researchers and clinicians can create new opportunities to promote cardiac tissue regeneration and improve patient outcomes. These findings offer hope for the development of new therapeutic strategies to treat heart failure, a leading cause of death in the US and worldwide.
2023-05-14T15:09:45.913Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "01b3affda5c0d5e77c91edacbf4c7cf4db125b5b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/jfb14050269", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86e8d5ad31823bca620fc213bdbd5734941fffad", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
193228976
pes2o/s2orc
v3-fos-license
New Materialism of Dust

Abstract

This text considers the materiality of dust. It maps a transversal route of considering dust, from the processes of polishing iPad covers in Chinese factories to a wider theoretical argument for a media materiality that starts from rocks and chemicals. In short, this kind of new materialism is interested in the various times, durations, entwinements and distributions of a whole range of agencies, several of them non-human. Hence, we are also forced to think about the contexts of new materialism in a slightly more fluid, novel way than just assuming that specificity concerning the technological and the scientific underpinnings of media culture is automatically material. Indeed, materiality is not just about machines; nor is it just solids, and things, or even objects. Materiality leaks in many directions, as electronic waste demonstrates, or the effects of electromagnetic pollution. It is transformational, ecological, and multiscalar.

Insects' wings might beat anywhere between 100 and 1000 beats per second; dead zooplankton sedimented for millions of years forms the backbone of the global economy; most things on the solar radiation spectrum remain unseen to us but are perhaps registered on the body anyway, somehow; think of the aesthetics of magnetic storms in the upper atmosphere and their weird frequencies - worked into an aesthetic piece by Semiconductor's Ruth Jarman and Joe Gerhardt in 20 Hz (2011). A lot happens before humans or cultural theorists arrive at the scene. After that, they might start talking and writing about representation, meaning, signifying practices, discourses and ideology. But even before that, a lot has happened. Or instead of the starting examples, take dust - the thing that covers a lot of the globe (deserts) as well as a lot of our obsolescent media. In the words of Reza Negarestani, in Cyclonopedia, which offers an extensive (political) philosophy of dust:

Each particle of dust carries with it a unique vision of matter, movement, collectivity, interaction, affect, differentiation, composition and infinite darkness - a crystallized data-base or a plot ready to combine and react, to be narrated on and through something. There is no line of narration more concrete than a stream of dust particles.

Dust already counts, as does a litany (see Ian Bogost's Alien Phenomenology concerning lists and litanies) of other non-human things/processes: technologies, chemicals, rabbits, chairs, airplanes, LCD displays, ionization, geological formations, insects, shoes, valves, density of surfaces, and skin. Instead of a list, which we could only fake to be exhaustive, let's just state that matter has its intensities, its affordances and tendencies that are not just passive, waiting for the activity of form(ing) by the human. A lot of the so-called new materialist debate has revolved around trying to figure a way out of the (post-)Kantian world where we do not really have access to such things as dust. We are only able to know about them, mediated through the assumed a priori categories (temporality and spatiality, specific to the transcendental subject). Non-humans and the world are approached through a variety of epistemological measures. This relates to the question of how we actually know anything of the worlds outside us, and how we can be sure about that knowledge.
The same has applied to a lot of academic theory too, where ontology has not been on top of the agenda, for instance in a lot of cultural and media studies (despite such pioneers of rethinking materialism as Lawrence Grossberg). Discussing matters ontological and real has had a bit of a rough time in the midst of various epistemological enterprises about what is knowable, what is not, what is real, what is hallucinated and imagined. The past years have seen an intensifying debate that argues that we need to think more broadly than the question of categories of knowledge and actually account for ontology and ontogenesis. In other words, new materialism tries to steer clear of the hylomorphic fallacy - of a division between us (humans, knowledge, meanings, form) and them (the real world of objects, things, materialities, often assumed passive and meaningless in the signifying sense). The French philosopher Gilbert Simondon adamantly argued that in order for us to grasp the materiality of things and technology, we need to rethink and challenge the assumption that form is external to matter. Perhaps there is a forming inside matter already, an intensity, or as Gilles Deleuze suggested, an element of the virtual? For Simondon, the name for this mattering was individuation - that matter individuates in its milieu. Often in contemporary cultural theory, this is referred to through a broader idea of "new materialism" - not just the materialism as we used to think of it, as mechanical, or in political economy versions as historical or dialectical materialism, but also the materialism of non-humans - whether inside us (for instance bacteria or genes) or outside us (ecology, media technologies, and well, bacteria and genes). Quoting Negarestani before the more established philosophers from whom new materialism stems - Simondon, Brian Massumi, Deleuze, Bruno Latour, Rosi Braidotti, Elizabeth Grosz and others, for instance in the object-oriented ontology brand of thought - is emblematic of the embracing of the speculative nature of the world. It accounts for the insistence that objects and nonhuman events speculate too, even before the philosopher enters the scene. Speculation is not so much a cognitive attitude but a mode of engaging in a situation, in a milieu. Also the particle of dust that we started with speculates through its "unique vision of matter, movement, collectivity, interaction, affect, differentiation, composition and infinite darkness". Speculation engages (in) the event that unfolds. Insects speculate, so do bacteria, and non-organic formations too, as long as we credit them with a certain duration, characteristics and a milieu.
Speculative realism might often, in its object-oriented forms, avoid this talk of events, but it is still worthwhile to bear in mind its contribution to the new materialist discourse. Speculation, in speculative realism, is something that also wants to avoid the linguistic understanding of speculation, but claims that the world, already and outside the human, is speculative, contingent and prone to change. Just like the speculative human thinker or designer, speculative matter is not always sure and determined where it is going and what will happen next. Speculation forces us to question causality, or at least to track it to its bitter complex middle. Speculation can be said to be pragmatic too, in the manner Brian Massumi coins it together with pragmatism and the relation to radical empiricism. Here, speculation addresses the potentiality and change inherent in the world, combined with pragmatism as an attitude towards the processes of composition. Yet, speaking of new materialism, we need to ask whether the focus on non-humans is sufficient, or whether we need a further level of specificity. In short, if new materialism is interested in the qualities of objects, things, processes and the wider vibrancy of matter (as Jane Bennett coins it), is it sufficient to just brand everything as objects, or do we need to keep the agenda much more open to a variety of encounters in thought and (creative) practice, including design? In such speculative design practices as described in Design Noir, by Anthony Dunne and Fiona Raby, objects become only one passage point in understanding the wider topology, spectral geography, of electromagnetic media. This design perspective forces us to link epistemological considerations (visualisations and computer simulations that make the electromagnetic spectrum understandable to the human senses), design practices (how do you engage with such real but invisible worlds) and speculative ontologies (matter that is effective and affective as a mediatic milieu, and yet escapes into the non-human frequencies and speeds). Indeed, the already very briefly mentioned posse of theorists have elaborated very different ways of engaging with the activity of matter; that matter does, is and has a range of effects, causations and reactions - not all of them registered in the sensory systems of humans, and often even less in our cognitive coordinates or epistemological apparatus (which themselves have to be related to histories of technical media). Dust, electromagnetic phenomena, and other non-humans engage in intensive differentiation that demands a different cultural studies vocabulary than the one we inherited from language-biased deconstructionism or representation-analysis. This has ontological implications, as the term "flat ontology" coined by Manuel Delanda and Levi Bryant has demonstrated - we should not give privilege to one particular (generic) type of being. If instead of assuming a fixed set of being, an ontological starting point, we approach it as ontogenesis, we might be able to think of it as an attitude of orientation; a speculative pragmatics even, in the manner that is interested in mapping out future potentials of the world - things as well as real relations. In the midst of theoretical debates and traditions concerning new materialism, one particular approach is to emphasize the differing materialisms of "mediatic" phenomena.
This does not mean we need to reduce the richness of the theoretical approaches concerned with ''media'' or ''technology.'' Instead, a media-focused emphasis is one way to entangle ontological debates concerning new materialism with historical media approaches and practices that wish to engage actively, in an aesthetico-political way, with such intangible realities. Mixing philosophy with media theory offers an insight into why we are so interested in non-human bodies and objects, processes that escape direct and conscious human perception, and the intensity of matter of technological and biological kinds. In short, this media-biased proposition goes something like this: New materialism is not only about intensities of bodies and their capacities such as voice or dance, of movement and relationality, of fleshiness, of ontological monism and alternative epistemologies of generative matter, and the active meaning-making of objects themselves, non-reducible to linguistic signification. Not wanting to dismiss any of those perspectives, I just want to point to the specificity and agency in mediatic matter too. New materialism is already present in the way technical media transmits and processes ''culture,'' and engages in its own version of the continuum of nature-culture (to use Donna Haraway's term) or, in this case, media-natures. Instead of philosophical traditions, we can read modern physics, engineering, and communications technology as mapping the terrain of new materialism: signal-processing, the use of electromagnetic fields for communication, and the various non-human temporalities of vibrations and rhythmics of, for instance, computing and networks are as much based in non-solids as in the conventional materialities that you can grasp with your hand. A slightly different kind of tool relation than with the hammer. Think of pregnancy dresses with thin silver threads to block out electromagnetic radiation. There is a whole other history of invention besides the one that we usually find in creative industries or media history contexts. In such an alternative, non-human history, the soil and rocks might act as storage media for the passing of time, electromagnetic waves transmit and are accidentally picked up by natural antennas, our high-tech media rely on a long history of natural media, and a range of materials exhibit capacities that we often attribute only to high-tech media - in the words of Paul DeMarinis: "semiconductor physics is unaccountably breeding in hidden places". Media history is one big "story" of experimenting with different materials, from glass plates to chemicals, from selenium to coltan, from dilute sulphuric acid to shellac, silk, and gutta-percha, to processes such as crystallization, ionization, and so forth. All of those could be approached through the non-hylomorphic idea of individuation that Simondon proposed. What is more, the materials have their after-effects, nowadays most visible in the amount of e-waste our electronic culture leaves behind, which presents one further ''materiality'' for an investigation interested in tracking the non-human dimensions of media culture. As such, new materialism is, perhaps surprisingly, one such perspective that could make sense of a continuum between mediatic apparatuses as communication tools and materiality both as high tech and as soon-to-be-obsolescent waste. In short: Continua all the way down (and up again); soft to hard, hardware to signs.
In software studies, the continuous relation from the symbol functions on higher levels of coding practices to voltage differences as a ''lower hardware level'' has been recognized: assembly language needs to be compiled, binary is what the computer ''reads,'' and yet such binaries take effect only through circuits; and if we really want to be hardcore, we just insist that in the end, it comes back to voltage differences. Such is the methodology of ''descent'' that Foucault introduced as genealogy, but that German media theory takes as a call to open up the machine physically and methodologically to its physics. It has led to a range of artistic methodologies too, from computer forensics to data carving (as performed by Martin Howse, Danja Vasiliev and Gordan Savičić), to network algorhythmics (Shintaro Miyazaki). In other words, recognizing the way abstraction works in technical media, from voltages and components to the more symbolic levels, allows us to track back, as well, from the world of meanings and symbols - but also a-signification - to the level of dirty matter. This material descent can also take us to consider the theme of material depletion, and open up the whole notion of the medium into its shifting constituent parts. This is the stuff that can contribute to one particular possibility of ''new'' materialism: the perspective of minerals sedimented for millions of years before being mined by cheap labour in developing countries for use in computers and iPads. After that short use-period of some years, they become part of the materiality of e-waste leaking toxins into nature after river-dumping or incineration, turning them into toxic vapours that attach to the nervous systems of cheap labour in China, India, Ghana, etc. Delanda wrote of a thousand years of non-linear history as a proposition to engage with the long durations of rocks, minerals, biomatter and language. Now we can push that into a million, even a billion, years of non-linear history, almost in the way Negarestani suggests in his work of theory-fiction concerning petroleum, dust and other material agencies. A new materialist archaeologist excavates how the sedimented participates in the contemporary biopolitical sphere. A geology of media technologies. This new material biopolitics is embedded in a multitude of durations: A specific design solution concerning a screen or technological component has an effect on its becoming obsolescent sooner than ''necessary,'' while the product itself is embedded in a capitalist discourse emphasizing newness as a key refrain and fetishistic value driving the purchase decisions. And, after being abandoned for another device, what is often called ''recycling'' is just waste-trade, wherein old electronic media is shipped, for instance, to India, to be dismantled with very rudimentary (and dangerous) processes that attach toxins to the lungs and nervous systems of the poor workers. In short, this kind of new materialism is interested in the various times, durations, entwinements and distributions of a whole range of agencies, several of them non-human. For instance, Grosz has pointed out the fruitfulness of this kind of an agenda for theory. This questioning should refuse preset answers, whether those that would focus only on the materiality of the scientific context or those suggesting we are dealing only with objects. New materialism has a specific relation to a future, which also means a certain openness: materialism has to be invented continuously anew - a speculative pragmatism.
It cannot just be discovered lying dormant, formulated in a philosophy book or a theoretical doctrine; instead, speculative mapping turns to the world of non-humans in concrete ways, often aided by artistic practices - whether through technological specificity, scientific contexts, a metaphysical category, or even a critique of the linguistic turn. Hence, we are also forced to think of the contexts of new materialism in a slightly more fluid, novel way than just assuming that specificity concerning the technological and the scientific underpinnings of media culture is automatically material. Indeed, materiality is not just about machines; nor is it just solids, and things, or even objects. Materiality leaks in many directions, as electronic waste demonstrates, or the effects of electromagnetic pollution. It is transformational, ecological, and multiscalar.
2019-06-16T13:13:54.802Z
2012-12-19T00:00:00.000
{ "year": 2012, "sha1": "5f5941450ce6e6de894aee1f331412fe81584302", "oa_license": "CCBY", "oa_url": "http://artnodes.uoc.edu/articles/10.7238/a.v0i12.1716/galley/1472/download/", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d7bde819d28236f91eece08487d9453467c1d26d", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Art" ] }
203639054
pes2o/s2orc
v3-fos-license
What Are the Driving Forces of Urban CO2 Emissions in China? A Refined Scale Analysis between National and Urban Agglomeration Levels

With the advancement of society and the economy, environmental problems have increasingly emerged, in particular, problems with urban CO2 emissions. Exploring the driving forces of urban CO2 emissions is necessary to gain a better understanding of the spatial patterns, processes, and mechanisms of environmental problems. Thus, the purpose of this study was to quantify the driving forces of urban CO2 emissions from 2000 to 2015 in China, including explicit consideration of a comparative analysis between national and urban agglomeration levels. Urban CO2 emissions with a 1-km spatial resolution were extracted for built-up areas based on the Open-Data Inventory for Anthropogenic Carbon dioxide (ODIAC) fossil fuel emission dataset. Six factors, namely precipitation, slope, temperature, population density, normalized difference vegetation index (NDVI), and gross domestic product (GDP), were selected to investigate the driving forces of urban CO2 emissions in China. Then, a probit model was applied to examine the effects of the potential factors on urban CO2 emissions. The results revealed that population, GDP, and NDVI were all positive driving forces, whereas temperature and precipitation had negative effects on urban CO2 emissions at the national level. In the middle and south Liaoning urban agglomeration (MSL), slope, population density, NDVI, and GDP were significant influencing factors. In the Pearl River Delta urban agglomeration (PRD), all six factors had significant impacts on urban CO2 emissions, all of which were positive except for slope, which was a negative factor. Given China's hierarchical administrative levels, the model results suggest that the impacts of the driving factors on urban CO2 emissions differ considerably between the national and urban agglomeration levels. The degrees of influence of most factors at the national level were lower than those at the urban agglomeration level. Based on this analysis of the forces driving urban CO2 emissions, we propose that the environment should play a guiding role and that regions should formulate emission reduction policies suited to their distinct characteristics.

Introduction

With the advance of global socioeconomics, greenhouse gas emissions are continuously increasing [1,2]. It is generally known that greenhouse gas emissions have significant negative effects. In many administrative regions, areas exist that are not sources of CO2 emissions, for example, bodies of water and large national scenic areas [37,38]. Thus, it is necessary to explore the driving forces of urban CO2 emissions on a refined scale.

Due to differences in the environmental and socioeconomic conditions across China, different regions have formed with great disparities in urban development and CO2 emissions [39]. Some empirical studies have quantified the driving forces of urban CO2 emissions in different geographical areas in China. For example, Wang et al. [40] evaluated the CO2 emission efficiency in the Pearl River Delta and indicated that compact urban development could help to improve CO2 emission efficiency. Li et al. [41] explored the effects of urban forms on CO2 emissions in 288 cities in China. Zhao et al.
[42] applied nighttime light datasets and spatial econometric models to examine the socioeconomic and climatic factors associated with spatiotemporal CO2 emissions by dividing China into four parts. Feng et al. [43] applied a system dynamics model to project the energy consumption and CO2 emission trends for the city of Beijing. However, evaluation of the driving forces of urban CO2 emissions has mostly been conducted in specific regions, and regional or scale differences in urban CO2 emissions have rarely been discussed. Generally, it is not appropriate to transfer findings from one spatial scale to another, because socioeconomic development, e.g., urban CO2 emissions, is sensitive to scale changes [17,44]. Scale and hierarchy evaluation are very significant for a better understanding of the complexity of China's regional differences in urban CO2 emissions [45]. Although many studies have examined the driving forces of CO2 emissions in a number of cities, regions, or counties, an evaluation of the driving forces of urban CO2 emissions based on a sampling approach at multiple scales, which is necessary for government policymakers, is still lacking.

This study aims to explore the driving forces of urban CO2 emissions in China. The contributions of this study are summarized as follows: (1) quantifying the driving forces of urban CO2 emissions at a finer spatial resolution (e.g., 1-km spatial resolution) using multiple-source data from 2000 to 2015; (2) evaluating the driving forces of urban CO2 emissions at multiple spatial levels; (3) comparing regional differences in the driving forces of urban CO2 emissions. To address the above questions, we conducted experiments at two administrative levels (i.e., the national and urban agglomeration levels) to test our evaluation in this study. First, urban CO2 emissions and potential driving force data were extracted from multiple-source data from 2000 to 2015. Second, the probit model was employed to evaluate the driving forces of urban CO2 emissions at the national and urban agglomeration levels. Third, the differences between the national and urban agglomeration levels and between the different urban agglomerations were analyzed and compared, according to the factor relations of the urban agglomerations. The remainder of the study is organized as follows. The second section introduces the data and methods. The third section presents the results. A discussion is presented in the fourth section, and the last section describes the conclusions and policy implications.

Study Areas

To effectively quantify and compare the driving forces of urban CO2 emissions in China, study areas were selected from the national and urban agglomeration levels for a multiscale analysis (Figure 1). The main justification for selecting these is that most previous, related urban CO2 emission studies have been analyzed at these levels [46][47][48]. Specifically, at the urban agglomeration level, six urban agglomerations were selected as study areas, namely, the Beijing-Tianjin-Hebei urban agglomeration (BTH), the middle and south Liaoning urban agglomeration (MSL), the Shandong Peninsula urban agglomeration (SP), the Chengdu-Chongqing urban agglomeration (CY), the Yangtze River Delta urban agglomeration (YRD), and the Pearl River Delta urban agglomeration (PRD). BTH is characterized as an industrial and population agglomeration area; Beijing in particular is a megacity with extremely high population density, so its urban CO2 emissions may be very high.
MSL is an old heavy-industry area, and SP has many state-owned enterprises. CY is a representative western urban agglomeration. YRD, as a traditional economic development zone, has developed automobile, chemical, and other industries for decades, which have caused high urban CO2 emissions. PRD is characterized by light industry and foreign trade. These six urban agglomerations are the most developed, densely populated, and economically active areas in China and contain almost all the characteristics of China's urban agglomerations (Figure 1). Specifically, these urban agglomerations have a total population of 499,830,000, which is approximately 36% of the whole population according to the China Statistical Yearbook of 2018. These urban agglomerations cover an area of 1,003,418 km^2, representing approximately 10% of China's territory [24]. Although the economic value created by these areas is considerable, they also suffer from tremendous problems, such as greenhouse gas effects, water pollution, and air pollution. Their CO2 emissions are rising annually and exceed 8 × 10^8 t. Therefore, it is necessary to study the CO2 emission levels and the factors affecting them in these regions.

Figure 1. Spatial distribution of the study areas. Note: BTH represents the Beijing-Tianjin-Hebei urban agglomeration; MSL represents the middle and south Liaoning urban agglomeration; SP represents the Shandong Peninsula urban agglomeration; CY represents the Chengdu-Chongqing urban agglomeration; YRD represents the Yangtze River Delta urban agglomeration; and PRD represents the Pearl River Delta urban agglomeration.

Urban CO2 Emissions

Accurately extracting urban CO2 emissions is a prerequisite for evaluating the driving forces of urban CO2 emissions in China. In this study, the extraction of urban CO2 emissions was divided into three steps. First, urban areas were extracted; then, data on China's CO2 emissions were obtained. Finally, the CO2 emissions for each urban area were estimated.

At present, there are many methods for extracting urban areas. Nighttime light data have been shown to provide an effective way to extract urban areas on a large scale [49,50]. Most previous studies used two types of raw remote sensing data to extract urban areas, namely, the US Air Force Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) data and the National Polar-orbiting Partnership (NPP)-Visible Infrared Imaging Radiometer Suite (VIIRS) data. However, the main problem is determining the threshold value of the nighttime light data. For example, thresholds based on DMSP-OLS data have been adopted by studies to qualitatively or quantitatively partition urban areas [51][52][53][54]. Based on this, with reference to the studies of He et al. [55], Xu et al. [56], and Yang et al. [57], the DMSP-OLS data, NPP-VIIRS data, land surface temperature data, and normalized difference vegetation index (NDVI) data were used to efficiently extract urban areas in China at a 1-km spatial resolution from 2000 to 2015 using the stratified support vector machine method (Figure 2). Subsequently, Landsat Thematic Mapper (TM)/Enhanced Thematic Mapper Plus (ETM+) images were used to examine their spatial accuracy. The accuracy verification results show an average Kappa value of 0.66 and an overall accuracy of 95.20% [55]. Therefore, these datasets could be used to accurately represent urban expansion in China.
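The final overlay (step 3 above) reduces to a masked aggregation once the urban extents just described and the gridded emissions introduced below are co-registered. A minimal sketch, assuming aligned 1-km numpy arrays with placeholder values rather than the actual datasets or the authors' GIS toolchain:

```python
import numpy as np

# Minimal sketch of step 3 (overlay of urban extent and gridded emissions).
# Both inputs are assumed to be co-registered 1-km rasters for one year;
# the array shapes and values below are placeholders, not the real data.
rng = np.random.default_rng(0)
urban_mask = rng.random((400, 500)) > 0.9          # True = urban pixel
emissions = rng.gamma(2.0, 50.0, (400, 500))        # t CO2 per pixel (km^2)

# Keep emissions only where pixels are classified as urban.
urban_emissions = np.where(urban_mask, emissions, 0.0)

# Each pixel covers ~1 km^2, so summing grid values gives total tonnes.
total_urban_co2_t = urban_emissions.sum()
urban_share = total_urban_co2_t / emissions.sum()
print(f"urban CO2: {total_urban_co2_t:.3e} t ({urban_share:.1%} of total)")
```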
The CO₂ emissions data were retrieved from the Open-Data Inventory for Anthropogenic Carbon dioxide (ODIAC) fossil fuel emission dataset of the Center for Global Environmental Research (http://db.cger.nies.go.jp/dataset/ODIAC/), National Institute for Environmental Studies, which is committed to supporting global environmental research by monitoring the global environment, developing databases, operating supercomputers, and providing facilities for data analysis. ODIAC first introduced the combination of nighttime light data and the emission/location profiles of individual power plants to estimate the spatial distribution of CO₂ emissions from fossil fuels at a spatial resolution of 1 km in units of t/km². Currently, ODIAC includes several versions, such as ODIAC2013a, ODIAC2015a, ODIAC2016, ODIAC2017, and ODIAC2018. In this study, we used the ODIAC2016 data product, which was generated by combining multisource nighttime light data, a global point source database, and ship/aircraft fleet tracks [58] (Figure S1). The verification results clearly show that the ODIAC2016 data can effectively match CO₂ emissions at the global, regional, and city scales [58]. Therefore, the data product meets the requirements of large-scale, long time series analysis [59]. Ultimately, we extracted urban CO₂ emissions in China from 2000 to 2015 based on these datasets.

Potential Driving Forces

In this study, we divided the potential driving factors into two categories: socioeconomic factors and natural factors. Based on a literature review [17,60], six factors were selected to investigate the driving forces of urban CO₂ emissions in China: precipitation, slope, temperature, and NDVI as natural factors, and population density and GDP as socioeconomic factors (Figure 3, Figure S2). All the data passed collinearity tests, so each factor captures a different aspect of urban CO₂ emissions [61]. Population density has been shown to be a significant factor driving urban expansion and CO₂ emissions [62]. The 2000-2015 population data were obtained from the Data Center for Resources and Environmental Sciences, Chinese Academy of Sciences (RESDC) (http://www.resdc.cn) (Figure 3m-p). With regard to population, some studies have found that population density has a significant positive effect on CO₂ emissions and a negative spatial spillover effect [63], suggesting that regions with high population densities, such as China's urban agglomerations, contribute more to environmental pollution [64]. Therefore, the population density of the areas under study could significantly affect urban CO₂ emissions. GDP is also a factor affecting urban CO₂ emissions, as China is implementing industrial adjustment and a transformation from traditional industry towards high-efficiency, low-energy-consumption industry [63]. The GDP data in our study were acquired from the RESDC (Figure 3i-l). The combination of temperature and precipitation is often referred to as climate, which is closely related to human production and habitation activities. Climate influences agriculture, industry, and energy supply, and cannot be ignored. Previous studies have reported the share of meteorological factors (temperature and precipitation) in different industrial sectors: 14.38% in the mining industry, 4.71% in the construction industry, and 8.20% in the manufacturing industry [65].
Because of their effects on industrial activities that are closely related to urban CO₂ emissions, temperature and precipitation are indirect influencing factors on urban CO₂ emissions. These data were also collected from the RESDC (Figure 3a-h). Regions with flat terrain usually have more advanced economic development in China (Figure S2). Slope affects urban CO₂ emissions indirectly by influencing urban expansion and economic development. In this study, slope was calculated from digital elevation model data. The data, derived from the Shuttle Radar Topography Mission, were downloaded from the Consortium for Spatial Information (CGIAR-CSI) (http://srtm.csi.cgiar.org/), which offers high-quality elevation data at a 250-m spatial resolution. NDVI was also used as an influencing factor to further analyze urban CO₂ emissions. Vegetation growth has a great impact on atmospheric CO₂ concentrations and, in turn, responds to ambient CO₂ levels. Thus, vegetation is also closely related to urban CO₂ emissions. In this study, the 2000-2015 monthly NDVI composites were obtained from the Geospatial Data Cloud (http://www.gscloud.cn/). These data have been systematically corrected and are provided at a 1-km spatial resolution. We generated annual NDVI composites for 2000-2015 by averaging the monthly composites (Figure 3q-t). Ultimately, all of the spatial data were projected into an Albers conic equal area projection and resampled to a spatial resolution of 1 km.

Probit Model

Many mathematical models have been used to evaluate the driving forces of urban CO₂ emissions, mostly regression models. Traditionally, ordinary least squares (OLS) regression has been widely used to validate the relationships between urban CO₂ emissions and potential driving factors [66]. However, OLS regression has noteworthy limitations as a result of its assumptions that the error term is continuous and symmetric and that the relationship with the independent variables is linear. In many practical problems, the response variable is not continuous, so a discrete choice model is introduced instead [67]. A discrete choice model (e.g., the probit model), most frequently a binary choice model [68,69], can quantify the relationships between urban CO₂ emissions and potential driving factors at the pixel level using binary data. The advantage of the binary choice model is that the results can directly predict locations that are likely to be urbanized [60]. The value of the response variable in the probit model is 0 or 1. The probit model has been applied in many fields, such as medicine, biology, and econometrics [67]. Thus, to explore the impact of each driving factor on urban CO₂ emissions, the probit model was adopted in this study. The model can be expressed as follows:

P(Y = 1 | X) = Φ(α + βX),    (1)

where the model is a binary-response, nonlinear function; Φ(·) is the cumulative distribution function of the standard normal distribution; α and β are parameters to be estimated; Y = 1 indicates that a pixel exhibits urban CO₂ emissions; and X is the vector of driving factors. To express the influence of each factor affecting urban CO₂ emissions more clearly, we incorporated temperature, precipitation, NDVI, slope, population, and GDP into Equation (1); the model can then be expanded into the following equation:

P(Y = 1 | X) = Φ(α + Σ_{i=1}^{n} β_i X_i + u),    (2)

where β_i is the coefficient of the i-th driving factor, n is the number of variables, and u is the interference residual term.
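As an illustration of how Equation (2) can be estimated in practice, the following is a minimal sketch using the Probit class from the statsmodels library, assuming the pixel-level observations have been assembled into a pandas DataFrame; the column names are hypothetical and do not come from the paper, which does not specify its software.

```python
# Minimal sketch of fitting the probit model of Equation (2) with statsmodels.
# Assumes a pandas DataFrame `df` with one row per pixel; column names are
# hypothetical: `urban_co2` is the 0/1 response, the rest are driving factors.
import pandas as pd
import statsmodels.api as sm

def fit_probit(df: pd.DataFrame):
    y = df["urban_co2"]                          # binary response (0/1)
    X = df[["temperature", "precipitation", "ndvi",
            "slope", "pop_density", "gdp"]]
    X = sm.add_constant(X)                       # adds the intercept alpha
    result = sm.Probit(y, X).fit(disp=False)     # maximum likelihood fit
    return result

# result.params then holds alpha and the beta_i coefficients; the sign and
# magnitude of each beta_i are interpreted as described in the text below.
```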
The larger the absolute value of a coefficient, the more urban CO₂ emissions are affected by that factor; conversely, a smaller absolute value indicates a smaller effect. A positive value indicates a promoting effect on urban CO₂ emissions, and a negative value implies an inhibiting effect on urban CO₂ emissions.

Spatiotemporal Variations of Urban CO₂ Emissions

As shown in Figure 4, total urban CO₂ emissions increased year by year, from less than 2 × 10⁸ t in 2000 to nearly 8 × 10⁸ t in 2015, four times the amount in 2000. From 2000 to 2005, total urban CO₂ emissions doubled, with the growth rate reaching 4.5%. From 2006 to 2010, the growth rate was significantly lower than that of the previous stage, declining to 2.1%. From 2011 to 2015, the growth rate increased slightly and remained above 2.0%. An interesting phenomenon is that these three stages were basically consistent with the three stages of China's energy strategy plan, e.g., the "Development Plan of New and Renewable Energy Industry in China for 2000-2015". First, the plan established an economic incentive policy system and an industry management system for 2000-2005. The total annual exploited and utilized amount of new and renewable energy accounted for only 0.70% of total commercial energy consumption; hence, urban CO₂ emissions grew at a high rate. Second, the plan aimed to further improve the economic incentive policy system and the technological monitoring and servicing system for new and renewable energy for 2006-2010. The new and renewable energy share reached 1.25%, and correspondingly, the urban CO₂ emissions growth rate exhibited a massive downturn, dropping to half of the original rate. Third, the plan called for new and renewable energy to become one of the important new emerging trades in China for 2011-2015. The total urban CO₂ emissions growth rate remained at approximately 2.0%. Through a comparison of Figures 4 and 5, we found that from 2000 to 2015, urban CO₂ emissions increased along with the expansion of urban area. Upon closer inspection, the urban area growth rate initially exceeded the urban CO₂ emission growth rate; from 2005 to 2010, both rates declined greatly, although the urban area growth rate decreased more notably. After 2010, both rates were steady and again increased slightly. From these findings, we can see that urban CO₂ emissions accompany urban expansion, and the growth trends of the two variables are similar. Spatiotemporal variations in urban CO₂ emissions in China from 2000 to 2015 are shown in Figure 6. We found that urban CO₂ emissions were concentrated in the six urban agglomerations. In 2000, in the BTH, urban CO₂ emissions were mainly concentrated in the southeast region of Beijing and slightly to the south of the central region of Tianjin. Urban CO₂ emissions in the YRD were mainly concentrated to the north of Shanghai and scattered within southern Jiangsu province and northeastern Zhejiang province. Urban CO₂ emissions in the PRD were mainly concentrated in the northeast and northwest, far from the coastline. By 2005, in the YRD, Shanghai's urban CO₂ emissions had extended from the north to the surrounding areas. In the MSL, urban CO₂ emissions were concentrated in a continuous irregular area near Shenyang. The urban CO₂ emissions of the SP were mainly concentrated in the central region and sporadically distributed.
In the CY, small patches of urban CO₂ emissions were located in Chengdu and Chongqing. In 2010, the urban CO₂ emission concentration areas in the BTH, YRD, PRD, and MSL further expanded; the sporadic areas gradually became patches, and the patches gradually expanded. By 2015, the BTH, YRD, PRD, and MSL had steadily expanded, and the SP and CY had developed sporadically. Initially, the main areas generating urban CO₂ emissions were, in general, a few concentrated regions, such as Beijing, Tianjin, Shanghai, Foshan, and Guangzhou, but urban CO₂ emissions continuously expanded to the surrounding areas: small areas expanded into larger areas, and more sporadic small areas formed. Table 1 shows the regression coefficients of the driving forces that affect urban CO₂ emissions, estimated using the probit model from 2000 to 2015. The effect of each factor on urban CO₂ emissions is highly significant. Among these factors, NDVI (1.77), population density (0.14), GDP (0.12), and slope (0.02) were significantly positively correlated with urban CO₂ emissions. In contrast, temperature (−0.01) and precipitation (−0.11) were significantly negatively correlated with urban CO₂ emissions.

Results of the Driving Forces of Urban CO₂ Emissions at the Urban Agglomeration Level

The driving forces of urban CO₂ emissions at the urban agglomeration level are shown in Table 2. Urban CO₂ emissions in the CY were notably influenced by every factor. Among the factors, temperature (−0.64), precipitation (−0.78), and slope (−0.04) had negative impacts, while population density (0.10), GDP (0.09), and NDVI (1.70) had positive impacts. In the BTH, slope, precipitation, temperature, population density, and NDVI had a remarkable influence, among which slope, population density, and NDVI had positive impacts on urban CO₂ emissions, with coefficients of 0.11, 2.37, and 2.19, respectively, while temperature (−0.10) and precipitation (−0.92) had negative impacts. In the MSL, slope, population density, NDVI, and GDP were significant influencing factors. Slope, population density, and NDVI had positive impacts, with coefficients of 0.11, 0.38, and 1.71, respectively, while GDP (−0.03) was a negative factor. In the SP, slope (0.29), population density (0.47), and NDVI (4.38) positively affected urban CO₂ emissions, while GDP (−0.11) was a negative factor. In the YRD, all six factors had significant impacts on urban CO₂ emissions. Slope had a negative influence, and the other factors had positive influences; the coefficients were 0.42 (precipitation), −0.15 (slope), 0.04 (temperature), 0.51 (population density), 3.08 (NDVI), and 0.20 (GDP). In the PRD, all the factors except temperature had positive impacts on urban CO₂ emissions; the coefficients were 0.04 (precipitation), 0.32 (slope), 0.32 (population density), 0.99 (NDVI), and 0.07 (GDP).

Note: Significant at the * 10% level, ** 5% level, and *** 1% level. BTH represents the Beijing-Tianjin-Hebei urban agglomeration; MSL represents the middle and south Liaoning urban agglomeration; SP represents the Shandong Peninsula urban agglomeration; CY represents the Chengdu-Chongqing urban agglomeration; YRD represents the Yangtze River Delta urban agglomeration; and PRD represents the Pearl River Delta urban agglomeration.

Driving Forces of Urban CO₂ Emissions at the National Level

From Table 1, at the national level, we found that each factor had a significant impact on urban CO₂ emissions.
It should be noted that some driving factors, such as temperature and precipitation, had negative effects on urban CO₂ emissions, while others, such as slope, population, NDVI, and GDP, had positive effects. In terms of temperature, China has a cold living environment in winter; thus, coal has become the main source of heating, especially in northern China, resulting in high CO₂ emissions [70]. Precipitation, accompanied by winds, can relieve CO₂ concentrations in the air to a certain extent, so more rain is associated with lower urban CO₂ emissions [71]. Slope, population, GDP, and NDVI are all positive driving forces. An increase in population leads to an increase in man-made CO₂ emissions; for example, population growth means more families and more private cars, which leads to an increase in urban traffic CO₂ emissions [72]. The growth of GDP is also inseparable from the development of industry, and many factories emit large amounts of CO₂ [64]. In addition, the greater the slope, the less likely it is that CO₂ produced in the region will disperse, which might lead to an increase in local CO₂ concentrations. However, the result for the NDVI factor did not seem to fit our expectations: one would expect vegetation to absorb CO₂, but in this result, the more vegetation there was, the more urban CO₂ emissions were observed. Because built-up areas were used to extract the urban CO₂ emissions and the various potential factors, the vegetation captured here is urban vegetation. While vegetation has an absorption effect on CO₂, relative to the large amount of urban CO₂ emissions, this absorption effect is not substantial. Therefore, NDVI shows a positive impact on urban CO₂ emissions [73]. Moreover, urban vegetation is generally located near residential or industrial areas, meaning that it is often accompanied by residential or industrial CO₂ emissions, perhaps creating the false impression that urban CO₂ emissions are positively correlated with the growth of NDVI [66].

Difference in the Driving Forces of Urban CO₂ Emissions in the Six Urban Agglomerations

The study of the driving factors at the national level neglects regional characteristics. Thus, the driving factors of urban CO₂ emissions were evaluated and compared at the urban agglomeration level. Table 2 shows each factor that affected urban CO₂ emissions at the urban agglomeration level. First, considering GDP, the highest degree of influence is seen in the YRD, with an influence coefficient of 0.20, while the lowest degree of influence is in the PRD, with a coefficient of 0.07. As is well known, the YRD and the PRD are economically developed areas of China, but from the results, we found a large difference in the degree of influence of GDP on CO₂ emissions in the two urban agglomerations. In 2000-2015, the YRD was ranked at the top in China for secondary industry, which makes the greatest contribution to its GDP; however, a large number of factories were constructed in the development of secondary industry, causing high CO₂ emissions. Therefore, GDP has a high influence in the YRD. By contrast, the PRD also has a high-level economy, but the degree of influence of GDP there is relatively small. The reason may be that the PRD's tertiary industry is more advanced; thus, tertiary industry makes the greatest contribution to its GDP. Tertiary industry represents the service industry, which emits less CO₂ than secondary industry.
Therefore, this clearly indicates that the industrial structure of the YRD still needs to be improved and upgraded. Second, for the NDVI, we found that the NDVI has a far higher influence in the SP than in the other urban agglomerations, with a coefficient of 4.38. This phenomenon might be explained by the fact that, as shown in Figure 4, the distribution of vegetation in the SP has an aggregation effect and is evenly distributed throughout, unlike in the other urban agglomerations. Therefore, in local industrial production areas, the degree of influence of the NDVI around factories would be relatively higher. In addition, because the built-up area in the SP is dispersed, the cities in the SP are not very close to each other, and the central cities do not play a strong leading role. Third, with regard to population density, we found that the degree of influence in the BTH and the SP is high, with coefficients of 2.37 and 2.47, respectively. Further analysis revealed that although population density has a high degree of influence in both urban agglomerations, the reasons are different. The BTH has a large number of national high-tech industries and attracts many highly competent people, forming a very strong siphon effect that also attracts many ordinary people from across China. The majority of these people are rural workers, and most of them provide supporting services [74]. Therefore, a large proportion of the migrant population is engaged in low-level services. The result of this influx is a large population, leading urban CO₂ emissions to increase sharply due to transportation and growth in the residential sector. For the SP, however, the effect of population density may be due to its large population base and heavy industry. Thus, the impact of the population factor on urban CO₂ emissions is very notable. Fourth, slope is a negative driving factor in the CY and the YRD. The CY is located in a mountainous area. Compared to flat areas, high-altitude and steep areas are less likely to be developed because it costs more to construct built-up areas there [75]. At the same time, ecological protection policies are adopted in areas with high gradients, most of which protect soil and water [66]. Thus, tourism is the prioritized development industry. Therefore, in the CY, the higher the slope, the less construction there is and the lower the urban CO₂ emissions; hence, slope is a negative factor. In the YRD, protection and development policies are also implemented for areas with higher slopes [76]. Tourism and characteristic agriculture are also carried out in the region to increase local revenue. This situation provides a development concept for mountainous and hilly areas: make full use of regional advantages and develop characteristic industries.

Difference in the Driving Forces of Urban CO₂ Emissions at the Two Levels

Hierarchy and scale effects exist widely in all fields of socioeconomic development [17]. Due to China's hierarchical administrative levels, a higher administrative level (e.g., the nation relative to urban agglomerations) generally has stronger administrative powers [44,77], consequently resulting in different spatiotemporal patterns and driving forces of urban CO₂ emissions across administrative levels.
In this study, the model results suggest that the impacts of the driving factors on urban CO₂ emissions differed considerably between the national and urban agglomeration levels. Temperature and precipitation showed negative impacts at the national level, with similar impacts in the CY and the BTH. However, temperature and precipitation positively affected urban CO₂ emissions in the YRD and the PRD. These two urban agglomerations are located in regions where the temperature and precipitation are relatively constant throughout the year; for example, residents in the YRD and PRD regions do not need to burn coal for heat in winter. Such climatic conditions are more conducive to productive activities; therefore, temperature and precipitation are positive factors there. Slope has a positive impact on urban CO₂ emissions at the national level because built-up areas are almost always distributed on flat terrain, and a low slope is beneficial for socioeconomic development. However, the built-up areas of the CY and the YRD lie in hilly terrain, which might explain the negative influence of slope in these two urban agglomerations. In addition, it is clear that population density and GDP both have positive impacts on urban CO₂ emissions at the national level. We also found that all driving factors were significant at the national level. However, for individual urban agglomerations, multiple local conditions modify the effects. Although these conditions are complex, they always have intrinsic causes. For example, with regard to temperature and precipitation, the PRD has a subtropical-tropical humid monsoon climate that remains stable and results in high temperatures and high rainfall year round [78], so the climate has only a slight effect there. In the BTH, GDP is not a significant factor; this region's industrial structure is multifaceted and multilayered [74], and the finance sector and high-tech industries account for the largest proportion of the GDP. Therefore, although GDP is high, it has a small impact on CO₂ emissions. The comparison of the national and urban agglomeration levels shows that the degree of influence at the national level is usually lower than that at the urban agglomeration level. For the various regions, the six factors (temperature, precipitation, GDP, slope, NDVI, and population density) have effects of different magnitudes, and the degree of influence of each factor at the national level reflects an aggregate of its degrees of influence at the regional level. Therefore, the effect of each factor at the national scale may not be as great as that at the smaller scale of an urban agglomeration [61,79]. Although the degree of influence of the factors on urban CO₂ emissions was usually lower at the national level, there are exceptions, because each urban agglomeration has different conditions than the nation as a whole.

Limitations and Future Directions

There are several limitations worth mentioning. As a complex environmental problem, urban CO₂ emissions may be affected by many other factors, and many aspects should be further studied, such as agricultural and construction factors [80]. The selected factors are mostly natural potential forces, but socioeconomic factors also play a significant role, such as transportation, distance from a body of water, built-up area, and residential emissions. These factors should be incorporated into future studies.
In addition, with the development of urban agglomerations, the regions are changing, and the number of urban agglomerations will continue to rise; therefore, the urban area data should be updated over time, refining the spatial resolution from 1000 m to 500 m or even finer. Images captured by sensors on high-resolution satellites, such as the Landsat series, can be used to interpret urban areas more accurately. The model in this study is an econometric one that does not consider spatial location; therefore, the impact of spatial location was not examined, and the use of the geographic data in the model could be improved. Other appropriate models could also be used, such as the panel model, which has been improved with regard to capturing undesirable environmental outputs; in addition, panel data models [81], static and dynamic panel models [82], panel cointegration models [83], and modified input-output models [84] could be used.

Conclusions and Policy Implications

This study explored the driving forces of urban CO₂ emissions in China with a comparative analysis between the national and urban agglomeration levels. We selected four years (2000, 2005, 2010, and 2015) to clearly determine the total amount and growth rate of urban CO₂ emissions by analyzing the spatial and temporal changes in urban CO₂ emissions from 2000-2015. Temporally, it was observed that total urban CO₂ emissions consistently increased from 2 × 10⁸ t to 8 × 10⁸ t, but the rate of increase drastically declined after 2005 and then stabilized, which corresponds with the implementation of the policies outlined in the "Development Plan of New and Renewable Energy Industry in China for 2000-2015". A probit model was used to quantify the effects of six factors (population density, GDP, slope, temperature, precipitation, and NDVI) on urban CO₂ emissions. At the national level, the cold living environment in China in winter might play an important role in promoting coal burning for heating; thus, temperature has a negative impact on urban CO₂ emissions. The NDVI is an interesting positive factor and an indicator of the degree of urban CO₂ emissions. Vegetation in cities is limited; although it absorbs some CO₂, it only slightly offsets the large amount of urban CO₂ emissions. At the urban agglomeration level, we observed certain phenomena and examined the results in detail. We found that even though the YRD and the PRD are both economically developed regions with GDPs among the highest in China, the influence of GDP on urban CO₂ emissions in the YRD and the PRD differs vastly: GDP has the highest degree of influence in the YRD and the lowest in the PRD. Upon closer examination, the YRD and the PRD have different industrial structures, with the YRD having many secondary industry enterprises that are mainly engaged in manufacturing, placing the area at the top in China for secondary industry. However, the GDP percentage of tertiary industry in the PRD is higher than that in the YRD. The comparison of the two urban agglomerations suggests that the YRD should readjust and upgrade its economic structure. In the SP, the data show that vegetation has an aggregation effect and is evenly distributed throughout the SP; the vegetation in built-up areas does not absorb a large proportion of the urban CO₂ emissions. The high coefficient indicates that the SP still has many CO₂-intensive industries.
Given the scattered built-up areas, it is not difficult to observe that the cities in the SP are not close to each other, which further indicates that the central cities do not play a strong driving role. Moreover, the population density factor has a high degree of influence on urban CO₂ emissions in the BTH and the SP, but for different reasons. In the BTH, a large number of people are engaged in low-level service employment, and in terms of transportation, a high number of residents causes high CO₂ emissions. At the same time, in the BTH, the population is mainly concentrated in Beijing and Tianjin. For mountainous areas, the slope factor negatively affects urban CO₂ emissions in the CY and the YRD: the higher the slope, the lower the CO₂ emissions. We also compared the results at the national and urban agglomeration levels. In contrast to the national level, climate is a positive factor in the YRD and the PRD. At the national level, each factor is significant, but for a single urban agglomeration, because of the characteristics of the region itself, the situation is more complex. The degrees of influence of most factors at the national level are lower than those at the urban agglomeration level. Based on the results, several policy proposals are presented. First, government policy-making departments should introduce and support the development of new and renewable energy industries to effectively decrease urban CO₂ emissions while contributing to the national economy. Spatially, we found that urban CO₂ emissions are concentrated in the central cities of the urban agglomerations; thus, for the central cities, the relevant governmental departments should actively adjust and upgrade the industrial structure, moving from traditional industries, such as low-level service industry, heavy industry, and manufacturing and handicraft industries, towards intensively developed high-level service industries, high and new technology industries, and the finance sector. Moreover, lessons should be drawn from the development of central cities, and the central cities should guide the new and rapidly developing cities. Second, the YRD should readjust and upgrade its economic structure to sustain its growth. In the SP, the government should intensively develop new high-technology industries. In addition, because the SP has a long coastline and good harbors, the marine industry can be further developed. We believe that the SP should develop multiple industries based on its advantages. In the future, the Shandong government should foster the exchange of ideas and cooperation among cities and expand the leading role of the central cities. In the BTH, to mitigate the population pressure in the two largest cities, the government should intensively develop the surrounding areas, such as the cities of Hebei. In the CY, the government should make full use of its regional advantages to develop characteristic industries. Third, China's government should establish a better regulatory system to coordinate urban CO₂ emission reduction across the various regions, determine the problems that most affect each region based on the available data, and develop national urban CO₂ emission reduction guidelines to guide local governments in formulating specific policies according to their actual context.
The risk of developing cardiovascular disease is increased for patients with prostate cancer who are pharmaceutically treated for depression

Objective To examine the associations between pharmaceutically treated anxiety and depression and incident cardiovascular disease (CVD) among 1-year prostate cancer survivors. Patients and methods A registry-based cohort study design was used to describe the risk of incident CVD in adult 1-year prostate cancer survivors without a history of CVD. Patients with prostate cancer diagnosed between 1999 and 2011 were selected from the Netherlands Cancer Registry. Drug dispenses were retrieved from the PHARMO Database Network and were used as proxies for CVD, anxiety, and depression. Data were analysed using Cox regression analysis to examine the risk associations between pharmaceutically treated anxiety and depression, entered as time-varying predictors, and incident CVD in 1-year prostate cancer survivors, while controlling for age, traditional CVD risk factors, and clinical characteristics. Results Of the 5262 prostate cancer survivors, 327 (6%) developed CVD during the 13-year follow-up period. Prostate cancer survivors who were pharmaceutically treated for depression had an increased risk of incident CVD after full adjustment compared to prostate cancer survivors who were not pharmaceutically treated for depression (hazard ratio [HR] 1.51, 95% confidence interval [CI] 1.06-2.15). The increased risk of incident CVD amongst those pharmaceutically treated for depression, compared to those who were not, was only valid among: prostate cancer survivors aged ≤65 years (HR 2.91, 95% CI 1.52-5.55); those who were not treated with radiotherapy (HR 1.63, 95% CI 1.01-2.65); those who were treated with hormones (HR 1.76, 95% CI 1.09-2.85); those who were not operated upon (HR 1.55, 95% CI 1.07-2.25); and those with tumour stage III (HR 2.21, 95% CI 1.03-4.74) or stage IV (HR 2.47, 95% CI 1.03-5.89). Conclusion Patients with prostate cancer who were pharmaceutically treated for depression had a 51% increased risk of incident CVD after adjustment for anxiety, age, traditional CVD risk factors, and clinical characteristics. The results emphasise the need to pay attention to (pharmaceutically treated) depressed patients with prostate cancer prior to deciding on prostate cancer treatment and for a timely detection and treatment of CVD.

Introduction

A considerable proportion of prostate cancer survivors experience late effects from the cancer itself and its treatment, such as co-morbid cardiovascular disease (CVD) and psychological distress [1,2]. A recent case-control study by our group concluded that prostate cancer survivors have an increased risk of incident CVD compared with their age-matched cancer-free controls [3]. This increased risk may be the result of cardiotoxic cancer treatment, as several chemotherapeutic agents and hormone treatments can lead to a heterogeneous group of CVDs [4]. As more patients with prostate cancer survive their cancer, cardiotoxic side-effects demand consideration. Moreover, there are similar underlying behavioural risk factors for both prostate cancer and CVD, such as obesity and smoking [5]. The incidence rate of cardiac co-morbidity in prostate cancer survivors who received cardiotoxic treatment is ~3% [3]. This suggests that there are additional factors involved in the pathogenesis of incident CVD amongst prostate cancer survivors.
Knowledge of predictors of incident CVD is vital, as a prostate cancer patient's individual risk should be considered when opting for a cancer treatment that has a high probability of cardiac complications. Little is known about risk factors for incident CVD amongst prostate cancer survivors. However, there is ample knowledge on predictors of CVD in non-cancer populations. First, traditional CVD risk factors, such as hypertension, hypercholesterolaemia, and diabetes mellitus, are known to play an important role in the pathogenesis of CVD [6,7]. These risk factors have been confirmed to play a role in incident CVD in various malignancies as well [3,8,9]. More recently, several studies have shown that psychological distress, such as being anxious or depressed, also increases the risk of the development and progression of CVD in non-cancer populations, independent of traditional biomedical risk factors [10][11][12][13]. It is well known that many prostate cancer survivors experience high levels of psychological distress, which can persist for years after cancer treatment has finished [14]. Prevalence rates for anxiety and depression range from 15% to 27%; hence, one in every five prostate cancer survivors is afflicted [14,15]. Consequently, prostate cancer survivors may have an increased risk of incident CVD if only because of these elevated levels of psychological distress after cancer diagnosis [16]. Indeed, a study by our group amongst breast cancer survivors showed that pharmaceutically treated anxiety prior to cancer diagnosis increases the risk of incident CVD, while controlling for depression, traditional cardiovascular risk factors, and clinical factors [17]. Interestingly, the predictive value of anxiety and depression for incident CVD, in addition to the traditional CVD risk factors and cancer treatments, is a key clinical objective in the field of CVD but has never been studied within the field of prostate cancer survivorship. The aim of the present study was therefore to examine the associations between pharmaceutically treated anxiety and depression and incident CVD in 1-year prostate cancer survivors.

Study design and setting

A registry-based cohort study design was used to describe the risk of incident CVD in adult 1-year prostate cancer survivors without a history of CVD. Patients with prostate cancer diagnosed in the Southern Region of the Netherlands between 1 January 1999 and 31 December 2011 were selected from the Netherlands Cancer Registry (NCR). The NCR includes all newly diagnosed patients with cancer and registers the type of malignancy, date of diagnosis, stage, and primary cancer treatment [18]. For patients with cancer diagnosed from 1998 onwards, the PHARMO Database Network was linked to data from the NCR; a detailed description of this linkage is given elsewhere [19]. PHARMO is a large, patient-centric, multi-linked data network that contains longitudinal data on drugs dispensed by community pharmacies, including the date and amount of each dispensing [19]. Drug prescriptions are coded according to the international Anatomical Therapeutic Chemical (ATC) Classification developed by the WHO [20] and were used as a proxy for CVD, anxiety, and depression in this study. This study does not fall under the Medical Research Involving Human Subjects Act in the Netherlands, as anonymous observational patient information was used. Therefore, this study was exempted from medical ethics review, and no informed consent was required.
The procedures of the study were in accordance with the Declaration of Helsinki.

Participants

Adult patients with prostate cancer diagnosed between 1 January 1999 and 31 December 2011 were selected from the NCR. The aim of the study was to examine the risk of incident CVD secondary to prostate cancer diagnosis. Therefore, prostate cancer survivors who had a history of CVD in the year prior to cancer diagnosis were not eligible. The start of follow-up was set at 1 year after diagnosis because cancer treatment is generally finished within the first year. This choice avoided CVD detection due to increased medical evaluations and excluded the direct and sometimes reversible effects of cardiotoxic treatment. Follow-up time was measured until CVD, death, loss to follow-up, or the end of the study period (31 December 2011), whichever occurred first.

Drug prescriptions for CVD

The following algorithm was used to define CVD: ≥2 drug dispenses of 'cardiac therapeutics' (i.e., ATC code C01 [21]) at unique dates within 6 months. When participants dispensed two ATC code C01 drugs within a 2-week period (<15 days in between), they were classified as having CVD only if they had at least three ATC code C01 dispenses at unique dates. To avoid false classifications of CVD, our definition was based solely on the use of cardiac therapeutics (ATC code C01). Use of drugs with a broad treatment range including non-cardiac indications, such as diuretics (ATC code C03) or β-blockers (ATC code C07), was insufficient for classification as having CVD. Drug dispensing information for anxiety (ATC code N05B) and depression (ATC code N06A) in the year prior to and after cancer diagnosis was included. Survivors were classified as being anxious or depressed (yes/no) as indicated by ≥1 drug prescription.

Traditional risk factors for CVD

Information on traditional CVD risk factors was obtained from prescription drugs over the duration of follow-up for hypertension (ATC codes C02, C03 [except C03C], C07, C08, and C09 [except C09X]), hypercholesterolaemia (ATC code C10), and diabetes mellitus (ATC code A10). Having one of the traditional cardiovascular risk factors (yes/no) was defined as ≥1 drug prescription.

Demographics and clinical characteristics

Demographics at the index date were extracted from the PHARMO database. Clinical information on tumour stage and primary cancer treatment (having received chemotherapy, radiotherapy, hormone therapy, or surgery [yes/no]) was obtained from the NCR.

Statistical analyses

Differences in demographics, clinical characteristics, traditional CVD risk factors, and pharmaceutically treated anxiety and depression between prostate cancer survivors with and without incident CVD were tested using ANOVA, chi-squared tests, and t-tests for independent samples. To examine the associations between pharmaceutically treated anxiety and depression and incident CVD in 1-year prostate cancer survivors, Cox proportional hazards regression analyses were performed. In other words, we examined whether the risk of incident CVD for those pharmaceutically treated for anxiety and depression differed from the risk for prostate cancer survivors who were not pharmaceutically treated for anxiety and depression. Anxiety and depression were entered in a time-dependent manner, and the timescale used was follow-up time since diagnosis, starting at 1 year after diagnosis.
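The dispensing-based CVD definition above can be made concrete with a short sketch. The following is a minimal illustration, assuming each patient's ATC C01 dispense dates are available as a list of datetime.date objects; the function name and data layout are hypothetical, and only one plausible reading of the rule as stated in the text is implemented.

```python
# Minimal sketch of the CVD classification rule described above:
# >= 2 ATC C01 dispenses at unique dates within 6 months (taken here as
# 183 days); if the only qualifying pairs are < 15 days apart, require
# >= 3 dispenses at unique dates. Data layout and names are hypothetical.
from datetime import date

def has_cvd(c01_dates: list[date]) -> bool:
    dates = sorted(set(c01_dates))            # unique dispense dates only
    if len(dates) < 2:
        return False
    pairs = [(a, b) for i, a in enumerate(dates) for b in dates[i + 1:]
             if (b - a).days <= 183]          # pairs within ~6 months
    if not pairs:
        return False
    if any((b - a).days >= 15 for a, b in pairs):
        return True                           # a pair at least 2 weeks apart
    return len(dates) >= 3                    # only close pairs: need >= 3 dates

# Example: two dispenses 3 days apart do not qualify on their own.
assert not has_cvd([date(2005, 1, 1), date(2005, 1, 4)])
assert has_cvd([date(2005, 1, 1), date(2005, 1, 4), date(2005, 3, 1)])
```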
Separate analyses were performed to examine the associations between pharmaceutically treated anxiety and depression and incident CVD. Additionally, pharmaceutically treated anxiety and depression were entered together. The analyses included covariates that were entered in three steps. First, we adjusted for age (continuous). Second, we added the traditional CVD risk factors (i.e., hypertension, hypercholesterolaemia, and diabetes mellitus) as time-varying covariates. Finally, in the fully adjusted model, clinical characteristics (i.e., tumour stage and cancer treatment) were added to the model. The proportional hazards assumption of the Cox regression analyses was evaluated by visual inspection of the Kaplan-Meier curves. The assumption of linearity of continuous covariates was checked with residual plots. Sensitivity analyses were conducted to examine whether the associations between pharmaceutically treated anxiety and depression and incident CVD risk in the fully adjusted model differed by the timing of the development of anxiety or depression (prior or secondary to the cancer diagnosis) by means of stratified analyses. Furthermore, to explore whether the relation between pharmaceutically treated anxiety and depression and incident CVD was similar across subgroups, stratified analyses were performed for age (≤65 vs >65 years at cancer diagnosis), the presence of traditional CVD risk factors, cancer treatment category (chemotherapy, radiotherapy, hormone therapy, and surgery), and tumour stage in the fully adjusted model. Missing data were handled in previous steps and are described elsewhere [19]. All analyses were conducted using the Statistical Package for the Social Sciences (SPSS®), version 22 (SPSS Inc., IBM Corp., Armonk, NY, USA). Statistical significance was set at P < 0.05. We chose not to use a more stringent α level, as this is the first study relating both pharmaceutically treated anxiety and depression to incident CVD in prostate cancer survivors; hence, we wanted to avoid making a type-2 error.

Patient characteristics

Of the 5924 eligible prostate cancer survivors included in the NCR, 541 survivors were excluded because they had received CVD medications in the year prior to or after their cancer diagnosis. Additionally, 121 were excluded because they were deceased or lost to follow-up in the first year after cancer diagnosis. In total, 5262 were included in the present analyses. The excluded survivors were older, had more traditional CVD risk factors, less often had tumour stage I, were more often treated with hormone therapy and less often with radiotherapy or surgery, but, most importantly, did not differ in the prevalence of pharmaceutically treated anxiety and depression (data not shown). The follow-up time ranged from 1 to 13 years (Table 1). During this period, 6% of the prostate cancer survivors developed CVD (n = 327). The prostate cancer survivors who developed CVD differed from those who did not; see Table 1 for the specific differences.

Number of prostate cancer survivors who were pharmaceutically treated for anxiety and depression before and after prostate cancer diagnosis

In total, 859 (16%) prostate cancer survivors were pharmaceutically treated for anxiety, of whom 235 (4%) started taking medication for anxiety before cancer diagnosis and 624 (12%) started taking medication for anxiety after their cancer diagnosis (Table 1).
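As an illustration of a Cox regression with time-varying predictors of this kind, the following is a minimal sketch using the lifelines library, assuming the data have been reshaped into the long format that time-varying Cox models require (one row per patient per interval); the DataFrame and column names are hypothetical, and the paper's analyses were run in SPSS.

```python
# Minimal sketch of a Cox regression with time-varying covariates using
# lifelines. Assumes a long-format DataFrame with one row per patient per
# time interval; all column names are hypothetical (the paper used SPSS).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

def fit_time_varying_cox(df_long: pd.DataFrame):
    # Expected columns: 'id', 'start', 'stop' (years since the 1-year index
    # date), 'cvd' (event indicator), plus covariates such as 'depression',
    # 'anxiety', 'age', 'hypertension', 'hypercholesterolaemia', 'diabetes'.
    ctv = CoxTimeVaryingFitter()
    ctv.fit(df_long, id_col="id", event_col="cvd",
            start_col="start", stop_col="stop")
    ctv.print_summary()   # hazard ratios appear as exp(coef) in the summary
    return ctv
```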
In addition, 546 survivors (10%) were pharmaceutically treated for depression, of whom 172 (3%) started taking antidepressants before cancer diagnosis and 374 (7%) started taking antidepressants after their prostate cancer diagnosis.

Note to Table 1: Information is provided as n (%) for categorical variables, whereas follow-up time and age are presented as median years (range). There were missing values across all variables. As patients with prostate cancer could have received more than one treatment, the total number does not add up to 5262. ANOVA was used for the categorical variables, chi-squared tests for dichotomous variables, and t-tests for continuous variables. *Significant difference (P < 0.05) between those with and without incident CVD. †Being pharmaceutically treated for at least one of the traditional cardiovascular risk factors (i.e., hypertension, hypercholesterolaemia, or diabetes mellitus) during the 1 year prior to cancer diagnosis, yes/no. ‡The total number of survivors classified as pharmaceutically treated for anxiety/depression regardless of when they started taking medication, either before or after prostate cancer diagnosis. §The number of survivors classified as pharmaceutically treated for anxiety/depression before and after prostate cancer diagnosis, respectively.

Associations between anxiety and depression and incident CVD

The associations between pharmaceutically treated anxiety and depression, entered as time-varying predictors, and incident CVD risk were analysed separately (data not shown). Hence, we analysed whether prostate cancer survivors who were pharmaceutically treated for anxiety or depression had an increased risk of incident CVD compared with prostate cancer survivors who were not pharmaceutically treated for anxiety or depression. Both pharmaceutically treated anxiety and pharmaceutically treated depression were positively associated with the risk of incident CVD in the age-adjusted model (both P < 0.05). The increased risk of incident CVD for pharmaceutically treated depression attenuated slightly but remained significant when controlling for traditional CVD risk factors in the partially adjusted model (HR 1.48, 95% CI 1.04-2.11). Pharmaceutically treated anxiety was no longer significantly related to incident CVD risk when controlling for traditional CVD risk factors. In the full model, after additionally controlling for clinical cancer characteristics, the positive association between pharmaceutically treated depression and incident CVD risk remained significant (HR 1.51, 95% CI 1.06-2.15). Cumulative incidence plots illustrating the risk of incident CVD among those pharmaceutically treated for depression compared with those who were not, over the follow-up period of 13 years, are presented in Appendix S1. Results of the sensitivity analyses showed that the associations between pharmaceutically treated anxiety and depression and incident CVD risk amongst prostate cancer survivors did not differ between those with anxiety or depression present prior to the prostate cancer diagnosis and those who developed it afterwards (data not shown).
Stratified analyses for age, traditional CVD risk factors, cancer treatment, and tumour stage

The results of the analyses examining whether the relation between pharmaceutically treated depression and incident CVD was similar across subgroups of age (≤65 vs >65 years at the time of cancer diagnosis), traditional CVD risk factors, cancer treatment categories, and tumour stage are presented in Table 3 and Fig. 1.

Discussion

In the present study, we found that pharmaceutically treated depression and anxiety increased the risk of incident CVD in 1-year prostate cancer survivors when controlling for age. This increased risk of incident CVD amongst pharmaceutically treated depressed prostate cancer survivors remained statistically significant after controlling for anxiety, traditional CVD risk factors, and clinical characteristics, and was limited to younger prostate cancer survivors, those who received no radiotherapy, those who were treated with hormones, those who were not operated upon, and those with tumour stages III and IV. The increased risk of developing CVD when pharmaceutically treated for depression in our study (51%) is consistent with a meta-analysis among healthy individuals, in which the effect sizes across various populations and methodological characteristics varied between 32% and 57% [13]. The association between pharmaceutically treated depression and incident CVD risk amongst prostate cancer survivors did not differ with respect to the timing of its development, that is, prior or secondary to the prostate cancer diagnosis. It is well known that the diagnosis of cancer and the associated cancer treatment are associated with an increased risk of developing depression [16,22]. Apparently, patients who have depression before cancer diagnosis already have a higher risk of developing CVD regardless of cancer and cancer treatment [16,23,24]. Various behavioural and pathophysiological mechanisms have been suggested to underlie the association between depression and the risk of CVD [24,25]. One possibility is that prostate cancer survivors who are pharmaceutically treated for depression smoke more often and exercise less than prostate cancer survivors who are not pharmaceutically treated for depression [26][27][28]. As the majority of studies in cardiovascular research have shown that smoking and limited exercise increase the risk of CVD in non-cancer populations, smoking and limited exercise have been included in the European Guidelines on cardiovascular risk prevention in clinical practice [29]. Although previously found increased CVD risks amongst (prostate) cancer survivors may not be attributable to lifestyle factors [30], other (i.e., pathophysiological) mechanisms may be relevant in cancer populations. Pathophysiological mechanisms (e.g., high cortisol levels, impaired platelet function, reduced heart rate variability, altered immune functioning, and oxidative stress) could be involved, as they are known for their association with depression and contribute to reduced cardiac reserve, possibly resulting in CVD [23,24,31]. In addition, direct pharmacological pathways could play a role in the association between depression and the risk of CVD [32][33][34]. The two most common pharmacological treatments for depression in the Netherlands are selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants (TCAs) [35,36]. As it is clear that TCAs have cardiovascular side-effects, TCAs are no longer prescribed to patients with, or at risk for, CVD.
SSRIs are the preferred antidepressants; however, the literature is heterogeneous as to whether SSRIs have a small negative or even a positive effect on CVD [32][33][34]. As we had no data on the type of antidepressants used, we were unable to examine this pharmacological relation in the present study. The lack of an association between pharmaceutically treated anxiety and an increased risk of CVD was surprising, as previous studies have demonstrated that anxiety is a risk factor for the development of CVD [10,37]. This difference could be because most studies relate depression and anxiety to CVD risk separately rather than simultaneously. Interestingly, the increased risk of incident CVD associated with pharmaceutically treated depression was present amongst younger but not amongst older prostate cancer survivors. This finding is consistent with previous research, in which psychological distress plays a greater role in younger individuals, as older individuals already have a higher CVD risk due to biological ageing, which limits the relative contribution of psychological distress [38]. Furthermore, the association between pharmaceutically treated depression and incident CVD was limited to those who were not treated with radiotherapy, those who received hormone treatment, and those who were not operated upon. Prostate cancer survivors who did not receive radiotherapy were generally younger and were less often diagnosed with one or more traditional CVD risk factors. Hence, these men had a relatively low a priori risk of developing CVD, which left room for an increased risk of developing CVD through being pharmaceutically treated for depression. As previously demonstrated in this patient sample, hormone treatment is associated with an increased risk of incident CVD [3]. The present results show that there might be an additive effect, as pharmaceutically treated depression increased the risk of CVD only amongst men who were treated with hormones. However, there is controversy about the association between hormone treatment in patients with prostate cancer and the risk of CVD and cardiovascular death. A meta-analysis of randomised studies found no difference in cardiovascular death between patients with prostate cancer treated with hormones and controls [39], although several observational studies have found the opposite [40][41][42][43]. According to a large observational study, the risk of CVD is increased for men with a history of CVD during the first 6 months of androgen-deprivation therapy [40]. Unfortunately, we were not able to investigate in the present study whether the duration of hormonal therapy was associated with the risk of CVD. Overall, one has to live long enough to develop CVD after cancer (immortal time bias); that is, a lower tumour stage is associated with longer survival and hence more time to develop CVD. Nevertheless, we additionally observed that pharmaceutically treated depression was associated with newly developed CVD amongst patients with prostate cancer with the higher tumour stages III and IV. This can be explained by both lifestyle and biological factors. Lifestyle factors such as obesity, physical inactivity, and smoking are associated with advanced, and less with non-advanced, prostate cancer and are also known risk factors for CVD [5]. Biological factors such as increased levels of inflammation and accelerated cellular ageing are also associated with both advanced prostate cancer and an increased risk of CVD [44]. The present study has several limitations.
First, there was a lack of information on residual confounders due to the observational nature of our study. Second, we used drug dispenses as a proxy for CVD, anxiety, and depression. Nevertheless, the specificity of such algorithms is greater when they are based on pharmacy dispensing data, although sensitivity is greater when they are based on medical diagnoses [45]. Furthermore, we used a strict algorithm to define CVD-related drug dispenses, as it was based on a minimum of two ATC code C01 drug dispenses within a 6-month period. This may have led to an underestimation of the incidence of CVD. Additionally, we may have missed a number of patients with CVD who used other drugs, e.g., angiotensin-converting enzyme inhibitors or β-blockers, but no ATC code C01 drug. Finally, we excluded prostate cancer survivors with CVD in the year prior to their cancer diagnosis, as we were interested in incident CVD. Hence, the present study investigated a subpopulation of 1-year prostate cancer survivors. An important strength of the study is the inclusion of a large population-based sample of prostate cancer survivors drawn from the high-quality databases of the NCR and PHARMO, allowing a 13-year follow-up period. Furthermore, the index date for CVD was set at 1 year after prostate cancer diagnosis as the best compromise between not starting too late and not missing incident CVD, which allowed us to exclude the effect of detecting incident CVD due to ongoing cancer treatment or increased clinical evaluations. Moreover, the present study is, to our knowledge, the first to investigate whether pharmaceutically treated depression and anxiety are associated with incident CVD in prostate cancer survivors. In conclusion, pharmaceutically treated depression, but not anxiety, increases the risk of incident CVD in 1-year prostate cancer survivors. Future studies should focus on understanding the behavioural and pathophysiological mechanisms that play a role in the development of incident CVD in depressed and anxious prostate cancer survivors. It is important that, in addition to the current focus on traditional CVD risk factors, physicians pay sufficient attention to patients with prostate cancer who are pharmaceutically treated for depression or anxiety, both prior to deciding on prostate cancer treatment and for a timely detection and treatment of CVD.
2019-11-28T12:28:18.442Z
2019-11-26T00:00:00.000
{ "year": 2019, "sha1": "dba14eb11ed9bb853326058e131ad33faa34634e", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1111/bju.14961", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f9e736764aa4e8962a80c525eb7b197d75d628bf", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
235675038
pes2o/s2orc
v3-fos-license
Multi-institutional survey of malignant pleural mesothelioma patients in the Hokushin region Malignant pleural mesothelioma (MPM) is a major occupational and environmental neoplasm. The purpose of this study was to validate the clinical and epidemiological factors, diagnosis, and initial treatment among MPM patients in the Hokushin region. We surveyed retrospective data from 152,921 cancer patients in 22 principal hospitals. A total of 166 MPM cases were newly diagnosed. These patients consisted of 136 men and 30 women, with a median age of 69 years. We estimated the incidence rate for MPM to be 0.55 cases per 100,000 person-years in this study. The ratio per 100,000 population-years was 0.39 in Fukui, 0.60 in Ishikawa, 1.02 in Toyama and 0.35 in Nagano. Forty-five patients were diagnosed incidentally while under observation for other diseases. Forty-six cases were diagnosed as localized disease, while 13 had accompanying regional lymph node metastasis. Furthermore, 44 cases showed infiltration into adjacent organs. A histo-cytological diagnosis was made in 164 cases (98.8%). A surgical approach, chemotherapy, and radiotherapy were performed for 33, 88, and 6 patients, respectively, while 44 patients (26.5%) received best supportive care. Multimodality therapy was conducted in just 3.0% of the MPM patients. MPM shows tragically rapid progression, even when discovered during observation for other diseases. Workers in health-related fields should be on high alert for aggressive MPM. Better evaluation and multi-disciplinary approaches to MPM in these regions are needed to optimize multimodality therapy. Introduction Malignant pleural mesothelioma (MPM) is a lethal cancer primarily caused by the inhalation of asbestos particles, with a latency period of up to several decades and poor survival (Lanphear and Buncher 1992). Over 70% of MPM cases in Japan are associated with asbestos exposure (Gemba et al. 2012). In Europe, the MPM incidence rates have been expected to peak around the year 2020 in some countries, and a deceleration or decrease in the incidence may already have begun in the United States of America (Pelucchi et al. 2004; Henley et al. 2013). However, asbestos remains a subject of public interest in Japan. The spraying of asbestos was outlawed in 1975, and the manufacture of asbestos cement pipes ended in 1985. Asbestos disorder prevention regulations were established in 2005, and in principle, the manufacturing, import, and use of asbestos products are prohibited. However, millions of old buildings containing asbestos in their makeup are likely still standing in Japan. As such buildings are ultimately destined for dismantling, a process by which asbestos can easily be spread, the number of cases in Japan is likely to peak around 20 years from now, according to predictions made by the Ministry of the Environment. In fact, the number of deaths due to MPM has increased threefold in the past 20 years. However, investigations concerning MPM are limited because of the rarity of the disease as well as its highly aggressive potential. Furthermore, while the role of chemotherapy has been partially established (Vogelzang et al. 2003; Zalcman et al. 2016), the roles of surgery (Hasegawa and Tanaka 2008; Rice 2011) and radiotherapy (Price 2011; Cho et al. 2021) remain controversial. Several studies have focused on surgery (Rintoul et al. 2014), chemotherapy (Vogelzang et al. 2003), or immunotherapy (Baas et al.
2021) in MPM patients; however, the data on patients who do not visit the hospital are scarce. Recently, the Japanese Joint Committee of Lung Cancer Registry (JJCLCR) established a project to develop a prospective registry database of patients with MPM with the goal of clarifying the epidemiology, pretreatment laboratory values, immunohistochemical evaluation, respiratory function, postoperative morbidity, and follow-up characteristics of MPM (Shintani et al. 2018). This effort started in 2017 and will be conducted until 2026, so its findings have not yet been made public. Therefore, a fine-grained analysis of a wide area performed over a moderate duration is expected to provide useful data in the interim. The Hokushin region of Japan comprises the Hokuriku region (Ishikawa, Toyama and Fukui Prefectures) and Nagano Prefecture, which all have relatively old populations and snowy climates during the winter (Fig. 1). We created a database based on cancer registration in the Hokushin region, referred to as the Hokushin Ganpro database, to clarify the circumstances concerning cancer patients in a super-aging society, which not only Japan but also countries all over the world will be faced with soon, as the risk of cancer increases with age. We surveyed retrospective data containing hospital-based cancer registries (HBCRs) and information on clinical epidemiological factors of MPM using the Hokushin Ganpro database. Hokushin Ganpro database Maintenance of an HBCR is mandated for all cancer care hospitals designated by the Ministry of Health, Labor and Welfare in Japan (Higashi 2014). These designated cancer care hospitals are expected to serve as hubs for providing standard care, including surgery, chemotherapy, and radiotherapy, to cancer patients in their respective regions and to register newly diagnosed and/or treated cancer cases at their hospitals every year (Uramoto et al. 2021). These institutions maintain HBCRs and collect basic information on all newly encountered cancer cases, such as the tumor location, histology, route of referral to the hospital, and treatment (Uramoto et al. 2021). The definition of malignancy corresponds to behavioral code 3 in the International Classification of Diseases for Oncology, third edition (ICD-O-3). All target neoplasms newly encountered at the hospitals are registered. The Hokushin region has been considered a super-aging region according to the Statistics Data, Statistics Bureau, Ministry of Internal Affairs. Hokushin Ganpro is a program supported by the cooperation of six universities located in the Hokushin region: Kanazawa University, Kanazawa Medical University, Shinshu University, Toyama University, Fukui University, and Ishikawa Prefectural Nursing University (Fig. 1) (Uramoto et al. 2021). Hokushin Ganpro established a large-scale database based on hospital cancer registration covering this region between January 1, 2010, and December 31, 2015 (data set 1). The database includes information on the number of patients in each prefecture, the patient age, sex, process of cancer detection, pre-treatment process, basis for the diagnosis, histological diagnosis, and treatment performed for the registered patients (Uramoto et al. 2021). The present protocol was approved by the Kanazawa University Institutional Review Board (IRB) (reference number 2750-2), Kanazawa Medical University (I328), and the IRBs at the Hokushin Ganpro database project, all of which granted a waiver of consent for the study.
Study cases and analyses We surveyed retrospective data of 152,921 cancer patients in 22 principal hospitals in the Hokushin region registered with the Hokushin Ganpro database. We collected MPM patients who were classified as code C384 (pleura) and analyzed the patients classified as code 2 (diagnosed and treated in the registering hospital) and code 3 (diagnosed in another hospital and treated at the currently registered hospital). The extent of disease was classified as localized, regional lymph node metastasis, regional extension, or distant metastasis, defined as follows: 'localized', localized in the primary organ; 'regional lymph node metastasis', regional lymph node metastasis but no invasion to neighboring organs; 'regional extension', invasion to neighboring organs; 'distant metastasis', metastasis to other organs or distant lymph nodes (Uramoto et al. 2021). We examined the histological subtype, patient age at the diagnosis, patient sex, and treatments. In addition, we calculated the incidence rate of MPM for each individual prefecture according to the total Japanese population using the numbers of cancer cases and national population statistics for each year. Population estimates in Japan and each prefecture were obtained from the official statistics of Japan portal site (https://www.e-stat.go.jp/). Patient characteristics A total of 166 MPM cases were newly diagnosed. The number of patients with MPM in each prefecture was 18 in Fukui, 41 in Ishikawa, 64 in Toyama and 43 in Nagano (Fig. 1). The incidence rate was estimated to be 3.32/100,000 over the 6-year period. Therefore, we estimated the incidence rate for MPM to be 0.55 cases per 100,000 person-years in this study (3.32/6 ≈ 0.55). The ratio per 100,000 population-years was 0.39 in Fukui, 0.60 in Ishikawa, 1.02 in Toyama and 0.35 in Nagano. These patients were 136 (82%) men and 30 (18%) women, with a median age of 69 years old (range 45-92 years old). The age-specific number of patients during the observation period is shown in Fig. 2. The highest incidences were observed in those 60-69 years old. The diagnosis and stage of MPM Actual numbers of MPM patients each year were 26 in 2010, 23 in 2011, 34 in 2012, 22 in 2013, 32 in 2014, and 26 in 2015. Next, we investigated the referral pathway. Nine patients were discovered at cancer screening, 9 at health checkups, 11 in a voluntary setting, 45 under observation for other diseases, and 92 cases by introduction from another hospital (Fig. 3). One hundred and forty-nine (89.8%), 15, 1, and 1 cases were diagnosed as the first, second, third, and fourth cancer, respectively. Eighty-seven, 59, and 1 case were right-sided, left-sided, and bilateral MPM, respectively; 19 cases had 'unknown data'. Figure 4 shows the data regarding the extent of disease. Forty-six cases were diagnosed with localized disease, while 13 had accompanying regional lymph node metastasis. Furthermore, 44 cases showed infiltration into the adjacent organs, and 47 cases had distant metastasis; 16 cases were reported to have 'unknown data'. The clinical stages were as follows: stage I (n = 39), stage II (n = 29), stage III (n = 34), and stage IV (n = 51), with 13 cases having 'unknown data'. Only two cases were diagnosed by radiological imaging. Treatment The therapies applied for MPM are summarized (Fig. 5). Overall, 73% of patients were recorded to have had at least one specific anti-cancer treatment (surgery, chemotherapy, or radiotherapy).
A surgical approach was performed for 33 patients, with surgery alone performed in 29 and surgery plus radiotherapy in 4 cases. Chemotherapy was performed for 88 patients, including chemotherapy alone in 87 cases. Radiotherapy was performed for six patients, including radiotherapy alone and chemotherapy plus radiotherapy in one case each. Multimodality therapy (more than 2 approaches) was conducted in just 3.0% of MPM patients (5/166). Forty-four patients (26.5%) received best supportive care (BSC). Discussion This study provided the latest data on the accurate epidemiology of MPM in a specific region of Japan. The ratios of staging recorded and actual rates of histopathological confirmation in this study were 92 and 98.8%, respectively, which is quite high compared with recent UK reports (Murphy et al. 2020; Beckett et al. 2015). These results suggest that the information obtained in this study is reliable. The incidence rates of MPM have been reported to be relatively high in some European countries (UK, the Netherlands) and Oceanian countries (Australia, New Zealand), whereas Japan and countries from central Europe have shown relatively low incidence rates (Bianchi and Bianchi 2014). In fact, the incidence rate was estimated to be 0.55 cases per 100,000 person-years in this study, which is around half of the value in the United States from 2003 to 2008 (average of 1.05 cases per 100,000 person-years) (Henley et al. 2013). Fortunately, the rates have decreased in the United States (Henley et al. 2013). However, the actual annual numbers of MPM patients have not decreased according to our data, suggesting that closer attention should be paid to this disease, even in the face of governmental regulations. Regional differences might exist concerning the incidence of MPM due to differing concentrations of asbestos-related factories. Interestingly, the incidence rate in Toyama Prefecture was almost two to three times higher than that in the nearby Nagano and Fukui Prefectures. As for the reason why the incidence of MPM in Toyama is so high, Toyama Prefecture has the highest mortality rate due to mesothelioma in this area (http://www.mhlw.go.jp/toukei/saikin/hw/jinkou/tokusyu/chuuhisyu05/index.html). We failed to obtain adequate data related to the differences in the usage of asbestos by prefecture. Detailed information concerning asbestos use, including the type of material, dose, time, duration, factory location, and type of factory, will be essential for a proper evaluation of the geographical distribution. Furthermore, various investigations have shown a high level of asbestos exposure in Japanese shipyards on the coast facing the Pacific Ocean or the sea on the mainland of Japan, such as in Kure and Yokosuka city. These cities were the sites of Japanese naval shipyards before World War II (Kurumatani et al. 1999; Morinaga et al. 2001). There are fisheries grounds linked to the shipbuilding industry at Himi Bay in Toyama. The high incidence reported by fishing industry officials might, therefore, be related to the shipbuilding industry. Kishimoto et al. concluded that 79.2% of cases of mesothelioma in Japan were caused by asbestos exposure. According to their report concerning occupational history of asbestos exposure, among occupations, shipyard workers showed the second highest frequency of cases (Kishimoto et al. 2010; Gemba et al. 2012). In the present study, we found that 45 cases were newly diagnosed during follow-up of other diseases.
In general, there are more hospitals and diagnostic modalities in Japan than in Western countries. Today, Japan faces the problem of a rapidly aging population. Therefore, the high rate of discovery of MPM under observation for other diseases might be unique to Japan. One issue needs to be addressed: the ratio of localized disease reached 27.7% (46/166), but a surgical approach, specifically cytoreductive treatment as part of a multimodality approach (Waller et al. 2021), was performed in only 33 patients (19.9%). This result is unexpected because the diagnosis of one disease during follow-up for another is usually expected to indicate early detection. These discrepancies suggest that the Hokushin region is a representative example of an area with an ultra-declining birth rate and aging population. Ninety-two cases were also newly diagnosed by introduction from another hospital. These findings suggest that this disease has widely affected various areas. Unfortunately, there was a high incidence of BSC in the present study (26.5%, 44/166), and 47 patients (28.3%) had distant metastasis. The age was significantly higher in the BSC group (mean 77.5 years old) than in the treatment group (mean 67.4 years old) (p < 0.0001). However, there were no significant differences between the two groups in terms of the extent of disease. Multimodality therapy was conducted in only 3.0% of MPM patients. This ratio seems to be very low from the perspective of general clinical practice and may be due to the extremely malignant biological behavior of MPM and its rapid progression. There is some concern that most physicians do not have experience diagnosing this rare disease. Clinicians should, therefore, ask about a patient's occupational history in order to check for asbestos-related diseases, especially among patients with a history of involvement in the asbestos product industry or shipbuilding. Future directions include centralizing care and increasing treatment knowledge in this region. Several limitations associated with the present study warrant mention. These include the retrospective nature of the study and the fact that it was carried out at domestic institutions based on cancer registration data. Therefore, the survival information, treatment sequence, and therapeutic effect were unclear. The type of asbestos fibers in the pleura of the patients could not be analyzed because surgery or autopsies were not performed in these limited cases. Nevertheless, the current findings highlight an important issue, namely the added value of a multi-institutional survey, conducted in this area over a long period of time, for analyzing one type of rare cancer. Innovative modalities for curing MPM are needed. We recently conducted an observation study of a regional cancer database between 2016 and 2017 as dataset 2, including the Diagnosis Procedure Combination (DPC) survey data in the Hokushin region. A detailed examination conducted using data from a continuous study will provide new findings to help health-related staff monitor and control the disease through various new approaches (Baas et al. 2021).
2021-06-30T06:17:09.097Z
2021-04-12T00:00:00.000
{ "year": 2021, "sha1": "6fbebf1711572329aec3a0c8121000ac800a8169", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-299157/v1.pdf?c=1631895123000", "oa_status": "GREEN", "pdf_src": "Springer", "pdf_hash": "47f8fe05f77d7592facd05d421e0895079239ddb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235829375
pes2o/s2orc
v3-fos-license
A Novel Deep Learning Method for Thermal to Annotated Thermal-Optical Fused Images Thermal images profile the passive radiation of objects and capture them in grayscale images. Such images have a very different distribution of data compared to optical colored images. We present here a work that produces a grayscale thermo-optical fused mask given a thermal input. This is a pioneering deep learning based work since, to the best of our knowledge, there exists no other work on thermal-optical grayscale fusion. Our method is also unique in the sense that the deep learning method we are proposing here works on the Discrete Wavelet Transform (DWT) domain instead of the gray level domain. As a part of this work, we also present a new and unique database for obtaining the region of interest in thermal images, based on an existing thermal-visual paired database, containing the Region of Interest on 5 different classes of data. Finally, we propose a simple, low-overhead statistical measure for identifying the region of interest in the fused images, which we call the Region of Fusion (RoF). Experiments on the database show encouraging results in identifying the region of interest in the fused images. We also show that the images can be processed better in the mixed form than as thermal images alone. Introduction Compared to optical images, thermal images are difficult to work with because the objects are not well segregated as in optical images, and different signatures are visible in black and white. This is because Thermal Infrared (TIR) images work on the principle of passive thermal radiation, as opposed to reflected light in optical images or Near Infrared (NIR) images. Moreover, thermal images are produced by radiation of longer wavelength than visible light, leading to a lower resolution. As such, while there exist works like [2][3][4] which focus on thermal images, we did not come across any work which tries to fuse thermal and optical images represented in the grayscale domain directly. While we have presented a work [5] which tries to prepare color images in a fused domain, that is different from our present work, because we try to create a fused image which can be used in the optical domain. For example, we can do Deep Learning (DL) based colorization trained only on optical images, which would not be possible with the work described in [5]. We demonstrate this in Fig. 2. Moreover, our work is specifically focused on data synthesis in a single grayscale level instead of a mask in the RGB or the 3-channel luminance-chrominance (LAB) domain, and thus we have different data distributions for our deep network. We anticipate that our work will be useful in domains that need the input from thermal images to process the data for further information, like defense, drone imaging at night, forensic domain images, etc. Also, our method is unique in the sense that we are proposing a network that works on a transformed space (DWT). Our network creates different levels of parallel pathways for the DWT transform and captures the data distribution at different levels of abstraction [21]. Finally, we return to the normalized gray-level image domain, the reason for which is described in Section 2.1. Non-DL methods for the fusion of thermal images include works like [39], which works on contour formation, [40] for long-range observation, and [41], which handles multiple thermal image fusion for focus improvement.
The DL based methods usually need more data, but in many cases such methods outperform even humans. Some examples are fine-grained image identification [35], image coloring [37,38] and medical image diagnosis [34,36]. Thus, in specialized areas, DL based methods are used to handle jobs that are difficult for classical methods. This is one motivation of our current work. We hypothesize that the distribution of the fused image is similar to both the thermal and the optical image. Therefore, these images should be properly processed by a machine learning method which is trained to work on either optical or thermal images, with nominal retraining. In Sec. 3 we show this with our blind testing method. We also provide objective measures in support of our claim in Table 2. Our database [19] is based on an existing database [6], which contains complex real-world scenes of 5 classes, namely nature, modern infrastructure, animal, human and crowd, which we collected over a period of 1.5 years. These images were picked from our work on cross domain colorized images [5], which were not annotated. We manually annotated all collected images and marked the Regions of Interest (ROI). Since these were real (non-synthetic) images, the total process took about 130 working hours to complete. We went on to fuse these images with the input thermal image to obtain the fused image. Each image is then finally Histogram Equalized to obtain the final presented output. The CVC-14 database [24] has annotated thermal images, but that database has only 1 annotated class, namely pedestrians, unlike the 5 classes in ours. Such annotated databases are not publicly available. Our database provides data distributions which are widely different from each other, which is needed in training DL based models. We also present a simple new statistical measure for obtaining the region which has been changed most in the output image in comparison to the input image, which we call the Region of Fusion (RoF). In summary: • We demonstrate that it is possible to produce grayscale fused images containing information from both the thermal and optical images. • We introduce a novel DL architecture that works on a separate logical space (DWT) than the input or output space (normalized images). • We introduce a unique dataset containing annotated thermal images across multiple classes based on our existing database [6]. • We define a simple statistical score for focusing on a region of interest in fused images. Related Works Machine Learning techniques for working with thermal images have been growing significantly over the last few years. This includes methods for reconstruction of thermal images [46], super resolution [47], imparting color to thermal images [2-5, 37, 42-45], depth estimation [48] and even unsupervised data extraction [49][50]. Similarly, innumerable methods exist for optical domain image processing, including colorization [38,43], automatic annotation [8][9], denoising [27], etc. However, we have not encountered any work related to processing TIR images via a fusion method in the grayscale domain. We opted to work on this domain because we hoped to be able to extract and process the information in the fused domain better than in either the thermal or the optical domain images individually.
We chose the Discrete Wavelet Transform as the base of our work as it has been used extensively over the years for processing different kinds of data distributions, from audio [26], image compression [31,27], face detection [17] and spliced image detection [32], to even generalized signals [28]. Almost all of these works focused on either restoration or detection. This is primarily because DWT offers an easy to use method which transforms the input signal into a separate (frequency) domain, which helps observe the data from a different viewpoint. This in turn is used to separate the high frequency and the low frequency information in the data, often even at the pixel level in images [29], providing a statistically inexpensive method. While there are several works on non-DL based image fusion techniques, very few of them, like [39][40][41], handle thermal-optical fusion. All of them work by trying to find an output following some pre-defined rules. A CNN tries to achieve this by aiming to find the optimal data distribution [30] for a given input. While there are fusion based DL networks which discriminatively train on multiple domains while providing outputs, like [16], we have not encountered a work that calculates the kernels and computes the full model in a different logical space. We do this for 2 reasons. The first is that we eliminate the preprocessing step of converting the visible image into discrete wavelet transformed data. The second, and more important one, is that transforming an image from the visible domain into the DWT domain converts the image into 4 different sub-bands for image enhancement. This is often used for blurriness reduction. However, this comes at the cost of distortion of the input images, often at the corners [33]. TIR images possess a data distribution that is blurry by virtue of their capturing sensors, and thus a preprocessing DWT method would result in further degradation of the input data distribution. Hence, we argued that if we use a normalized image as the input instead of a discrete wavelet transformed image, we should be able to minimize this initial degradation. Of course, an argument can be made that Convolutional Neural Networks (CNNs) themselves work in a separate function space than the input/output space, but we are going beyond that to propose a method that works on a logical space mirroring an established statistical method for signal processing, and we go on to show that deep networks are capable of effectively learning relations in this space. Thus, while there are existing works like [21] and [23] that work on the DWT domain, it must be understood that these use wavelet transformed feature maps as input, thus changing the domain of the work completely from visible images. Our method, on the other hand, uses images directly and computes all relations directly in the proposed deep network. Even in non-image based fields with DWT based DL, like in [22], we see that the methods are based on pre-processing the data to obtain the nth-level decomposition at the first level before feeding it into the deep network. We avoid this step directly, thus simplifying our process considerably. Proposed Method We are using a deep network to produce a mask, creating a masked image followed by Histogram Equalization to create the final output.
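Before detailing the network, it may help to see the sub-band structure the architecture mirrors. The following is a minimal illustration of ours (not from the paper), assuming the PyWavelets library and a Haar wavelet; the paper does not fix a wavelet, since the decomposition here is learned by the network rather than computed explicitly.

```python
# Illustration of the two-level 2D DWT split that the network mirrors.
# This uses PyWavelets explicitly; in the proposed method the network is
# expected to model an analogous decomposition, so this is only a reference.
import numpy as np
import pywt

img = np.random.rand(128, 128)                  # stand-in for a normalized image
LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')       # 1st-level decomposition
print(LL.shape)                                 # (64, 64): each axis is halved
LL2, (LH2, HL2, HH2) = pywt.dwt2(LL, 'haar')    # 2nd-level decomposition of LL
print(LL2.shape)                                # (32, 32)
# With the Haar wavelet on data in [0, 1], LL entries stay non-negative and
# their admissible maximum doubles with each level, while the detail bands
# (LH, HL, HH) may go negative -- matching the ReLU/LReLU split described below.
```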
The output mask is created by training the model to optimize the loss from an input thermal image to an output image which is the thermal image embedded with the optical-thermal average in the annotated region. This ensures that only a particular region in the thermal image is different from the input image, thus highlighting it. The deep model we are using is shown in Fig. 1 and described in Table 1, and our data is presented in Supplementary Section 1. Deep Network Our network can roughly be divided into 3 different blocks: Input Encoder/Decoder, DWT Layer and Output Encoder/Decoder. We outline this in Table 1: one block creates an output layer of the same size as the input layer, with the specified depth (filter); encode_half (input, filter, kernel, dropout = True, normalization = True) creates an output layer of half the size (length/width) of the input layer, with the specified depth (filter); encode_double (input, filter, kernel, dropout = True, normalization = True) creates an output layer of double the size (length/width) of the input layer, with the specified depth (filter); and intermediate_enc_dec (input, filter) creates an output layer of the same size as the input layer, with the specified depth (filter), using the ReLU activation function. In the table, all output layers have a depth equal to the number of filters used as the function arguments for convolution. Thus, for example, the layer d0 has a shape of 64x64x16 when using an input shape of 128x128 in the inp layer. For slicing layers, outputs have a depth of 1 (i.e., each is a 2D matrix only). The Input Encoder/Decoder is a basic encoder coupled with a decoder built from Convolutional 2D Transpose layers (instead of statistical Upsampling layers) to preserve the gradient in between the layers. Also, we do not form a complete encoder-decoder, but include 1 less layer of the input dimension for feeding into the first level of our DWT layer. The Output Encoder/Decoder is also a general encoder-decoder, with the output from the input encoder concatenated with the output encoder. The concatenation is a layer-wise concatenation along the last axis. This is done so as to help the network learn the input texture in the output mask. Our method does not convert an input thermal image completely into a full optical mask, but instead minimizes the loss against a partial output image which has an averaged area embedded into the input thermal image. Thus, it makes sense to include this distribution in the output, which is what we try to encapsulate with the help of the skip connections. The output mask is obtained by using a last 2D Convolutional layer with the sigmoid activation, which normalizes the output to (0,1) values. We wanted to check if we could work in the Discrete Wavelet Transform (DWT) domain instead of the usual normalized image domain for our fusion. The reasoning for this is that the 2 Dimensional DWT (2DDWT) works on iteratively smaller scales of an image by halving the 2 axes of an input image, processing it and then reconstructing it back. In fact, this is similar to the logic of an encoder-decoder, except that an encoder-decoder based CNN works on the spatial domain and the 2DDWT works on the frequency domain. Since conversion from the spatial to the frequency domain is a standard signal processing algorithm, our logic was that a logically sound, sufficiently complex deep network should be able to intuitively model it by itself.
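To make the Table 1 blocks concrete, here is a minimal Keras sketch of what encode_half and encode_double could look like. The layer ordering, LeakyReLU slope, and dropout rate are our assumptions for illustration, not the authors' released code.

```python
# Hypothetical Keras (2.x) building blocks matching the Table 1 signatures.
from keras.layers import (Conv2D, Conv2DTranspose, LeakyReLU,
                          BatchNormalization, Dropout)

def encode_half(x, filters, kernel, dropout=True, normalization=True):
    # Stride-2 convolution halves length/width and sets the depth to `filters`.
    x = Conv2D(filters, kernel, strides=2, padding='same')(x)
    x = LeakyReLU(alpha=0.2)(x)          # LReLU path (non-LL blocks)
    if normalization:
        x = BatchNormalization()(x)
    if dropout:
        x = Dropout(0.25)(x)
    return x

def encode_double(x, filters, kernel, dropout=True, normalization=True):
    # Transposed convolution doubles length/width; chosen over plain
    # upsampling to preserve gradients, as argued in the text.
    x = Conv2DTranspose(filters, kernel, strides=2, padding='same')(x)
    x = LeakyReLU(alpha=0.2)(x)
    if normalization:
        x = BatchNormalization()(x)
    if dropout:
        x = Dropout(0.25)(x)
    return x
```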
In fact, this is precisely the reason we alternate between 2 different blocks of deep network for modelling the LL blocks as opposed to the other (LH, HL, HH) blocks in our model. As can be noticed, for the LL blocks, we specifically use ReLU as the activation function, while we use LReLU for the other blocks. In the 2DDWT, LL blocks are confined to lie between 0 and a positive integer, which doubles in value with every level of decomposition. Of course, while the 1st and the 2nd level LL bands might be normalized to lie between 0 and 1, the 2nd level LH, HL and HH blocks may contain values below 0. However, these blocks combine to form the 1st level LL band, which is again normalized to lie between 0 and 1. This is exactly how we model our method as well. It needs to be noted here that we design our model to represent only up to the 2nd-level decomposition. However, theoretically, one could go even deeper. We did not opt for this for two reasons. Firstly, the size of our data was 128x128. We could not have data at the 256x256 size since the database we used had several images which had a maximum size of 240 in one dimension. Secondly, another level of decomposition would bring the output size of the patches down to 4x4, which would render our method unusable. Also, at some point, the increase in complexity would overrule the optimization of the loss. As can be seen in our model, we decided to create a method that is able to create patches of localized data by creating an encoder-decoder structure for each scale of resolution we work on. The reason we decided to use this structure for the localized resolution paths is that a localized encoder-decoder structure is able to lower the absolute loss by about 20%, as opposed to using patches with more depth at the same resolution. We also use skip connections between the 4 levels of the input encoder (before feeding it into our multi-resolution kernels) and the output encoder (after we have obtained the final DWT kernel). We find that this optimizes the absolute loss by around 18% at the cost of about 10 times as many parameters. We wanted to find the optimum loss with skip connections between all the layers of the input encoder and the output encoder till a resolution of 1x1 was reached. However, that created an excessive overhead of parameters at 110 times the first variant, which did not fit in our hardware. So, we produced the model with only 4 localized encoder-decoder-like structures, as 6 skip connections did not give any significant change in accuracy over using 4, even at the cost of 20 times the initial number of parameters. We decided to use Adaptive Moment Optimization (ADAM) as the optimizer because our model actually tries to optimize the loss function similarly to what ADAM is doing, as explained below. The optimizer tries to lower the loss by changing values of the nth moment, which is defined as m_n = E[X^n] (1) where X represents the data and E is the expectation. Since ADAM works by minimizing loss through moving average optimization, with the help of 2 constants (β1 = 0.9 and β2 = 0.999), the kth mini-batch moving averages reduce to m_k = β1 m_(k−1) + (1 − β1) g_k and v_k = β2 v_(k−1) + (1 − β2) g_k² (2) where m and v represent the moving averages of the gradient g and of its square, respectively. Thus, we see that as the level goes deeper, the loss becomes lower and more local. The local convergence of the ADAM optimizer, which has already been proven in [25], was relied on heavily in this choice of optimizer.
This is what we are trying to achieve with our method as well, wherein the levels are logically represented by the parallel levels of the DWT layer described in Table 1. However, there is another hyperparameter, the loss function, which forms an integral part of a deep network. We use logcosh as the loss for our current model. As stated in [1], if we consider wavelet based data, geodesic distance based on the Riemannian manifold is a good estimator as a distance measure. Since the Riemannian manifold is a part of the hyperbolic plane, we decided to use the logcosh loss measure, representing the logarithm of the hyperbolic cosine of the prediction error, given by L = Σi log(cosh(Pi − Yi)) (3) where Pi and Yi are the predicted and true values of the ith pixel. Once we have the mask from our deep network, we fuse it with the thermal prior according to the simple averaging rule: Oi = (Ti + Mi)/2 (4) where Oi represents the averaged output image pixel, Ti is the corresponding thermal prior pixel and Mi is the equivalent mask pixel, for each ith pixel. We are trying to obtain an image that already has the thermal image as a part of the output. Finally, we go on to equalize the fused image in order to obtain a final output image which has a better distribution of illumination for better visibility. Here we point out that equalizing a thermal image itself would be meaningless. This is because thermal images are already histogram equalized by the capturing device, since thermal images use all 256 levels of illumination. This is evident from the thermal bar present on the right side of thermal images. We present a comparison of the thermal input, the fused image and the final output in Fig. 2. Region of Fusion (RoF) When looking at research works focused on fusion, we have found that there is no objective measure which could provide a bounding box for the localized regions of fusion. This becomes relevant in works such as this one, where we are focusing on regions which should have localized content for fusion. Hence, we propose a measure called Region of Fusion (RoF), based on a localized Region of Interest (ROI), which can be objectively calculated given a fused image and the input from which it is obtained. This method is fully customizable with regard to the distance metric that is used to calculate the region similarity and can be used on fusion methods which are either DL or statistically based. It has a low computational complexity, on par with the size of the image (constrained by the similarity measure being used). The idea behind the method is to take a score for the variation of the full image between the thermal and the fused output and then calculate the area of both. We iteratively reduce the size of the region by 1 and check the percentage reduction in variation against the percentage reduction in area. If the reduction in variation is less than the reduction in area, we stop and define the region as the final region. The full algorithm follows this procedure step by step. As can be understood from the algorithm, the measure for similarity (or dissimilarity) can be changed as needed. We have used the sum of the square of the difference of pixel values as a measure of dissimilarity in our case, but one can use other scores, like the Structural Similarity Index Measure (SSIM). Also, since this is an unbiased score, one can opt to combine existing DL based fusion methods like [11,12] with our score to provide a better measure of the fused region to focus on in the output. One may note that while RoF can model any fusion-based distribution for a region of interest, the data distribution needs to be on the same scale.
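A minimal NumPy sketch of the RoF search is given below. It assumes the sum-of-squared-differences dissimilarity used in this work and implements one plausible reading of the stopping rule (the wording above admits more than one): the box keeps shrinking while the discarded border sheds proportionally less variation than area. All names are illustrative, not from the paper's code.

```python
# Sketch of the Region of Fusion (RoF) search: greedily shrink a bounding
# box around the pixels where the fused output differs from the thermal input.
import numpy as np

def region_of_fusion(thermal, fused):
    """Return (top, bottom, left, right) bounds of the RoF."""
    diff = (thermal.astype(np.float64) - fused.astype(np.float64)) ** 2
    t, b, l, r = 0, diff.shape[0], 0, diff.shape[1]
    total_var, total_area = diff.sum(), diff.size
    if total_var == 0:          # identical images: no fusion region to tighten
        return t, b, l, r
    while (b - t) > 1 and (r - l) > 1:
        var = diff[t:b, l:r].sum()
        area = (b - t) * (r - l)
        # Candidate shrinks: drop one row/column from each side.
        shrinks = {'top': (t + 1, b, l, r), 'bottom': (t, b - 1, l, r),
                   'left': (t, b, l + 1, r), 'right': (t, b, l, r - 1)}
        # Pick the shrink that sacrifices the least variation.
        _, (nt, nb, nl, nr) = min(
            shrinks.items(),
            key=lambda kv: var - diff[kv[1][0]:kv[1][1], kv[1][2]:kv[1][3]].sum())
        new_var = diff[nt:nb, nl:nr].sum()
        pct_var_drop = (var - new_var) / total_var
        pct_area_drop = (area - (nb - nt) * (nr - nl)) / total_area
        # Stop once a further shrink would remove at least as much variation
        # (proportionally) as area, i.e. we would start cutting the fused region.
        if pct_var_drop >= pct_area_drop:
            break
        t, b, l, r = nt, nb, nl, nr
    return t, b, l, r
```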
For example, if we work with data distributions modelled on wavelet transformed data (without converting it back into an image), which are multi-resolution data distributions containing several scales of an image, it would not work. This is because RoF is designed to work on data in a single scale only, by determining a local maximum for the bounding box. Database We use the thermal-visual paired images dataset [6] for our work. It was presented as a part of a work on colorizing thermal images. We use 10 random cuts of the input images, while keeping the annotated region inside the cut. This limits our data, and we are able to create only 89,442 pairs of images for our experiment. This is because, while it is possible to take a random cut from an input pair if the objective were just the production of a fused thermal-optical image, we try to create a model which is able to create a localized region for the final output instead of a uniform fused image. Examples of these images are shown in Supplementary Section 2. The database we propose has 1873 thermal images hand-annotated by us. The annotation in rectangular bounding boxes is done using the tool VGG Image Annotator (VIA) [7]. We annotate the images into 5 different classes: Nature (nat), Animal (ani), Human (hum), Crowd (cro) and Modern Infrastructure (inf). Each image may have multiple annotated objects inside it, which is how we are able to obtain more than one sub-image for each individual annotated image. A few of these are included in Supplementary Section 1. However, the database [6] also contained the paired visual equivalent for each of these thermal images, which lent us a way to further augment our dataset. We applied an optical Region of Interest bounding box algorithm on the optical images to create additional data. The logic behind this is that since we are proposing a localized fusion method, the fusion should occur from both directions, and not just from thermal to optical. There might be objects present in the optical domain which are not well visible in the thermal domain (objects within the same thermal profile range). We came across different object identification algorithms like [8,9], but most of them were either image classifiers only or did not provide multiple bounding boxes over a single image, as we required. Moreover, since the database we were using as our background base was focused on multiple classes, not of a very high resolution, and of different sizes, we wanted to find a low-overhead, high-accuracy algorithm which would fit our case. Thus, we decided to opt for the DEtection TRansformer (DETR) [10]. DETR is a state-of-the-art, low-cost localized object annotator which has multiple object classes, trained on optical images. We use the public code that they provide, obtain object annotations on the optical images in the database, and transpose these boxes onto their thermal counterparts to finally obtain the localized database we use in our work. Of course, since we have only 5 classes in our annotation, we simplify the annotation provided by DETR into our annotation labels by changing classes like bus, car, laptop, truck, etc. into Modern Infrastructure; sheep, horse, etc. into Animal; and so on. The final database we propose has 5 different classes, labeled as a number from 1 to 5.
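As an illustration of this label simplification, a hypothetical mapping table is sketched below. Only the few example classes named above are reproduced; the full table and the exact numeric assignment of the 5 labels are our assumptions.

```python
# Hypothetical collapse of DETR's COCO labels into the 5 dataset classes.
COCO_TO_CLASS = {
    'bus': 'inf', 'car': 'inf', 'laptop': 'inf', 'truck': 'inf',
    'sheep': 'ani', 'horse': 'ani',
    'person': 'hum',            # dense groups of persons could map to 'cro'
}
CLASS_TO_ID = {'nat': 1, 'ani': 2, 'hum': 3, 'cro': 4, 'inf': 5}  # assumed order

def simplify_label(detr_label):
    """Map a DETR/COCO label to one of the 5 numeric dataset labels (or None)."""
    cls = COCO_TO_CLASS.get(detr_label)
    return CLASS_TO_ID[cls] if cls is not None else None
```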
Once this is done, we take 10 random cuts around the annotated region for each of the images with individual annotations, constraining the minimum size for each image to be 128 x 128, and combine the thermal and optical information in the annotated region following Eq. (4). We finally obtain 89,442 image pairs, which we use in our work. It should be noted that since there is no extra restriction on the annotated region size, our final database for the model comprises images of widely differing sizes, which we normalize to 128x128, keeping parity in the input and output image sizes in the deep network. We reshape the output back to the original image size after we obtain the mask, to create the final masked output for equalization and obtain the final output. Experimental Results We use 3 different objective scores to evaluate our method. Since we did not find any thermal image fusion evaluation score, we chose the Structural Similarity Index Measure (SSIM) [20], Cosine similarity (Cossim) and Mean Square Error (MSE) for our evaluation. These scores are denoted in Table 2. In Table 2, we show a comparison between our masked average outputs and their thermal and optical counterparts. We use 3 different measures of similarity. The first column shows how similar our averaged output is to the thermal images. Similarly, the third column shows their similarity with the visual counterparts. The middle one is the similarity of the thermal images to their optical counterparts and provides the baseline against which we compare our values. Of course, the scores between the thermal and the averaged output are much better than the ones between the optical and the averaged output, because we fuse the mask with the thermal image before comparison. Discussion Since there is no direct method of comparison for showing that our method produces a significantly different output (as all of it is in the grayscale domain and the optical features incorporated in the mask are not immediately identifiable), we opt for an indirect method to show this. We use a neutral testing method of coloring the thermal image and the final output we are producing. The coloring is done via the method explained in [18] for optical image colorization. We use the online demo they provide for the same, without training it on our database, for the blind testing. As can be seen in Fig. 2, the Histogram Equalized (HE) images contain more texture as compared to the thermal images. This is especially noticeable in images (e), (a) and (c). We include (a) since its optical image is noticeably bad, the photograph having been taken under poor illumination. Similarly, (b) has a binary thermal image. These kinds of images occur when the levels of the thermal imager for the upper and lower limits of temperature to be captured are very near the temperature of the surroundings. As can be seen, both (a) and (b) provide outputs which have noticeably improved texture in the masked images. The colored mask as well as the colorized HE masked images show the same. In the case of (c), we see that the walls of the structure in the image have a lower temperature. This region becomes prominent in the HE masked images. Finally, we note that the images for both (d) and (e) are quite different from the input thermal images. This is especially noticeable in all of these images when we consider the blind testing method we provide, in which we color each of these images via the method described in [18], which is a pure optical coloring technique.
Our method shows that the color improves as compared to the thermal images. Of course, this is possible because the temperature profile is not well segmented in most thermal images, which is why these are different from optical images. However, if there exist thermal images which have very well defined levels of separation between objects, our method would not perform as well, since the texture might be interpreted as noise by a machine learning method. None of the images shown in Fig. 2 are of the same size, because all of them were random cuts from thermal images. We show the results from our measure of fusion, RoF, in Fig. 3. The images we use here are those which are published in [6] as being unregistered. Thus, we did not have the optical counterparts of these images, and they were not used in training our DL algorithm. We obtain the HE masked images for each of the thermal inputs and then run our RoF-identifying algorithm on them. The texture difference is relevant in the case of images (a), (b) and (c), where we see a clear region of interest. In the case of (d), the region covers almost the full image. However, in (e), the region is quite outside the expected region of interest. This is because, in (e), we see that the thermal image has well defined levels of separation for regions. However, our method does detect a region where the score varies enough to make a RoF. In images such as this, since the thermal image itself is well segmented and possesses well defined visual features, we would not opt for a fusion method. However, we include this result to show that this case may also occur. We use 89,352 images for training and 90 images for validation. We finally use 294 images for testing against paired images, which are random cuts from registered images in [6], and 438 images for testing on a blind dataset, images that were unregistered in the dataset. All experiments have been conducted on Keras 2.2.4, with Tensorflow 1.13.1 as the backend, using a 1080Ti GPU on a 7820X i7 chipset processor. This work was supported by the Computer Visions and Biometrics Lab (CVBL), IIIT-Allahabad. Conclusion We present a novel method demonstrating that it is possible to fuse thermal images with optical priors having annotated regions, for focusing on specific regions. The model is unique in both its scope of work and its theoretical basis, wherein we show that the calculations are based on a separate logical space, constructed on the principles of the 2 Dimensional Discrete Wavelet Transform. We also introduce a simple statistical score for identifying regions with a significantly different distribution in output fused images. Lastly, we introduce a unique database [19] containing annotated thermal images of varying classes as a part of this work, for public use. While the outputs are promising, further scope lies in checking how the method behaves with other metrics for loss optimization, like geodesic distance, how it behaves when we use deeper networks, and how to extract and process information in the joint domain for better processing of fused images.
2021-07-15T01:15:53.085Z
2021-07-13T00:00:00.000
{ "year": 2021, "sha1": "bc6325bcd0643b59ae9f7d7faa76bd97766ebe36", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bc6325bcd0643b59ae9f7d7faa76bd97766ebe36", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
207489925
pes2o/s2orc
v3-fos-license
Are institutional review boards prepared for active continuing review? Continuing review is an important responsibility of Institutional Review Boards (IRBs). Though mentioned by many of the national and international guidelines, it is carried out routinely only in the UK. The reasons may be inadequate training, overworked IRBs, less enthusiasm among the IRB members, cost bearing, etc. So, the oversight mechanism at the local site, which is the responsibility of the IRB, is not fulfilled. Are there any solutions to overcome these difficulties? The IRBs should have a standard operating procedure for continuing review, members can be regularly trained, institutions can create their own internal Data and Safety Monitoring Boards who will only monitor studies where monitoring systems are non-existent, and there can be a budget allocated at the start of the study by the sponsor or the institution. In this way, we can try to safeguard the rights and well-being of the study participants. INTRODUCTION Clinical research is an ever-expanding field. More than 1,800 trials are registered in the Clinical Trials Registry of India. [1] As the field of clinical research expands, issues such as protocol deviation, discrepancies in the informed consent process, etc., come to the forefront, endangering the rights, safety, and well-being of the subjects. Various regulatory bodies such as the European Medicines Agency, [2] the International Conference on Harmonization-Good Clinical Practice (GCP) [3] and the Indian Council of Medical Research recommend continuing review; however, monitoring of investigator-initiated studies is a rarity in India. Pharmaceutical sponsored studies, in contrast, have inherent Data and Safety Monitoring Boards (DSMBs) for all studies, which monitor each study and report to the concerned IRBs. The IRB of King Edward Memorial (KEM) Hospital, Mumbai, India conducted seven site visits to monitor protocol adherence and the informed consent process, and found major aberrations in informed consent issues (6/7) and protocol deviations (5/7), among others. [9] Gogtay et al. studied the warning letters issued by the United States Food and Drug Administration (US-FDA) to investigators and IRBs, in which 15 out of 32 were due to issues related to informed consent processes and 2 out of 15 for inadequate or lack of monitoring systems. [10] Passive versus active Continuing review by an IRB can be passive or active. Active monitoring means monitoring studies through visits to the study site by IRB members, while passive monitoring is a review of documents submitted by an investigator in the form of periodical updates on the study. Most commonly, IRBs do only passive monitoring, which includes reviewing data such as serious adverse event (SAE) reports, reviews of protocol violations, progress reports, protocol amendments, etc., at pre-specified regular intervals according to the guidelines. [9] However, "active monitoring" should also be conducted, which includes the creation of a safety monitoring committee, random audits of the consent process, site visits, etc. [11] The ICMR 2006 guidelines recommend site visits by the IRB as one of the mechanisms to monitor on-going studies. [4] The IRB of Seth G.S. Medical College and KEM Hospital does passive monitoring by following its standard operating procedures (SOPs) for continuing review of study protocols, [12] review of study completion reports, [13] and review of SAE reports and unexpected adverse events.
[14] Routine versus for cause Routine For routine monitoring, Tata Memorial Centre, Mumbai, in its SOP15/V1, states that "sites will be identified for routine monitoring at the time of approval of the project by the full board which will be recorded in the minutes." [15] For cause Increased protocol violations, many studies going on simultaneously, a higher than expected enrollment rate, significant SAE reports, complaints from study participants, non-compliance, reports of inadequate infrastructure at study sites, and incidences of missing documents may prompt IRBs to conduct a site visit and continuing review. [15,16] IRBs tend to be more rigorous about monitoring those clinical trials involving path-breaking research which might have greater media coverage. This might not only be attributable to increased risks to the participants, but also to a higher sense of responsibility on the part of the IRBs, which might in turn be due to a feeling of exposure of the IRBs themselves. For example, the University of Utah appointed a patient of artificial heart implantation as a non-voting member of the IRB, to ascertain that the protocol review procedures were followed to the word. Another example that can be cited here is the case of xenotransplantation at Loma Linda Medical Center, where an IRB member monitored the consent process being administered. [17] Intensive monitoring is essential in trials that involve a higher unconfirmed risk, aggressive intervention and highly susceptible participants. [18] Current scenario A 4-year review of Canadian Research Ethics Boards published by the National Council on Ethics in Human Research in 1995 revealed that only 53% of IRBs required the submission of an annual report from investigators, which is the bare minimum for continuing review, and only 18% stated that they performed ongoing review of research. [11] A similar study conducted in Scotland revealed that 56% of the studied IRBs never conducted monitoring of research, while progress reports were never requested by 44% of them. [19] Data obtained by a review of Australian committees showed that only 44% undertook on-going review, which in almost all (99%) cases involved only annual review. [20] The aforementioned examples highlight the fact that monitoring of research is the exception, not the rule. A study published by Gogtay et al. reported that 40 warning letters issued by the US-FDA to IRBs between January 2005 and December 2010 showed the following major reasons: 93.8% highlighted that IRBs failed to follow SOPs and maintain documentation, 59.4% had inappropriate membership and quorum problems, 46.9% pointed toward informed consent issues, 21.9% failed to follow regulatory requirements, etc. [10] Furthermore, a report by McCusker et al., conducted at St. Mary's Hospital, Montreal, showed that there were incidences of wrong consent forms being used, discrepancies in fulfilling the inclusion criteria, and incidences of unsigned and undated consent forms, along with missing signatures of investigators and witnesses. Incidences of patients signing Informed Consent Documents (ICDs) in a language that they do not understand and of participants having very little understanding of the risks associated with study participation were also noted. [21] Hence, for the ethical conduct of clinical research, continuing review by IRBs is imperative.
Differences between monitoring by IRB and by DSMB DSMBs review data from on-going clinical trials and advise the sponsor on the safety of trial participants and the continuing validity and scientific value of the trial, while IRBs are accountable for assessing a trial to verify whether the risks to trial participants are curtailed. As compared to IRBs, DSMBs, by and large, have greater access to trial data, such as interim efficacy and safety outcomes. DSMBs are liable to monitor the study until the intended completion of follow-up, regardless of the treatment period, as in certain studies trends in survival or other severe results may not manifest until follow-up. On the other hand, IRBs continue reviewing a trial only until its completion at the site. DSMBs review the study quality and its definitive capability to address the scientific questions of interest, along with effectiveness and safety measures. Study data such as the recruitment rate, non-compliance reports, protocol violations, drop-outs, comprehensiveness of data, differences between site monitoring reports and centralized review, and baseline characteristics of study arms are also evaluated by DSMBs. However, IRBs are more concerned with the ethical aspect of the trial, and the continuing review conducted by them focuses on whether the risk has substantially changed in light of new safety data that becomes available or due to suspected mismanagement of the trial. A conflict of interest might arise since the DSMB is appointed by the sponsor. Similarly, one major problem faced by the IRBs is to find suitable and trained site monitors. According to our institutional experience, the IRB members monitoring the site are mainly faculty of the institute and, by that extension, peers of those whose site will be monitored. This might induce professional rivalry or engender enmity amongst them. The monitors of pharmaceutical sponsored studies are usually trained by their Medical Affairs teams. On similar lines, IRBs in India should create monitoring bodies that are trained by IRB members, and the training should be customized as per every protocol. Furthermore, the monitoring by a sponsor-appointed DSMB is pre-decided and is mentioned in the protocol submitted to the IRB. However, the monitoring conducted by IRBs is not pre-decided and is carried out when there are increased incidences of violations from the study site, or can be routinely performed. Objectives of continuing review by IRB • Ascertaining the ethical conduct of clinical research. • Reviewing the study protocol, relevant background information, ICDs, proposed plans for informing participants about the trial, and any other procedures associated with the trial. [22] • Ensuring the safety and well-being of the study participants. • Quality assurance and continued education of research staff. [21] • Ensuring data integrity. [20] Requirements Fulfilling the aforementioned objectives might further burden the already overworked IRBs in the form of additional manpower, training, and financial resources. Action plan To fulfill the aforementioned requirements, IRBs and regulatory bodies from different parts of the world have come up with innovative plans. For research which may involve more than minimal harm to the participants, such as possible serious adverse drug reactions, serious morbidity and mortality, the IRB, in the absence of a special committee, might appoint one to monitor data and safety.
[15,20] In 1998, a Tri-Council Policy Statement issued by three research funding bodies of the Canadian government suggested that every institution conducting funded research should have its own monitoring programs. Apart from annual submission to the IRB, there should be an official review and arbitrary inspection of the informed consent process, assessment of adverse event reports, setting up of safety monitoring committees, intermittent review of study documents by a third party, and uninformed evaluation of patients' charts. [21] One can cite the example of the internal DSMB appointed by the TATA Memorial Hospital Human Ethics Committee for conducting monitoring activities on behalf of the IRB. [15] Furthermore, in view of the lack of additional manpower, certain ethics committees have come up with novel strategies for continuing review. This is exemplified by a Scottish program, in which the IRB sent a questionnaire to around 300 investigators of ongoing projects. Ten percent of the projects were followed up by two board members, who reviewed their responses, assessed them further with a detailed questionnaire, and inspected consent forms and case record forms. They concluded that following such a strategy would require an average of 6 person-hours, at a cost of £120. [11] Though this process of continuing review adds value to the conduct of clinical research, who should bear the cost of this process is a topic of heated debate. Weijer points out potential sources of such funding. Some IRBs in Canada charge $1000 or more from pharmaceutical companies to review a protocol. This will not only take care of the direct costs of continuing review, but also allow the IRBs to increase staff and computerize their systems, thereby increasing the efficiency of review. [11] As per McCusker et al., expenses related to monitoring of non-funded projects might have to be borne by the institution. [21] Since the types of protocols received and the available infrastructure of each IRB vary, there should be customized SOPs for continuing review. One of the ways of continuing review is monitoring of the consent process. According to the Guidelines on Research Involving Human Subjects, Canada, there are two means of doing this. One can inspect the way in which consent is administered to the study participants, or one can enquire with the subjects after the consent process to know how much they have understood. [11] Apart from the usual practice of a participant's family member giving consent on behalf of the participant for his/her involvement in the study, there are instances when an IRB has hired an advocate for the study participants who would be in attendance when the consent is being administered. McGrath and Briscoe also cite an example where a research center had employed a permanent advocate for this purpose. [11] For monitoring data integrity, IRBs usually review the audit documents of monitoring committees employed by the pharmaceutical companies for sponsored research. However, the biggest concern for IRBs is investigator-initiated research, which is not scrutinized by external agencies like data safety monitoring boards, as in pharmaceutical-sponsored trials. To circumvent this problem, an institution may set up an internal program for intermittent inspection of data. [11] Taking this into consideration, the IRB of Seth G.S. Medical College and KEM Hospital undertakes internal monitoring by following its SOP.
[16] Furthermore, Pilon states that annual reports from investigators to ethics boards do little to protect research participants; there should be collaboration between IRBs and investigators, where they classify protocols according to the risk involved and build up new systems to monitor high-risk research protocols. [18] There are a number of ways which might help the IRB classify studies on the basis of the risk involved so as to decide different levels of monitoring. Levine recommends categorizing risks as social, economic, physical, and psychosocial. Furthermore, benefits can be sorted as direct health benefits, psychosocial, and kinship. This structure might help IRBs choose which studies should be monitored. [18] The following actions were taken by the KEM Hospital IRB in view of the violations found by its site monitoring committee. In cases of protocol deviations and non-filing of progress reports, the IRB asked for explanations and the investigators were cautioned to avoid recurrence of the same. In addition, further enrollment was constrained and sponsors were asked to submit reports to the IRB. There were instances when the investigator was unaware of the protocol and the informed consent process. In such situations, continued GCP training of the investigators was suggested. [9] CONCLUSION In summary, though certain disagreements over the role of continuing review exist among researchers, such as concerns that it may affect the trust element between IRBs and investigators, and though some IRBs find continuing monitoring unjustifiably costly, continuing monitoring and timely project reviews by IRBs ascertain the ethical conduct of research. Continuing review by IRBs should be recognized as a means of quality assurance and not as moral policing, thereby achieving the definitive goal of educating researchers and safeguarding the safety and well-being of the participants.
2018-04-03T02:32:06.852Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "ebbe6e367c3988eb5b64a7c6069e1c1fe4337b9d", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/2229-3485.124553", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8d6eece446a25df280cef1aaf4c24fd7e0b3dcd8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16001927
pes2o/s2orc
v3-fos-license
Comparative Endothelial Cell Response on Topographically Patterned Titanium and Silicon Substrates with Micrometer to Sub-Micrometer Feature Sizes In this work, we evaluate the in vitro response of endothelial cells (EC) to variation in precisely-defined, micrometer to sub-micrometer scale topography on two different substrate materials, titanium (Ti) and silicon (Si). Both substrates possess identically-patterned surfaces composed of microfabricated, groove-based gratings with groove widths ranging from 0.5 to 50 µm, grating pitch twice the groove width, and groove depth of 1.3 µm. These specific materials are chosen due to their relevance for implantable microdevice applications, while grating-based patterns are chosen for the potential they afford for inducing elongated and aligned cellular morphologies reminiscent of the native endothelium. Using EA.hy926 cells, a human EC variant, we show significant improvement in cellular adhesion, proliferation, morphology, and function with decreasing feature size on patterned Ti substrates. Moreover, we show similar trending on patterned Si substrates, albeit to a lesser extent than on comparably patterned Ti substrates. Collectively, these results suggest promise for sub-micrometer topographic patterning in general, and sub-micrometer patterning of Ti specifically, as a means for enhancing endothelialization and neovascularisation for novel implantable microdevice applications. One area of particular interest in this regard has been the modulation of endothelial cell (EC) response to polymeric vascular graft materials, where the ultimate goal has been to facilitate endothelialization over the graft surface and minimize cellular detachment. A number of recent in vitro studies have demonstrated that patterning of such materials with micrometer to submicrometer scale gratings can favorably affect EC responses such as adhesion, proliferation, and morphology, among others [9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25]. This, therefore, suggests promise for enhancing endothelialization in vivo. However, understanding of EC response to other patterned materials, such as titanium (Ti) and silicon (Si), is limited. This represents an important knowledge gap, since patterning may provide means for enhancing the performance of other novel implantable microdevices based upon these materials (e.g., Ti-based pro-healing vascular stents [26], or Si-based wirelessly-controlled implantable drug delivery microchips [27]). Herein, we begin to address this knowledge gap through the study of in vitro EC response on Ti and Si substrates with identically-patterned surfaces composed of groove-based gratings with groove widths ranging from 0.5 to 50 µm, grating pitch twice the groove width, and groove depth of 1.3 µm. Fabrication of these precisely-defined patterned surfaces is enabled by both conventional Si micromachining techniques, as well as our novel Ti Deep Reactive Ion Etching (Ti DRIE) process [28,29], which provides opportunity for machining of Ti at length scales well beneath that which is possible with other techniques (e.g., laser micromachining, microelectrodischarge machining, ultraprecision CNC, etc.). Using EA.hy926 cells, a human EC variant, we show that cellular responses such as adhesion, proliferation, elongation, and atheroresistant signaling are enhanced with decreasing pattern feature size for both materials; however, the magnitude of these responses is considerably larger on patterned Ti relative to comparably patterned Si.
Collectively, these results suggest promise for submicrometer patterning of Ti as a new means for enhancing endothelialization and neovascularisation for novel implantable microdevice applications. Materials and Methods Patterned Substrate Design Figure 1 schematically illustrates the layouts of the patterned Ti and Si substrates used in this study, both of which share identical dimensions and patterning. One of the sub-patterns in each substrate is left unpatterned as a control, while the remainder are surface gratings consisting of periodic groove arrays with groove widths ranging from 0.5 to 50 µm, and grating pitch equal to twice the groove width (i.e., grating pitch = groove width + ridge width). Each grating sub-pattern is orthogonally-oriented with respect to its neighbors, and is surrounded by a 100 µm wide unpatterned border (thus yielding 200 µm total width of unpatterned region between neighboring sub-patterns). Use of this substrate layout provides opportunity for simultaneous evaluation of a broad feature size range within the same substrate, and therefore, within the same cell culture conditions. Patterned Substrate Fabrication Figure 2 outlines the fabrication processes for the patterned Ti and Si substrates. In both cases, polished substrates were first subjected to a standard solvent cleaning procedure consisting of sequential sonication in acetone and isopropanol, followed by rinsing in deionized (DI) water, and drying with N2 gas. For the patterned Ti substrate fabrication, polycrystalline, Grade 1, commercially-pure Ti substrates were used (99.6% Ti, 200 µm thickness; Tokyo Stainless Grinding). Following cleaning, an etch mask of 200 nm SiO2 was deposited using plasma enhanced chemical vapor deposition (PECVD) (VLR, Unaxis). Hexamethyldisilazane (HMDS) was then applied as an adhesion promoter, followed by photoresist (PR) spin-coating (mr-I 7020, Micro Resist Technology). Afterwards, the PR was patterned using a Si imprint master and thermal nanoimprint lithography (NIL) (NX2000, Nanonex). Oxygen-based dry etching was used to remove the residual PR layer at the base of the features after imprinting (E620-R&D, Panasonic Factory Solutions). This was followed by transfer of the PR patterns into the underlying SiO2 etch mask by fluorine-based dry etching (E620-R&D). The mask patterns were then transferred to a depth of 1.3 µm into the underlying substrate using a modified version of the Ti DRIE process optimized for nanoscale features. Finally, fluorine-based dry etching was used to remove the remaining etch mask. For the patterned Si substrate fabrication, single crystal wafers were used (100 mm diameter, P/B doping, <1-0-0> orientation, and 525±25 µm thickness; Silicon Quest International). Following cleaning, the wafers were dipped in buffered hydrofluoric acid, rinsed with DI, dried with N2, and dehydration baked. The wafers were then primed with HMDS, followed by PR spin-coating (AZ nLOF 5510, Clariant). After lithographic patterning using projection lithography (GCA Autostep 200 i-line Wafer Stepper, 3C Technical), the substrates were subjected to brief descumming by O2 plasma (PE-IIA, Technics) to ensure complete removal of PR residues that may remain at the base of the features after developing. The PR patterns were then transferred into the underlying Si substrate using fluorine-based dry etching (Plasmatherm SLR 770, Unaxis).
Using this process, patterned Si substrates were produced for two purposes: 1) Substrates with grating depths of 1.3 µm were used for cell studies; and 2) Substrates with grating depths of 0.3 µm served as imprint masters in the fabrication of the patterned Ti substrates. For the latter, a coating of perfluorodecyltrichlorosilane (FDTS) was applied using molecular vapor deposition (MVD 100E, Applied Microstructures) to minimize resist adhesion during imprinting. Patterned Substrate Characterization The fidelity and uniformity of the substrate patterning were characterized using scanning electron microscopy (SEM) (SUPRA 55, Leo). Imaging was performed at 5 kV accelerating voltage without need for application of conductive coatings on either substrate. Mean groove width, ridge width, and grating pitch for each sub-pattern were calculated based on measurements made at five different locations within each sub-pattern (i.e., center and four corners). The groove depths and profiles in the patterned substrates were characterized via cross sectioning and SEM imaging. Focused ion beam (FIB) milling was used to cross section the Ti substrates (CrossBeam XB1540, Carl Zeiss Microscopy), while cleaving was used to cross section the Si substrates. The depths of the larger gratings of both substrates were corroborated using a surface profilometer with a 12 µm tip diameter stylus (Dektak 8, Veeco Metrology Group). The surface roughness of the patterned substrates was characterized using atomic force microscopy (AFM) (Dimension 3100, Nanoscope IIIa, Veeco Metrology Group). Commercially-available silicon nitride tips were used (tip radius of curvature < 10 nm, tip height = 14-16 µm, and spring constant = 1.2-6.4 N/m; Bruker AFM Probes). Imaging was performed in tapping mode with 1 Hz scan rate. Measurements were made within the middle of the ridge-tops for all gratings, well away from the ridge edges. For the 0.5 and 0.75 µm gratings, measurements were made over 0.20 µm × 0.20 µm areas (i.e., 0.04 µm² measurement area). For the 50 µm gratings and unpatterned sub-patterns, measurements were made over 0.53 µm × 0.53 µm areas (i.e., 0.28 µm² measurement area). Average roughness, root mean square roughness, and maximum roughness values for each sub-pattern were calculated based on measurements made at five different locations within each sub-pattern (i.e., center and four corners). Endothelial Cell Assays Prior to all assays, the patterned substrates were subjected to standard solvent cleaning, followed by sterilization by autoclaving (121 °C for 35 min) and overnight UV exposure. Trypsinized HECs were seeded at a density of 22,000 cells/cm² on the patterned substrates, which were not subjected to any pretreatment prior to seeding (i.e., neither oxygen plasma treatment, nor pre-incubation with fibronectin, collagen, BSA, etc.). The cell-seeded substrates were cultured for various durations (30 min, 1 d, and 5 d), after which non-adherent cells were removed by rinsing twice in phosphate buffered saline (PBS). Cells that remained on the surface were visualized by fluorescent staining of nuclei to evaluate adhesion and proliferation response (Hoechst 33342, Life Technologies), or the cellular membrane to evaluate substrate coverage (Rhodamine 123, Life Technologies). Live/dead assays were also performed on surface-adhered cells using propidium iodide and 4′,6-diamidino-2-phenylindole (Life Technologies).
Mean cell densities for each sub-pattern were calculated based on measurements made at five different locations within each sub-pattern (i.e., center and four corners). Immunostaining For morphological and cytoskeletal architecture studies, non-adherent HECs were removed by PBS rinsing after prescribed culture durations. Remaining HECs on the patterned substrates were fixed with 4% paraformaldehyde for 15 min, permeabilized with 0.2% Triton X-100, blocked with 1 mg/ml BSA for 10 min, rinsed with PBS, and stained for 10 min with Alexa Fluor 488 phalloidin (Life Technologies) for F-actin, and Hoechst 33342 (Life Technologies) for nuclei. The elongation and orientation of HECs on the patterned substrates were determined based on measurements made on immunostained cells after 5 d culture. Using ImageJ, elongation was calculated as the ratio of major to minor cell axis lengths (as defined by the actin microfilament network), while orientation was characterized as the angular deviation between the cell major axis and the grating axis. Angular deviation for HECs on the unpatterned sub-patterns was determined relative to an arbitrary reference axis whose orientation was held fixed for each field of view the measurements were made over. Means were calculated based on measurements made at five different locations within each sub-pattern (i.e., center and four corners). For phenotype and function studies, the expression of two EC markers, von Willebrand Factor (vWF) and vascular cell adhesion molecule-1 (VCAM-1), was characterized. vWF maintains homeostasis through binding to FVIII, platelet surface glycoproteins, and constituents of connective tissue. It also initiates platelet aggregation via binding to exposed structures of injured vessel walls at high arterial shear rates. Furthermore, it is thought to assist during platelet aggregation by bridging adjacent platelets at high shear rates. The function of vWF is strongly shear rate dependent, as fluid dynamic conditions, as well as mechanical forces, are crucial for the conformational transition of vWF to develop its interaction with endothelial matrix proteins, as well as platelets, in case of vessel injury [30,31]. Essentially, vWF is considered an anti-thrombotic biomarker [32]. VCAM-1 plays an important role in both immune responses and in the recruitment of lymphocytes and monocytes, and in leukocyte adhesion to sites of inflammation [33]. It appears to function as a leukocyte-endothelial cell migration molecule [34]. Because of this, VCAM-1 is recognized as a significant biomarker of endothelial dysfunction [35,36]. HECs were seeded at a density of 50,000 cells/cm² and cultured for 1 day. Substrates were then washed with PBS and adherent HECs were fixed and permeabilized in −20 °C methanol for 20 min. Substrates were then washed in PBS, incubated with blocking buffer (4 g BSA + 80 mL PBS + 150 µL Triton X-100) for 1 h, and incubated with 1:400 rabbit polyclonal anti-vWF/VCAM-1 in antibody dilution buffer (4 g BSA + 40 mL PBS + 120 µL Triton X-100) overnight at 4 °C. Substrates were then rinsed in PBS and incubated with 1:1000 Texas Red 598 donkey anti-rabbit secondary antibody for 1 h at 25 °C. Finally, nuclei were cross-stained using Hoechst 33342. Using ImageJ, protein expression per cell area was quantified by measuring the average fluorescence signal intensity within a cell and dividing this value by the area of the cell. Means were calculated based on measurements made on at least 10 cells per sub-pattern.
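The ImageJ-based measurements described above (major/minor axis ratio, angular deviation from the grating axis, and average fluorescence intensity per cell area) are simple enough to re-express in code. The following is a minimal numpy sketch of how such per-cell metrics could be computed from a binary cell mask and a matching fluorescence image; the function name and inputs are hypothetical, and this is not the authors' actual analysis pipeline.

```python
import numpy as np

def cell_morphometry(mask, intensity, grating_axis_deg=0.0):
    # mask: boolean array marking the pixels of a single cell
    # intensity: matching fluorescence image (same shape as mask)
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys]).astype(float)
    coords -= coords.mean(axis=1, keepdims=True)

    # Principal axes of the pixel distribution from the 2x2 covariance
    # matrix: square roots of the eigenvalues are proportional to the
    # minor and major axis lengths, so their ratio gives the elongation.
    evals, evecs = np.linalg.eigh(np.cov(coords))  # ascending eigenvalues
    elongation = np.sqrt(evals[1] / evals[0])

    # Angle of the major axis (leading eigenvector), then the deviation
    # from the grating axis folded into [0, 90] degrees, since an axis
    # has no preferred sign.
    theta = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    deviation = abs((theta - grating_axis_deg + 90.0) % 180.0 - 90.0)

    # Marker expression quantified as in the text: average fluorescence
    # signal within the cell divided by the cell area (in pixels).
    expression = intensity[mask].mean() / mask.sum()
    return elongation, deviation, expression
```

On elongated, grating-aligned cells this yields a large axis ratio and a small angular deviation, matching the trends the paper reports in Table 3.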
Fluorescent Imaging Fluorescent imaging of HECs on the patterned substrates was performed using a fully apochromatically corrected stereomicroscope with 12.5:1 zoom (M125, Leica). Images were acquired with a 10X objective lens; binning of 4×4, gain of 8.0, and brightness of 1.2 were used for image acquisition. Stained cells were imaged using a Leica SP5 confocal microscope. Spot Imaging Software and Leica SP5 LAS Software were used for image acquisition, and images were processed using ImageJ (v1.46, NIH). Scanning Electron Microscopy of Cell-seeded Substrates SEM imaging of HECs on the patterned substrates was also performed. Prior to imaging, cells were washed with PBS, fixed with 4% glutaraldehyde, and post-fixed with 0.5% OsO4 for 1 h each. They were then dehydrated through a graded series of alcohols and dried in a critical point dryer (CPD 030, Balzer). Imaging was performed at 5 kV accelerating voltage without need for conductive coatings on either substrate type. Statistical Analyses All cell studies were repeated in triplicate. Statistical analyses were performed using single-factor ANOVA and commercially-available software packages (Excel, Microsoft; and SigmaPlot 5.0, Systat Software). Results Patterned Substrate Characterization Figure 3 shows representative SEM micrographs of selected patterned Ti and Si substrates. Precisely defined and highly uniform patterning is observed on both substrate materials. This is further corroborated by the excellent agreement between the expected and measured grating groove widths, ridge widths, and pitches for selected sub-patterns reported in Table 1. Similar agreement was observed for the other sub-patterns (data not shown). As discussed earlier, the Ti DRIE process is the only technique capable of producing such diminutive and precisely-defined features within bulk Ti substrates. Figure 4 shows cross section SEM micrographs of the 0.5 µm gratings on the patterned Ti and Si substrates. As can be seen, nearly identical gratings have been produced in both substrates. Moreover, groove profiles are observed to be nearly rectangular. Finally, identical groove depths of 1.3 µm are achieved for both substrates. Results from surface profilometry measurements of wider grooves elsewhere on the patterned substrates returned similar depths (data not shown), thus indicating a uniform groove depth across all sub-patterns on the Ti and Si substrates. Table 2 reports AFM-based surface roughness measurements for the grating ridge-tops of selected patterned Ti and Si substrates. Average roughness, Ra, represents the average height of the roughness irregularities, while root mean square roughness, Rsq, is more sensitive to low and high points, and maximum roughness, Rmax, reports the largest aberrations. In all sub-patterns, Ra and Rsq are extremely small (i.e., ≤2 nm), thus suggesting that surface roughness should minimally influence cellular response. Endothelial Cell Adhesion and Proliferation Figure 5 shows HEC densities at various time points on the patterned Ti and Si substrates. At the 0 d time point (i.e., 30 min), we observe a trend of increasing adhesion with decreasing feature size on the patterned Ti substrates, and response on the patterned Ti surfaces is greater than the unpatterned Ti control, e.g., HEC densities on the 0.5 µm Ti gratings are 2.32 times greater than unpatterned Ti. For the patterned Si substrates, a similar size-dependent response is observed, e.g., HEC densities on the 0.5 µm Si gratings are 2 times greater than unpatterned Si.
However, adhesion on patterned Si is generally lower than on comparably patterned Ti, e.g., HEC densities on the 0.5 µm Si gratings are 14% lower than on 0.5 µm Ti gratings. Finally, adhesion on both patterned and unpatterned Ti and Si is greater than the tissue culture plastic control. At later time points (i.e., 1 d and onwards), Figure 5 shows that HEC proliferation increases with decreasing feature size on the patterned Ti substrates, and response on patterned Ti surfaces is greater than the unpatterned Ti control, e.g., at 5 d, HEC densities on the 0.5 µm Ti gratings are 2.79 times greater than unpatterned Ti. A similar trend for the patterned Si substrates is also seen, e.g., at 5 d, HEC densities on the 0.5 µm Si gratings are 4.14 times greater than unpatterned Si. However, proliferation on patterned Si is generally lower than on patterned Ti at comparable feature sizes. Endothelial Cell Morphology and Cytoskeletal Architecture Figure 6 shows SEM micrographs of HECs on 0.5 µm gratings and unpatterned sub-patterns of both substrate materials after 1 d. Significant cellular elongation is observed on the Ti grating and alignment along the grating axis is clear. For the Si grating, significant cellular elongation is also seen, as is alignment with the grating axis. However, more favorable cellular morphology occurs on the Ti grating relative to the Si grating, as evidenced by greater flattening and spreading. Greater spreading is also seen on the unpatterned Ti relative to unpatterned Si surfaces. Similar trends are observed at later time points, as illustrated in Figure 7, which shows SEM micrographs of HECs on patterned Ti and Si substrates after 5 d. These images were taken at the boundary between the 0.5 µm and 50 µm sub-patterns, which are separated by a 200 µm wide unpatterned region, and are orthogonally oriented with respect to one another. Cellular elongation and alignment are again seen on the 0.5 µm gratings of both materials, with more favorable morphology and greater coverage on the 0.5 µm Ti grating. Moreover, the Ti micrograph clearly illustrates the spatial specificity of HEC response to the 0.5 µm Ti grating, as evidenced by the decreasing cell alignment and density with distance from the boundary of the 0.5 µm sub-pattern. Further evidence of the influence of sub-micrometer patterning and substrate material is provided by Figure 8, which shows fluorescence micrographs of HECs on 0.5 µm gratings and unpatterned sub-patterns of both substrate materials after 5 d. Strong elongation and alignment are observed on the Ti grating. Moreover, a nearly confluent HEC layer is observed on the Ti grating, while sparser coverage is seen on the unpatterned Ti. For the Si grating, increased elongation and alignment are also observed relative to the unpatterned Si control. However, HEC coverage on the Si grating is lower than on the comparable Ti grating. The influence of sub-micrometer patterning and substrate material on cytoskeletal architecture is illustrated in Figure 9, which shows HECs stained for nuclei and F-actin on 0.5 µm gratings and unpatterned sub-patterns of both substrate materials after 5 d. Strong cytoskeletal alignment is evident on the gratings of both materials, which is corroborated by the increasing cellular elongation ratios and decreasing angular deviations with decreasing feature size reported in Table 3.
However, the microfilament network on the 0.5 µm Ti grating is observed to be more robust relative to the 0.5 µm Si grating, and greater elongation is seen on the Ti grating. Cytoskeletal alignment is not observed on the unpatterned controls for either substrate. This is further corroborated by the measured ~45° mean angular deviation on the unpatterned controls for either substrate, which is indicative of randomized cell orientation. Endothelial Cell Function Figure 10 shows results for expression of two important EC markers, vWF and VCAM-1, on the patterned Ti and Si substrates. As discussed earlier, vWF is a functional marker expressed by ECs that plays a key role in homeostasis, whereas VCAM-1 is a marker for inflammation. As shown in Fig. 10A, vWF is uniformly distributed within the cytoplasm of HECs on the 0.5 µm gratings of both Ti and Si, but is more confined to the perinuclear regions on the unpatterned controls. Moreover, quantitative measurements show increasing expression with decreasing feature size on both Ti and Si patterned substrates, and response on the patterned surfaces is greater than on their respective unpatterned controls, e.g., expression on the 0.5 µm Ti gratings is 2 times greater than unpatterned Ti. However, expression on patterned Si is considerably lower than on comparably patterned Ti, e.g., expression on the 0.5 µm Si gratings is 46% lower than on 0.5 µm Ti gratings. Finally, expression on both patterned and unpatterned Ti and Si is generally greater than the tissue culture plastic control. Figure 10B shows that expression of VCAM-1 is also feature size-dependent, although in a converse manner, with expression largely confined to the perinuclear regions on the 0.5 µm gratings of both Ti and Si, but more widely distributed on the unpatterned controls. Moreover, quantitative measurements show decreasing expression with decreasing feature size on both Ti and Si patterned substrates, and expression on patterned surfaces is lower than on the unpatterned controls, e.g., expression on the 0.5 µm Ti gratings is 3.6 times lower than unpatterned Ti. However, expression on patterned Si is higher than on comparably patterned Ti, e.g., expression on the 0.5 µm Si gratings is 41% higher than on 0.5 µm Ti gratings. Finally, expression on sub-micrometer patterned Ti and Si is lower than the tissue culture plastic control. Discussion The data presented herein demonstrate that EC response is enhanced with decreasing feature size on patterned Ti substrates down to 0.5 µm; specifically, decreasing feature sizes are shown to promote greater adhesion, proliferation, and elongation, as well as a more athero-resistant phenotype in vitro, which therefore suggests promise for facilitating the reestablishment of a functional endothelium in vivo. Moreover, the data show that patterning of Si substrates also elicits favorable trending in EC response with decreasing feature size, albeit to a lesser extent than comparable Ti gratings. This therefore suggests that while topographical cueing can be used to promote enhanced EC response on both materials, their differing surface chemistries affect the ultimate magnitude of cellular response. Endothelial Cell Response as a Function of Substrate Material and Topography While exploration of the mechanisms underlying the differential EC response observed on comparably patterned Ti and Si substrates is beyond the scope of the current study, one potential explanation could lie in the differing stiffnesses of these substrates.
Studies have shown that ECs, like many adherent cells, are affected by variation in substrate stiffness. Typical responses include migration toward stiffer regions, as well as increased adhesion and spreading with increasing substrate stiffness [37][38][39][40]. However, our observations (e.g., Figure 6) show reduced spreading on the stiffer Si substrates (E = 105 GPa for Ti [41] and E = 130-188 GPa for Si [42]). Moreover, the stiffnesses of the current substrates are likely to be well beyond the threshold for significant differential response, since their moduli greatly exceed those of the polymeric substrates used in previously reported studies (0.1 kPa-2.5 MPa). Consequently, this argues against substrate stiffness as a potential explanation in the current study. An alternative, and perhaps more plausible, explanation for the observed differential response may lie in the differing adhesive ligand presentation on these materials. It is well known that adhesive ligands (e.g., R-G-D) presented by surface-adsorbed plasma proteins (e.g., fibrinogen and fibronectin) act as binding sites for transmembrane integrins. Moreover, it is well known that the specific presentation of these ligands, in turn, depends upon the physicochemical properties of the substrate [7]. Finally, it has been recently shown that EC adhesion and spreading typically increase with increasing ligand density until saturation [43,44]. Consequently, this suggests that adhesive ligand density and/or presentation is more favorable on Ti compared to Si (or more specifically, on the oxide surfaces presented thereon [45]). Regarding topography specifically, it is well known that micro- to nano-scale topography can independently affect the amount, spatial distribution, and conformation of adsorbed proteins, as well as the composition of the adsorbed protein layer that forms during exposure to complex biological fluids [46]. Moreover, it has been recently shown that fibronectin adsorption is increased considerably on patterned Si substrates with deep sub-micrometer scale gratings when compared to planar controls [47,48], as is the degree of native folding [48]. This suggests that the enhanced EC responses observed in the current study may arise from increasing protein adsorption with decreasing feature sizes, which provides increased protein-protein interactions that help stabilize native conformations against denaturation by protein-substrate interactions. However, the considerable differences in grating depth (i.e., 0.09 µm in [47,48] vs. 1.3 µm in the current study) and profile (i.e., sinusoidal in [48] vs. rectangular in the current study) demonstrate the need for further studies to quantify protein adsorption and conformation on the current patterned Ti and Si substrates specifically. Comparison to Previous Studies Although the current study is the first to explore differential EC response on micrometer to sub-micrometer patterned Ti and Si substrates, it is instructive to compare our observations to those reported by others for polymeric substrates patterned with gratings of similar feature size. As in the current study, cellular alignment and elongation along the grating axis is reported in nearly all studies with features smaller than the dimensions of typical ECs (i.e., features < 20 µm) [9][10][11][12][13][14][15][18][19][20][21][22][23][24][25].
However, while increasing proliferation with decreasing groove width is observed in the current study, the opposite [13,19,25] or minimal differential response [11] has been reported in studies with patterned polymers. Moreover, while decreasing feature sizes are shown to upregulate vWF and downregulate VCAM-1 expression in this study, minimal differential expression has been reported by others [13,23]. These differences may arise from a number of factors, including, among others: a) the relatively shallow nature of the grooves in those earlier studies (i.e., groove depth ≤1 µm vs. 1.3 µm for the current study); b) the different cell types used (i.e., human umbilical vein ECs, human endothelial progenitor cells, and bovine aortic ECs vs. HECs in the current study); c) the differing substrate surface chemistries and mechanical properties (e.g., polyurethane, cyclic olefin copolymer, polydimethylsiloxane, etc. vs. Ti and Si in the current study); and/or d) the absence of substrate pre-treatment prior to cell seeding in the current study (i.e., neither oxygen plasma treatment, nor pre-incubation with fibronectin, collagen, BSA, etc. were used in the current study). Implications for Ti- and Si-based Implantable Microdevices The favorable trending of EC response with decreasing feature size in the current study suggests promise for use of patterning as a means for improving the safety, efficacy, and performance of novel Ti- and Si-based implantable microdevices. For example, within the context of vascular stents, the observed enhancement of proliferation and athero-resistant phenotype in vitro suggests potential for use of patterning to accelerate endothelialization and healing in vivo. This could therefore provide means for addressing a lingering safety concern associated with current drug-eluting stents, namely late-stent thrombosis resulting from delayed healing [49]. Indeed, first evidence demonstrating this potential in vivo was recently reported by Sprague et al., who showed that endothelialization was accelerated on stents patterned with 15 µm groove width gratings [50]. However, since the current study demonstrates favorable trending of EC response with decreasing feature size into the sub-micrometer realm, this suggests potential for even further improvement. Table 3. Confocal microscopy measurements of human endothelial cell elongation ratio and angular deviation (from the grating axis) after 5 day culture on patterned Ti and Si substrates. Angular deviation on unpatterned sub-patterns was determined relative to an arbitrary reference axis that was held fixed for each field of view. Data = mean ± standard deviation (n = 5). doi:10.1371/journal.pone.0111465.t003 We have recently demonstrated the fabrication of the first sub-micrometer patterned stents that will eventually provide capability for evaluating this potential in vivo [26]. It is also conceivable that patterning-induced enhancement of EC response could provide means for facilitating greater neovascularization on Si-based wirelessly-controlled implantable drug delivery microchips [27]. As reported by Bettinger et al., sub-micrometer patterning of a polymeric substrate can promote the formation of EC-based supercellular band structures that facilitate the subsequent formation of capillary tubes roughly aligned with the grating axis [13]. As such, potential may exist for using patterning to guide capillary tube formation towards the reservoir openings on the surface of drug delivery microchips.
Although further studies are required to validate this conjecture, this suggests potential for facilitating the establishment of a direct connection to the surrounding vasculature for applications where rapid systemic delivery is required. Moreover, the observation in the current study of greater EC response on patterned Ti relative to patterned Si suggests the potential superiority of Ti for such devices. Conclusions We have demonstrated the fabrication of precisely-defined, grating-based patterns on bulk Ti and Si substrates with groove widths ranging from 0.5-50 µm. In vitro studies evaluating HEC adhesion, proliferation, morphology, and functionality on these substrates have shown favorable trending of cellular response with decreasing feature size on the patterned Ti substrates. These studies have also shown that patterning enhances HEC response on Si substrates, although to a lesser extent than comparably patterned Ti. Collectively, these results suggest promise for using sub-micrometer topographic patterning to enhance the safety, efficacy, and performance of implantable microdevices based on these substrate materials. Moreover, these results particularly highlight the potential superiority of sub-micrometer patterned Ti for such applications, thus motivating further studies in this regard.
2017-04-20T15:47:43.999Z
2014-10-30T00:00:00.000
{ "year": 2014, "sha1": "ddf94a9607b7b0c524800c089308de27c33c4ed3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0111465", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee7ef668f7ab602368b2398fe1dbb909521b5bcd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
57737418
pes2o/s2orc
v3-fos-license
Prevalence of primary and secondary infertility in the Colombo District Objective: To estimate the prevalence of primary and secondary infertility in the Colombo District Design: A cross sectional survey Setting: District of Colombo Subjects: Two thousand currently married women of the reproductive age group Measurements: Prevalence of primary and secondary infertility using the WHO definitions. When a woman has never conceived in spite of cohabitation and exposure to pregnancy for a period of twelve months, the condition was defined as primary infertility. Secondary infertility was defined as being present when a woman had previously conceived but had been unable to conceive subsequently despite cohabitation and exposure to pregnancy for a period of 24 months. If the woman had breast fed the previous infant, then exposure to pregnancy was calculated from the end of the period of lactation amenorrhoea. Results: The point prevalence of primary infertility was estimated as 40.5 per 1000 married women of the reproductive age group (95% C.L. 32.0-49.0 per 1000). The prevalence increased progressively with increasing current age and age at marriage of women and their partners and was higher among employed women. The point prevalence of secondary infertility was estimated as 160 per 1000 women of the reproductive age group (95% C.L. 143.9-176.0 per 1000). Increasing current age of women and their spouses, higher age at marriage of the male and low socio-economic status were associated with increased prevalence. A history suggestive of post-partum or post-abortal infection was obtained in 20% of persons who were secondarily infertile. Conclusions: Prevalence of primary infertility is low in the Colombo District, but amounts to an estimated 10,700-16,500 currently married women. The prevalence of secondary infertility is high, with post-partum and post-abortal infection contributing to a fifth of the cases. Introduction There is a paucity of information on the prevalence of primary and secondary infertility. The limited data available are derived from demographic sources and from clinical studies. Data derived from a census is liable to be inaccurate, as pregnancy wastage may be recorded as childlessness, or a childless woman may report an adopted offspring as her own during a census. Such estimates also assume that all currently married women are "exposed" to the risk of pregnancy. Information from clinical sources suggests that infertility is an increasing problem in Sri Lanka. It is not clear whether there is a true increase in the number of infertile couples or more couples are seeking treatment as a result of improved services and a changing social environment which permits the acknowledgment of the problem. There may be a duplication of statistics, as couples are known to move from one provider to another. Furthermore, cases identified in the hospital setting cannot be related to a definite geographic area or a population base, and as such an estimation of incidence or prevalence is not possible. Thus, the present cross sectional community based survey was undertaken with the objective of estimating the prevalence of primary and secondary infertility.
Methodology The target population consisted of an estimated 336,000 currently married women of the reproductive age group (15-49 years) (1) living within the Colombo District. Sample size was calculated assuming a prevalence of 3%, as the worldwide prevalence is estimated to be 2-10% (2). Confidence limits (C.L.) for the estimate were set at 95% and the margin of error at ±1%. Since a cluster sampling procedure was planned, the design factor was taken as 1.5 and a further 15% was added to account for possible non-response (3). Sample size was thus calculated to be 2000. The women were selected using a multi-stage stratified cluster sampling procedure. A cluster included 40 women and was based in a Grama Sevaka area. Allocation of clusters was carried out with probability proportional to size. The unit of enumeration was the individual woman 15-49 years of age, currently married and resident in the household for at least 6 months. All such women living in an identified household were included in the study. Data were collected in respect of the woman identified and her spouse. Socio-demographic characteristics, a history of medical and gynaecological illness past or present, details of menstrual history, fertility patterns and a history of contraceptive use were obtained from the couple using a structured pre-tested questionnaire administered by trained interviewers. Informed consent was obtained from all individuals and ethical approval was obtained from the Ethical Review Committee of the Faculty of Medicine, University of Colombo. All couples identified as being infertile were offered investigation and treatment if they so desired. The definitions given by the World Health Organization (WHO) were used to classify primary and secondary infertility. When a woman has never conceived in spite of cohabitation and exposure to pregnancy for a period of twelve months, the condition was defined as primary infertility. Secondary infertility was taken to be present when the woman had previously conceived but had been unable to conceive subsequently despite cohabitation and exposure to pregnancy for a period of 24 months. If the woman had breast fed the previous infant, then exposure to pregnancy was calculated from the end of the period of lactation amenorrhoea (4). Results The ages of the women in the sample ranged from 15-48 years while the range for men was 17-53 years. The modal age group was 30-34 years for both men and women; the mean age for the women was 30.4 years (SD = 6.3 years) and for men 34.2 years (SD = 7.2 years). The majority of the married women of the reproductive age group (MWRA) (76%) and their spouses (75%) were Sinhalese, and 72% of the men and women were Buddhists. The distribution of these characteristics in the sample was very similar to that of the population of the district of Colombo recorded at the 1981 census (1). Primary infertility Of the 2000 married couples studied, 81 women were found to fit the WHO definition of primary infertility. Thus the point prevalence of primary infertility was estimated to be 40.5 per 1000 married women of the reproductive age group (95% C.L. 32-49 per 1000 married women of the reproductive age group). That is, within the Colombo district there are around 10,700-16,500 married women in the reproductive age group who are considered infertile using the above definition. The mean duration of infertility in this group was 2.2 years (S.D.
1.3 years). In 53% the duration of infertility was two to five years, and a further 12% had failed to achieve conception after being exposed to the risk of pregnancy for 5 years or more. Tables 1 and 2 show the association between selected socio-demographic characteristics and primary infertility. It is seen that the rate of primary infertility progressively increased with increasing current age and increasing age at marriage of women (Table 1) and their spouses (Table 2). The trends observed were statistically significant. There was no statistically significant difference in rates of infertility between those who reported menarche below 14 years and those who reported menarche at 14 years and over. The mean age at menarche in the sample population was 13.9 years, the same as the results of a prospective study on a large sample of girls (5). Ethnicity of the women and level of education of the women and their spouses were not related to rates of infertility. Women who were employed had higher rates of infertility compared to those who were not, the difference being statistically significant. A significant difference was not seen in respect of men's employment, although high rates were seen among transport workers, clerks and commercial workers. Secondary infertility The study population included 1907 women who had previously conceived. Of these, 320 were considered to fit the WHO definition of secondary infertility, and the point prevalence of secondary infertility was estimated to be 16 per 100 married women of the reproductive age group (95% C.L. 14.39-17.60). Tables 3 and 4 examine the relationship between selected socio-demographic characteristics and secondary infertility. Increasing current age of both partners was associated with an increase in secondary infertility, the trend observed being statistically significant. Age at marriage of the female was not seen to affect secondary infertility, although a statistically significant trend was observed with increasing age at marriage of the male partner. All women in the present study who were classified as secondarily infertile were either Sinhala or Muslim, and it was observed that the percentage affected was higher among the Muslims, being 22.3% compared to 18.6% among the Sinhala (p < 0.05). A statistically significant difference was also observed between those who reported menarche before 14 years and others, higher percentages being reported among those with early menarche. Educational level of either partner or being a working woman showed no statistically significant association with secondary infertility. Prevalence was seen to be high when the men were labourers, followed by those employed in the transport industry and in the service sector. In the present sample 11.8% of the women reporting secondary infertility had symptoms suggestive of post-partum sepsis, and in a further 8.1% an abortion was followed by failure to conceive for a period of over 24 months, i.e., in 19.9% the secondary infertility is probably a result of infection post-partum or post-abortion. Discussion The study estimated the current point prevalence of primary infertility in the Colombo District to be 4.1% (95% C.L. 3.2%-4.9%). This rate is compatible with rates reported from other Asian countries such as China (3.2%), Pakistan (4%), and Korea (2%) (6) and the estimate of 5.2% made using data from the Sri Lanka Demographic and Health Survey of 1987 (7).
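As a check on the figures quoted above, the sample-size calculation from the Methodology and the published prevalence estimates can be reproduced with a few lines of Python. This is only a sketch using standard formulas; the function names are ours, and the assumption that the published confidence limits come from a simple binomial (normal-approximation) interval is ours as well, although it does reproduce the printed values.

```python
import math

def sample_size(p, moe, z=1.96, deff=1.5, nonresponse=0.15):
    # Simple-random-sample size for a proportion, inflated by the
    # cluster design factor and the expected non-response, as
    # described in the Methodology section.
    n = z**2 * p * (1 - p) / moe**2
    return n * deff * (1 + nonresponse)

def prevalence_per_1000(cases, n, z=1.96):
    # Point prevalence with a normal-approximation 95% C.L.,
    # expressed per 1000 women.
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 1000 * p, 1000 * (p - half), 1000 * (p + half)

print(sample_size(0.03, 0.01))        # ~1928, rounded up to 2000
print(prevalence_per_1000(81, 2000))  # ~(40.5, 31.9, 49.1), cf. reported 32-49
print(prevalence_per_1000(320, 2000)) # ~(160.0, 143.9, 176.1), cf. 143.9-176.0
```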
An appreciable number of couples classified as infertile using the WHO definition of 12 months exposure to risk of pregnancy are likely to achieve a pregnancy spontaneously. On the other hand, the definition may be of practical use in clinical practice, especially in view of the observed trend of increasing age at marriage. The study estimated the current point prevalence rate of secondary infertility in the Colombo District to be 16% (95% C.L. 14.4%-17.6%). Neighbouring Asian countries report similar rates: Indonesia and Bangladesh have reported a rate of 15% and Pakistan a rate of 10%, while China has a higher rate of 21% (6). Most African countries report higher rates compared to the Asian situation, the rates varying from 25% in Tanzania to 33% in the Cameroon (6). The present study highlights the importance of post-partum and post-abortion sepsis in the aetiology of secondary infertility in our population. Increasing rates of primary and secondary infertility seen with increasing ages of the couple are well documented. The findings of the present study are compatible with references in the medical literature that female fertility begins to decline around 30 years of age (8) and that male fertility potential falls around 40 years of age (9). Caminiti (1994) identifies age as the most significant factor contributing towards infertility (10). Koetswang et al (11) reported a similar increase in secondary infertility with advancing age among Thai women of the reproductive age group. Increasing age at marriage in both males and females has been shown to play a role in fertility decline. This may partly be attributed to the fact that women marrying at age 35 years and after find difficulty in conceiving (8). The associations seen with age at marriage of the woman, current age of the man and age at marriage of the man are interrelated. In the present study the male partners of infertile couples were seen to be on average 4 years older than the female. The increase in infertility associated with increasing age at marriage of the woman, and with the current age and age at marriage of the man, is probably mediated through the mechanism of increased infertility with increasing age of the woman. In the present study the primary infertility rate was seen to increase with increasing level of education in both men and women, although the trend was not statistically significant. This effect is also probably age related, due to postponement of marriage by those who remain in the education process for a longer time. This is also likely to be related to the finding that working women are at higher risk of primary infertility. The secondary infertility rate, on the other hand, was seen to be higher among those who either had no school education or had studied only up to primary level, and among non-working women. This may be related to the quality of natal care services available to those in the lower socio-economic strata, or to practices which make them more prone to the risk of infections. Occupation of the man was not associated with secondary infertility. However, the high rates of primary infertility associated with workers in the transport industry warrant further investigation to determine if the nature of employment has any effects on spermatogenesis.
Although inter-country differences and racial differences in infertility have been documented (6), the present study did not find any statistically significant differences in respect of primary infertility. The secondary infertility rate was seen to be higher among women of the Moor community, and this may be associated with differences in rates of post-partum or post-abortion sepsis, or differences in practices associated with childbirth. Examination of the effect of age at menarche on infertility is justifiable, as menarche marks the commencement of the reproductive life of the woman. Both primary and secondary infertility were found to increase when menarche was at a younger age, although only in secondary infertility was this difference statistically significant. Perusal of the literature did not yield any references to this association. It is documented that advancing age affects the oocyte in the same manner that aging affects other tissues of the body, and this has been identified as one reason for impairment of fertility with advancing age (13). The finding that anovulation is associated with advancing age is also supportive of this theory (14). It may well be that the relationship to chronological age is actually mediated through "reproductive age", i.e., the number of years of reproductive life from menarche to current age, and this may be a plausible explanation of the association seen with early menarche. Table 1. Distribution of primary infertility by selected characteristics of women (CL = Confidence limits). Table 2. Distribution of primary infertility by selected characteristics of men. Table 3. Distribution of secondary infertility by selected characteristics of women. Table 4. Distribution of secondary infertility by selected characteristics of the spouse of secondary infertile women.
2019-01-09T14:05:41.235Z
2007-12-24T00:00:00.000
{ "year": 2007, "sha1": "30bc2206552d1926a4a4189000eca59eada8a56e", "oa_license": "CCBY", "oa_url": "http://cjms.sljol.info/articles/10.4038/cjms.v45i2.4854/galley/218/download/", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "30bc2206552d1926a4a4189000eca59eada8a56e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252638871
pes2o/s2orc
v3-fos-license
Quinoid Pigments of Sea Urchins Scaphechinus mirabilis and Strongylocentrotus intermedius: Biological Activity and Potential Applications This review presents literature data: the history of the discovery of quinoid compounds, their biosynthesis and biological activity. Special attention is paid to the description of the quinoid pigments of the sea urchins Scaphechinus mirabilis (from the family Scutellidae) and Strongylocentrotus intermedius (from the family Strongylocentrotidae). The marine environment is considered one of the most important sources of natural bioactive compounds with extremely rich biodiversity. Primary- and some secondary-mouthed animals contain very high concentrations of new biologically active substances, many of which are of significant potential interest for medical purposes. The quinone pigments are products of the secondary metabolism of marine animals; they can have complex structures and become the basis for the development of new natural products in echinoids that are modulators of chemical interactions and possible active ingredients in medicinal preparations. More than 5000 chemical compounds with high pharmacological potential have been isolated and described from marine organisms. There are three well known ways of naphthoquinone biosynthesis: polyketide, shikimate and mevalonate. The polyketide pathway is the biosynthesis pathway of various quinones. The shikimate pathway is the main pathway in the biosynthesis of naphthoquinones. It should be noted that all quinoid compounds in plants and animals can be synthesized by various ways of biosynthesis. Introduction Quinoid compounds are an important class of organic compounds that have attracted the attention of researchers for many years [1,2]. This is due to the possibility of their practical application as biologically active and medicinal substances, stabilizers in the polymer industry, dyes, reagents in organic synthesis, dehydrating agents and complexing agents. Due to the peculiarities of their living conditions, often due to the high-pressure habitat and specific food preferences, marine aquatic organisms are a new source of chemical compounds, potential medicines and cosmetics, biologically active substances (BAS) and functional foods. These compounds are theoretically interesting for their high chemical activity, the ability to form complexes with charge transfer, etc. Of particular interest are heterocyclic quinoid compounds and 1,4-naphthoquinone derivatives. This is due to the fact that many natural and synthetic condensed quinones have potential biological activity and there is a possibility of studying their redox and complexing properties. In addition, these structures are part of a number of antibiotics and alkaloids of marine organisms, and they are also successfully used in modern medicine and technology. For example, 1,4-naphthoquinone derivatives can be inhibitors of the
Secondary metabolites isolated from them often perform protective functions against threats such as predator attacks, biological fouling and microbial infections. Of particular interest to us are the sea urchins Scaphechinus mirabilis (Agassiz, 1863) and Strongylocentrotus intermedius (Agassiz, 1863), because they contain quinone pigments (echinochrome A and different spinochromes). Sea Urchins-S. mirabilis and S. intermedius Sea urchins (Echinoidea) appeared on Earth about 500 million years ago. They are divided into two types: regular ("right") and irregular ("wrong") sea urchins. Flat Sea Urchin-S. mirabilis The first flat sea urchins (sand dollars) appeared about 30 million years ago in West Africa. Then they spread all over the coast. The irregular (wrong) sea urchin S. mirabilis (Scutellidae) (Figure 1) is one of the widespread representatives of shallow-water benthos [6]. S. mirabilis lives in the Sea of Japan and forms stable settlements on the coast of the southwestern part of Peter the Great Bay at depths from 0.5 to 125.0 m, but depths of 3-6 m are preferred. This thermophilic sand dollar lives exclusively on the surface layer of sandy soil, avoiding silty soil. Sand dollars live in surf zones with sharp fluctuations in salinity. In fact, the sandy S. mirabilis is located on the surface of the bottom, but sometimes it burrows into the sand to a depth of 1-4 cm. In addition, these animals are exposed to changing environmental factors during their life cycle. They can be exposed to seawater with significantly reduced salinity at various depths in hard soils. The size of the flat shell of adult S. mirabilis reaches 50-70 mm in diameter and up to 1 cm in thickness [6]. The thick shell of this sea urchin is covered with small and dense needles of dark purple color. Its diet consists of algae and detritus [7]. Spawning of this type of sand dollar occurs in the warm summer period, from mid-July to the end of August, in the Sea of Japan. After the fertilization of the S. mirabilis egg, a symmetrical pluteus larva is formed on the third day. These larvae are carried by the current or move independently thanks to their cilia. After a few weeks, the larvae sink to the sandy bottom and turn into small round animals. Gray Sea Urchin-S. intermedius The regular (right) gray sea urchin S. intermedius (Strongylocentrotidae) (Figure 2) is common in shallow coastal zones of the southern part of the Sea of Okhotsk and the Sea of Japan. 
This species of sea urchin mainly lives on rocky areas of the bottom, but sometimes it lives on a sandy surface and in thickets of seagrasses [8][9][10][11][12][13][14][15]. The body of this sea urchin has a regular spherical shape, slightly flattened from the side of the mouth opening. The diameter of an adult sexually mature individual is from 3 to 8 cm, and the mass reaches 160-170 g. The color of the needles and the shell of S. intermedius is very diverse: red, purple, green, gray, brown. At a young age, these sea urchins feed on films of microscopic algae, and the adults feed on brown algae [8][9][10][11][12][13][14][15][16]. The spawning of this species of sea urchin occurs at different times, depending on the habitat. For example, in the Sea of Japan, the reproductive season is observed in May-June and September-October, and in the Sea of Okhotsk it lasts from June to October. After the fertilization of gray sea urchin eggs, the resulting embryo turns into a pluteus larva after 48 h. Further, from the end of July to the beginning of September, the larvae begin to settle. The process of their settlement ends by November. When settling on any substrate, the larva acquires external radial symmetry and new organs. Further, the larvae grow slowly, reaching a mass of 0.16 g and an average shell diameter of 0.65 cm by one year of life. In the period from July to September, gray sea urchins grow more intensively than in winter. At the age of three, sea urchins become sexually mature, and the size of the shell increases [8]. Naphthoquinoid Pigments of Sea Urchins Sea urchins, in particular S. mirabilis and S. intermedius, contain polyhydroxylated naphthoquinoid pigments, which are specific metabolites for this class of echinoderms [17,18]. These pigments are present in the soft and skeletal areas of sea urchins. In addition, naphthoquinones have also been found in starfish. The main ones are echinochrome A (1) and five spinochromes, A (2), B (3), C (4), D (5) and E (6) (Tables 1 and 2). Naphthoquinones of sea urchins differ from naphthoquinones of other marine animals by the presence in their structure of a large number of free hydroxyl groups and by high antioxidant properties [19,20]. Then a separate group of other naphthoquinoid pigments, spinochromes, were isolated from the ovules, internal organs, shells and needles of various species of sea urchins (Tables 1 and 2) [1,[23][24][25]. Later, the only naphthazarin isolated from the sea urchins Echinothrix diadema (Linnaeus, 1758) and E. calamaris (Pallas, 1774) was spino- 
The total extract of the gray sea urchin S. intermedius contained spinochromes A-E (2-6), binaphthoquinones (7-9) (up to 40%) (Figure 3) and an unknown pigment (10). The same qualitative composition of naphthoquinones was isolated from another species of sea urchin, Mesocentrotus nudus (Agassiz, 1863), the extract of which additionally contained other quinoid pigments: echinochrome A (1) and spinamin E (11) (Figure 3) [36]. Binaphthoquinones (7-9) were also isolated from the sea urchin S. mirabilis. The authors of the article characterized the quinoid pigment compounds isolated from the sea urchins S. intermedius and S. mirabilis of the Sea of Japan [36]. In addition, Ageenko and her colleagues developed an in vitro technology for inducing differentiation of pigment cells in culture (Table 3) [37]. The number of pigment cells was also evaluated during cultivation in different media (seawater, SW; the coelomic fluid of intact sea urchins, CFn; the coelomic fluid of wounded sea urchins, CFreg), as was the qualitative composition of naphthoquinoid compounds in them (using MALDI and mass spectrometry with electrospray ionization). Spinochromes D and E were found in the pigment cells of the sea urchin S. intermedius, and cultured pigment cells of the sand dollar S. mirabilis contained echinochrome A and spinochrome E. Table 3. Content of naphthoquinone pigments (mg/g of fresh cell biomass) in cultivated cells of the sea urchins S. intermedius and S. mirabilis in different media: SW, seawater; CFn, coelomic fluid obtained from intact sea urchins; CFreg, coelomic fluid obtained from injured sea urchins. Data are presented as the mean ± standard error from two independent experiments (ESI MS) [37]. For Scaphechinus mirabilis, spinochrome E: SW, 0.021 ± 0.003; CFn, 0.062 ± 0.007; CFreg, 0.054 ± 0.006. Polyhydroxy-1,4-naphthoquinones are characterized by a labile quinoid structure. This instability is caused by the presence of hydroxyl substituents of the naphthazarin cycle, which are subject to redox transformations. 
It is known that at physiological pH values, polyhydroxy-1,4-naphthoquinones are mono- or divalent anions, and this can affect their reactions with ions and radicals [38]. The authors of [39], using ultraviolet-visible spectroscopy and UPLC-DAD-MS, studied the stability of quinoid pigment compounds (ethylidene-6,6'-bis(2,3,7-trihydroxynaphthazarin), the spinochrome dimer and spinochrome D) isolated from the sea urchin S. droebachiensis. With a change in the acidity of ethanol solutions of the naphthoquinones and in the time of their incubation, a change in the concentration of these pigments in the solution was also observed. Thus, in order to create medicines based on naphthoquinones, conditions for stabilizing the quinoid structure are needed. This direction is very relevant in scientific research. The Main Ways of Biosynthesis of Quinoid Pigment Compounds The biosynthesis of pigments of any class proceeds only along one main pathway, with subsequent modification of the structure and the production of individual compounds. Quinoid compounds, unlike other pigments, are a biosynthetically heterogeneous group of substances. Thus, different organisms can synthesize the same quinoid pigment substance using different biosynthesis pathways [1]. Previously it was known that there were only three pathways of naphthoquinone biosynthesis: polyketide, shikimate and mevalonate (isoprenoid). Quinones are oxidized derivatives of aromatic compounds and can be synthesized in a similar way. Currently, it is known that in nature, naphthoquinoid pigments are also formed as a result of other biosynthetic pathways [41,42]. Polyketide Pathway of Quinone Biosynthesis The polyketide pathway of quinone biosynthesis is a process of gradual chain elongation, similar to the biosynthesis of fatty acids (Figure 4) [42]. At the first stage of biosynthesis, condensation of acetyl-CoA and malonyl-CoA occurs. The carbon chain is then elongated by the addition of C2 fragments of malonyl-CoA until the desired chain length is reached. Malonyl-CoA serves as an activated donor of acetyl groups. Thus, a polyketide system is formed from CO and CH2 groups, with the elimination of a water molecule and the formation of an aromatic or quinoid molecule [42,43]. This is the pathway of biosynthesis of various quinones: benzoquinones, naphthoquinones, anthraquinones and higher quinones (fungal quinones, anthracycline, pyrromycinone from nine acetate and malonate fragments) [42][43][44][45][46][47][48][49][50]. 
The Shikimate Pathway of Quinone Biosynthesis The shikimate pathway is the main one in the biosynthesis of aromatic and colored quinones (naphthoquinones) and of colorless fragments of quinoid compounds in cells (plastoquinone, phylloquinone, menaquinone and ubiquinone), with the formation of shikimic acid [42,49,50]. The mechanism of quinone biosynthesis begins with the cyclization of 7-phospho-3-deoxy-D-arabinoheptulosonic acid, with the further formation of 5-dehydroquinic acid, which then turns into shikimic acid (A), and then into 5-phosphoshikimic acid. After that, a phosphoenolpyruvate ether is attached to these acids and chorismic acid (B) is formed, which undergoes intramolecular rearrangement and turns into prephenic acid (C) (Figure 5) [42]. To date, the pathways of the biosynthesis of naphthoquinone pigments with the participation of shikimic acid have been studied in sufficient detail. The shikimate pathway of quinone biosynthesis is best studied by the example of 1,4-naphthoquinone biosynthesis [42,49,50]. The Mevalonate Pathway of Quinone Biosynthesis Substituted naphthoquinones and anthraquinones are formed from mevalonic acid and its derivatives by mevalonate biosynthesis [42,50]. A combination of two biosynthetic pathways (shikimate and mevalonate) is often used by plants (Pyroleae (Dumort, 1829)) to produce substituted naphthoquinones. The quinoid structure of the 1,4-naphthoquinone chimaphilin is derived from shikimate, while the methyl substituents and the benzoic acid atoms are derived from mevalonate (Figure 6) [42]. It should be noted that all quinoid compounds in plants and animals can be synthesized by various pathways of biosynthesis [40]. For example, mushroom quinones are mainly formed using the polyketide pathway, while higher quinones often undergo a mixed biosynthesis pathway (shikimate and mevalonate pathways together). 
The Pathways of 1,4-Naphthoquinone Biosynthesis The formation of 1,4-naphthoquinoid compounds in nature occurs through a cascade of complex reactions along several biosynthetic pathways. The biosynthesis of 1,4-naphthoquinones of plant and animal origin proceeds along the first (the acetate-polymalonate pathway) and the sixth pathway (the o-succinylbenzoate (OSB) pathway). The first six metabolic pathways provide the formation of quinoid compounds in the plant kingdom. The seventh pathway forms menaquinone in bacteria [40]. The biosynthesis of quinoid pigments (spinochromes and echinochrome A) in echinoderms, as well as of anthraquinones in crustaceans and insects, proceeds along an independent pathway, but most often along the polyketide pathway. The study of the biosynthesis of echinochrome A was first carried out in the sea urchin Arbacia pustulosa (Leske, 1778) by French scientists [67], who determined the maximum incorporation of the radioactive label [2-14C] into the cyclic part of the echinochrome A molecule and the minimum incorporation into the side chain. Thus, it was suggested that the cyclization of the polyketide chain of five acetate groups occurs first, and then a side chain is formed. In addition, sea urchins contain polyketide synthases (multi-enzyme complexes) capable of synthesizing polyketide chains from acetic acid residues. It is possible that the biosynthesis of sea urchin spinochromes occurs with the participation of their own enzyme complex, without the involvement of endosymbionts [68]. At the same time, Ageenko and colleagues [69] showed the effect of shikimic acid on the expression level of genes for the biosynthesis of quinoid compounds, polyketide synthase (pks) and sulfotransferase (sult), in embryos and larvae of the sea urchin S. intermedius at different stages of development and in some tissues of adult animals. These data may indicate that spinochrome biosynthesis in sea urchins proceeds along a combined pathway. The mechanism of biosynthesis of 1,4-naphthoquinone derivatives in animals is still poorly understood, unlike that in fungi, microorganisms and plants [1]. Biological Activity of Naphthoquinoid Pigments of Sea Urchins Sea urchins, in particular S. mirabilis and S. intermedius, contain naphthoquinoid pigments. Echinoderm pigments and related compounds form a new class of highly effective phenolic-type antioxidants that exhibit high bactericidal, algicidal, antiallergic, hypotensive and psychotropic activity. Due to its unique antioxidant properties, echinochrome A has been of great interest to scientists around the world for more than 30 years [70][71][72]. Anti-Oxidant Activity of Naphthoquinones Echinoids are unique sources of various metabolites with a wide range of biological activity [73][74][75]. The biological functions of quinoid pigments are very diverse. First, quinones are involved in electron transfer [76][77][78]. Living organisms throughout their lives produce reactive oxygen species (ROS), which regulate all processes of vital activity. It is known that reactive oxygen species, depending on conditions, can affect cell division in various ways (stimulating or suppressing it), provoke cell differentiation or apoptosis, and damage nucleic acids and proteins [79][80][81][82][83][84]. Thus, reactive oxygen species play an important role in the induction of free radical processes in the cell. Damage to DNA molecules by free radicals leads to disruption of the nuclear apparatus and the appearance of somatic mutations [85]. 
Such damage to cells under the influence of reactive oxygen species can lead to their premature death and to rapid aging of the body [86,87]. Currently, there are more than 100 diseases (atherosclerosis, diabetes, cancer, rheumatoid arthritis, hypertension, etc.) that arise as a result of the effects of free radicals on the cells of the body. It is known that the action of free radicals in the body can be inhibited by various antioxidants (endogenous ones, i.e., enzymes, and exogenous ones, i.e., phenolic compounds). Endogenous antioxidants include superoxide dismutase (SOD), catalase, and glutathione-dependent peroxidases and transferases that remove organic peroxides [77,88]. Exogenous antioxidants (simple phenols, hydroxy derivatives of aromatic compounds and naphthols) are effective interceptors of radicals [89]. Several thousand isolated phenolic compounds exhibit pronounced antioxidant properties. Such antioxidants include phenylalanine, tryptophan, vitamins E and K, and most animal and plant pigments (phenolcarboxylic acids, flavonoids and carotenoids) [90]. In addition, phenolic antioxidants (α-tocopherol, bilirubin, lycopene and carotenes) are effective inhibitors of various forms of active oxygen (the hydroxyl radical, singlet oxygen and superoxide anion radicals) [91]. Molecules of menaquinone (an electron transporter involved in anaerobic redox reactions generating ATP) and ubiquinone (an endogenous mitochondrial compound) are respiratory coenzymes that act similarly to vitamin E or vitamin K and are involved in electron transfer in microorganisms, plants and animals [92,93]. In conditions of oxidative stress, non-enzymatic low-molecular-weight antioxidants play an important role [77,92]. For example, fat-soluble α-tocopherol, located in the hydrophobic layer of biological membranes, inactivates fatty acid radicals. It is known that a large number of biologically active substances that exhibit antioxidant properties are found in the gonads, tissues of internal organs, shell and needles of sea urchins. These antioxidants can be activated by phospholipids of plasma membranes, chelate metals, intercept free radicals and inhibit lipoxygenase enzymes [93][94][95][96][97]. The most important pigment of sea urchins with high biological activity is echinochrome A. Moreover, other naphthoquinone pigments of sea urchins, spinochromes B, C, D and E, also exhibit pronounced biological activity [98,99]. The study of quinoid pigments of sea urchins is widely carried out by scientists from Japan, China, Korea, Vietnam and Russia. Different species of sea urchins contain naphthoquinoid pigments of different composition, showing different biological activity. It is known that all spinochromes possess regenerative properties. It has previously been shown that spinochrome A and echinochrome A are inhibitors of hydroxylases (dopamine-β-hydroxylase and tyrosine hydroxylase), which are targets for the treatment of hypertension. The antioxidant activity of spinochromes and echinochrome A was first studied by Russian scientists [100]. All quinoid pigments of sea urchins represent a new class of natural antioxidants. Spinochromes A-E and echinochrome A isolated from extracts of different species of sea urchins showed high antioxidant activity in models of inhibition of lipid substrate oxidation, chelation and reduction of iron ions, interception of hydrogen peroxide and the superoxide radical anion, and interaction with 2,2-diphenyl-1-picrylhydrazyl (DPPH) [19,20,[101][102][103][104]. 
Besides, many quinones are synthesized by organisms in self-defense. For example, many insects secrete toxic and aggressive simple benzoquinones. Some fungi synthesize naphthoquinones, which exhibit antibacterial and antiviral properties. The substituted 1,4-naphthoquinone marticin, synthesized by the phytopathogenic fungus Fusarium martii (Appel and Wollenw., 1910), contributes to the destruction of the host plant. Some quinones are also capable of causing various allergic reactions in humans and other mammals. Anti-Bacterial Activity of Naphthoquinones It is known that granules of the coelomic fluid (red spherocytes) of some sea urchins [105] protect against microbial infection and are important when the shell is injured [106,107]. The anti-bacterial activity of sea urchin pigments against bacteria, including Bacillus subtilis (Ehrenberg, 1835; Cohn, 1872), was studied, and echinochrome A and the spinochromes showed high anti-bacterial activity against the studied bacteria [106]. Echinochrome A and spinochromes A-E have also been investigated for anti-microbial activity against other bacteria. It was found that this anti-bacterial activity is very high against the gram-positive bacterium Staphylococcus aureus (Rosenbach, 1884) and the fungal cultures Saccharomyces carlsbergensis (E.C. Hansen, 1908), Candida utilis (Berkhout, 1923) and Trichophyton mentagrophytes ((Robin) Blanchard, 1853). With the growth of the aquaculture industry, bacterial infections are increasing, which leads to the death of millions of juvenile organisms, and this explains the increased interest in understanding how marine organisms protect themselves. Previously, the influence of marine bacteria on the embryonic development of the sea urchin S. intermedius, larval survival, the number of pigment cells and the expression level of the genes for the biosynthesis of pigment quinoid compounds, pks and sult, was studied [106]. Some of these bacteria (Pseudoalteromonas, Shewanella (MacDonell and Colwell, 1985) and Vibrio (Pacini, 1854)) lead to a slowdown in embryonic development, deformation and reduced survival of larvae. The expression of the pks genes increased significantly after incubation of embryos with all these bacteria, and the expression of the sult genes increased only after the incubation of sea urchin embryos with Pseudoalteromonas and Shewanella bacteria. In addition, the precursor of pigment biosynthesis, shikimic acid, also increased the resistance of embryos to the action of these bacteria. Thus, it was found that the specific genes pks and sult, as well as shikimic acid, are involved in the bacterial protection of sea urchins. Anti-Viral Activity of Naphthoquinones The combined use of echinochrome A with ascorbic acid and α-tocopherol showed not only higher antioxidant but also higher antiviral effects against tick-borne encephalitis virus (TBEV) and herpes simplex virus type 1 (HSV-1) than the use of echinochrome A alone [107,108]. Anti-Inflammatory Activity of Naphthoquinones In addition, the anti-inflammatory activity of spinochromes has been described in vitro [104]. Spinochromes A, B and E and a mixture of echinochrome A and spinochrome C increased the level of the proinflammatory cytokine TNF-α in J774 macrophages after stimulation with lipopolysaccharide. It should be noted that in in vivo experiments, echinochrome A also demonstrated a pronounced anti-inflammatory effect [109]. Anti-Allergic Activity of Naphthoquinones Currently, the antiallergic properties of quinoid pigment substances are still poorly studied. However, in the work [110], the potential possibility of using an extract of pigments isolated from the shell of the sea urchin S. 
droebachiensis as an antiallergic agent was shown for the first time. The authors revealed a significant inhibitory effect of the polyhydroxy-1,4-naphthoquinone extract of the sea urchin S. droebachiensis on allergic inflammation of the isolated guinea pig ileum. The use of chrysophanol-8-O-β-D-glucopyranoside, a natural anthraquinone isolated from rhizomes of Rheum undulatum (Polygonaceae) (Linnaeus, 1753), as an antihistamine component in the treatment of asthma has also been described previously [111]. A similar property is possessed by the commercial drug "Disodium Cromoglycate" [111]. Cytotoxic Activity of Naphthoquinones The cytotoxic activity of spinochromes has been investigated against human HeLa tumor cells. The results of the study showed low cytotoxicity of sea urchin naphthoquinoid pigments for these tumor cells [104]. Study of Pharmacokinetic Properties of Naphthoquinones An important stage in the study of the various properties of biologically active substances is the study of their pharmacokinetics, i.e., their chemical transformations (absorption of the BAS, their distribution, metabolism and excretion) in animals and humans. Part of the review [114] is devoted to the description of the pharmacokinetic properties of echinochrome A isolated from the sea urchin S. mirabilis. The authors of [115] investigated the absorption of the sodium salt of the naphthoquinone pigment echinochrome A (Histochrome) in rabbits after parabulbar and subconjunctival injections. The drug is quickly distributed over the eye without being absorbed into the blood. Pharmacokinetic studies in humans have been conducted for only a few compounds isolated from marine animals. The results of pharmacokinetic studies of echinochrome A are known from studies of the effect of the Histochrome drug in humans [117]. This drug was administered intravenously (1%; 100 mg) to the subjects. As a result, a high volume of distribution of the drug in plasma (5.7 L), a low clearance (0.16 L/h) and an extended half-life (T1/2) of up to 87.3 h were determined. The study of the pharmacokinetic properties of biologically active naphthoquinones and other substances isolated from marine animals helps to investigate their metabolism and to improve the bioavailability of poorly soluble compounds when creating drugs [118][119][120]. The Prevention and Treatment of Cardiovascular Diseases, Disorders of Carbohydrate and Lipid Metabolism during Aging, Ophthalmological Substances Cardiovascular diseases occupy the first place among human diseases. Currently, scientists around the world are actively developing drugs and biologically active additives for the treatment of patients with lipid metabolism disorders, which form the basis of cardiovascular pathology. Modern therapy in medicine is aimed at normalizing carbohydrate metabolism and reducing the content and the intensity of assimilation of saturated fats and cholesterol in food. An important direction is to reduce the production of free radicals, adhesion molecules and cytokines, to reduce the formation of atherosclerotic plaques, and to enhance the antioxidant activity of the blood and human immunity. Currently, the most effective drugs for the prevention and treatment of coronary heart disease (CHD) are drugs of the statin group, which have pleiotropic effects and block endogenous cholesterol synthesis by inhibiting 3-hydroxy-3-methylglutaryl-coenzyme A reductase. 
In addition, statins reduce the intensity of oxidative stress, improve the functional activity of the endothelium of blood vessels, stabilize atherosclerotic plaques and reduce platelet adhesion. Despite this, for medical reasons, not all patients can take statins. Echinochrome A, having the functions of antioxidants and of group K vitamins, can be part of medicines for the treatment of cardiovascular and oncological diseases, diabetes mellitus, liver pathology, aging, and disorders of carbohydrate and lipid metabolism [73]. Echinochrome A and other polyhydroxynaphthoquinones isolated from the flat sea urchin S. mirabilis have anti-platelet properties, reduce cholesterol levels in the blood and the formation of atherosclerotic plaques, normalize erythrocyte membranes, reduce the accumulation of toxic peroxides in the cardiac muscle tissue, and exhibit anti-viral and anti-microbial effects [103]. Based on the data obtained, new effective drugs with unique therapeutic properties, Histochrome for cardiology and Histochrome for ophthalmology, were developed [121,122]. Histochrome interacts with free radicals and active oxygen forms within the first 10-20 min, stabilizing cell membranes [101,[123][124][125][126]. The work of Shikov [30] describes the main effects of the drug Histochrome (a solution of the sodium salt of echinochrome A) in the treatment of humans. It is known that after a single intravenous injection of Histochrome (100 mg), the aggregation of erythrocytes and platelets decreases, and there is an improvement in the structure of blood cells and a stabilization of their function [127]. In addition, the use of the same concentration of Histochrome leads to inhibition of lipid peroxidation in plasma and a decrease in necrosis of heart tissues in the acute phases of myocardial infarction [117,128]. With intramuscular administration of Histochrome (2 mL of 0.02% solution) for 10 days, patients with cardiovascular pathology experience modulation of the immune system [129]. Also, the use of Histochrome (0.5 mL, 0.02%, subconjunctival or peribulbar injection) leads to the epithelialization of defects in patients with corneal dystrophy and increased visual acuity (in 60% of cases), to a decrease in the volume of edema [123], and to the complete resorption of retinal hemorrhages after traumatic intraocular hemorrhage [130,131]. Naphthoquinones Are Analogues of Medicines Echinochrome A, which has a structure similar to that of vitamins of groups K and C, exhibits vitamin properties [132]. In addition, echinochrome A exhibits a physiological effect similar to that of ascorbic acid [133], demonstrating a similar transport pathway into the cell. When echinochrome A penetrates the tissue, the oxygen content decreases and hydrogen peroxide is formed, which induces the release of the transcription factors PPARα, PPARβ and PPARγ, which reduce inflammation and regulate cellular metabolism. Use in Agriculture When freezing embryonic sea urchin cells in liquid nitrogen, the use of echinochrome A in combination with exogenous lipids ensures high cell viability (75-80%) and the ability to synthesize pigment granules and spicules after thawing [132]. Echinochrome A is also used in agriculture for cryopreservation of sperm from farm animals [132]. Biotechnological Use Cultures of embryonic cells of sea urchins represent a new model system for obtaining cells and their directed differentiation in vitro. 
At the same time, it is possible to use artificial and natural substrates, unique biologically active substances from the tissues of marine organisms and various growth factors. Currently, little is known about the growth factor genes that are expressed in the tissues of marine invertebrates. For vertebrates, the key genes regulating the state of cells and ensuring a high level of proliferation of embryonic cells in culture are mainly two genes, nanog and oct-4 [133,134]. One of these genes, SpOct, was previously found in the sea urchin S. purpuratus. In addition, the nanog gene was found in the genome of sea urchins, the expression of which was detected at the stage of the mesenchymal blastula [135]. The expression of foreign genes (the yeast gal4 gene and the plant oncogenes rol) leads to abnormal development of sea urchin embryos [136,137]. The active proliferation of embryonic sea urchin cells in culture can be induced by the introduction of plasmids containing, for example, the gal4 gene. After 2 months of cultivation, the naphthoquinoid pigment echinochrome A was detected in cells obtained from transfected embryos [137]. Food Supplements Based on Freeze-Dried Sea Urchin Caviar Currently, food supplements are an important addition to human food, supporting a healthy lifestyle. In addition, they can be used in complex therapy for the treatment and prevention of various pathological diseases. Food supplements created on the basis of sea urchins are aimed at reducing the absorption of saturated fats and cholesterol by the human body with food. Such food supplements have a targeted effect on all stages of the atherosclerotic process [38]. To study the normalization of the blood lipid profile in dyslipidemia, Kovalev and his colleagues conducted a number of experiments using BME separately and together with different doses of atorvastatin. The experiment was carried out for 28 days on non-inbred white male mice (60 individuals of the same age, body weight 18-20 g; nursery "Stolbovoe"), which received an emulsion of cholesterol in vegetable oil (through a probe, daily, at the rate of 0.4 g/kg of animal body weight) and an atherogenic diet (lard (25%) and butter (5%) of the weight of the daily diet, plus wheat porridge). The concentration of BME was 250 mg/kg (5 mg per animal). The study was carried out in accordance with all the requirements of the European Convention (European Convention for the protection of vertebrate animals used for experimental and other scientific purposes, 1986). In the blood serum of the group 2 mice, relative to the group 1 animals, there was a significant increase in the amount of total HC (cholesterol), of HC in the atherogenic classes (LDL-C, VLDL-C and TG) and of the CA. In the blood serum of the group 3 mice, there was a decrease in the levels of total cholesterol, LDL-C, VLDL-C and the CA, with a simultaneous increase in the relative content of HDL. However, there was no decrease in the amount of total HC in the blood serum of the group 4 animals, although the levels of LDL-C, VLDL-C, TG and the CA significantly decreased in comparison with the similar indicators in the group 2 mice [38]. Thus, the use of BME as a dietary supplement has a more significant effect on the body against the background of an atherogenic diet than after it ends. In addition, the dynamics of lipid metabolism indicators in patients with dyslipidemia (DLP) under the combined use of the drug atorvastatin (10 and 20 mg) with BME was studied. 
The therapeutic effect was achieved with the use of atorvastatin at a dose of 10 mg together with BME [38]. Moreover, the use of the therapeutic complex of BME with atorvastatin (10 mg) contributed to an increase in the level of HDL-C in the blood of the patients, which is necessary to reduce the formation, and to promote the stabilization, of atherosclerotic plaques [38]. Thus, the combined use of BME with atorvastatin can enhance the pharmacological capabilities of statins and reduce the mortality rate from cardiovascular diseases. The Use of Food Supplements Based on Sea Urchin Caviar in the Treatment of Women during Menopause In Russia, the biologically active supplement "Extra Youth" ("EY") has appeared relatively recently (RU.77.99.11.003.E. 0011843.02.15) [38]. The composition of this drug includes sea urchin caviar, calcium alginate and rosehip fruits as a source of vitamin C. The drug is obtained by an enzymatic hydrolysis technology. The calcium salt of alginic acid is a polysaccharide of the marine brown alga Laminaria japonica (Lamour, 1813), consisting of residues of two polyuronic acid monomers (D-mannuronic and L-guluronic) in different proportions. Alginic acid is a source of dietary fiber and calcium; it improves the digestive process and binds and removes heavy metals, radionuclides, toxins and allergens from the body. Rosehip contains a variety of vitamins and mineral salts; its fruits contain flavonoids (hyperoside, kaempferol, quercetin, quercitrin, lycopene, rutin). Vitamin C in the composition of this drug enhances the anti-inflammatory effect and shows an antioxidant effect. Moreover, thanks to its composition, this dietary supplement has a positive effect on the musculoskeletal system and is a prophylactic agent against inflammatory diseases of cartilage tissue and osteoporosis. In addition, "Extra Youth" also shows cosmetic effects and promotes the flexibility of hair and nails. Due to the properties of all the components of this drug, its study was conducted in the complex therapy of women with hormonal disorders during menopause. The design of the study was that menopausal women took the drug "EY" for 30 days. Further, to assess the degree of changes, the patients answered questions from a specially developed test system. As a result, after taking the drug "EY", the following was determined in the patients:
1. in 66%, insomnia disappeared;
2. in 64%, nervousness and feelings of depression and despondency disappeared;
3. in 75%, the feeling of constant fatigue disappeared;
4. in 65%, night sweating significantly decreased;
5. in 66%, the skin condition improved;
6. in 55%, joint pain decreased;
7. in 63%, hot flashes became significantly rarer.
At the same time, changes in clinical blood and urine tests were not detected, and clinically significant changes in the level of sex hormones in the blood were also not detected. Thus, the results of the paraclinical examination showed that when taking the drug "EY", significant positive changes occurred in the state of the women's health. Materials and Methods This review was written using various literature sources, including world databases. The literature search was conducted in Web of Science, Scopus, PubMed and the Scientific Electronic Library (the Russian database elibrary.ru). All literature sources are listed in the "References" section, including publications from 1885 to the present. Conclusions Naphthoquinoid pigments of sea urchins are a promising source for the production of drugs with various pharmacological activities. 
Echinochrome A is used for the prevention and treatment of cardiovascular diseases and of disorders of carbohydrate and lipid metabolism during aging. Echinochrome A is also used in agriculture. Sea urchin pigments could become the basis for the development of new natural drugs. There are great prospects for the future development of the naphthoquinone pigments of sea urchins into drugs with rich pharmacological activities; the limitations for further research are connected with sea urchin catch limits and with the reliance on in vitro studies.
2022-10-01T15:14:40.223Z
2022-09-28T00:00:00.000
{ "year": 2022, "sha1": "7564f1a2105cc0793102678a9d95e22efc83675f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-3397/20/10/611/pdf?version=1664370102", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "675e39bb1b52c1d0e187384a695cd07e6ae00d94", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
56134183
pes2o/s2orc
v3-fos-license
Gate voltage controlled electronic transport through a ferromagnet/normal/ferromagnet junction on the surface of a topological insulator We investigate the electronic transport properties of a ferromagnet/normal/ferromagnet junction on the surface of a topological insulator with a gate voltage exerted on the normal segment. It is found that the conductance oscillates with the width of the normal segment and the gate voltage, and the maximum of the conductance gradually decreases while the minimum of the conductance approaches zero as the width increases. The conductance can be controlled by tuning the gate voltage, as in a spin field-effect transistor. It is found that the magnetoresistance ratio can be very large, and can also be negative owing to the anomalous transport. In addition, when there exists a magnetization component in the surface plane, it is shown that only the component parallel to the junction interface has an influence on the conductance. I. INTRODUCTION Topological insulators are new quantum states discovered recently, which have a bulk band gap and gapless edge states or metallic surface states due to time-reversal symmetry and the spin-orbit coupling interaction 1 . The two-dimensional (2D) topological insulator was first predicted theoretically as a quantum spin Hall state 2,3 and then observed experimentally 4 . The topological characterization of quantum spin Hall insulators can be generalized from the 2D to the three-dimensional (3D) case, and this led to the discovery of the 3D topological insulator (TI) [5][6][7][8] . The TIs in 3D are usually classified according to the number of Dirac cones on their surfaces. Strong topological insulators, with an odd number of Dirac cones on their surfaces, are robust against time-reversal invariant disorder, while weak topological insulators are those with an even number of Dirac cones on their surfaces, which depends on the surface direction; their surface states might be destroyed even without breaking the time-reversal symmetry 5,8 . When the TIs are coated with magnetic or superconducting layers, the surface states can be gapped and many interesting properties emerge, such as the half-integer quantum Hall effect 9 , Majorana fermions 10 , etc. The topological surface states have been observed by several experimental groups by means of angle-resolved photoemission spectroscopy (ARPES) [11][12][13] and scanning tunneling microscopy (STM) 14,15 . Although the residual bulk carrier density brings much difficulty to surface-state transport experiments 16,17 , signatures of negligible bulk carriers contributing to the transport 18 and of near 100% surface transport in a topological insulator 19 have recently been found in experiments. The low-energy physics of the surface states of strong topological insulators can be described by the 2D massless Dirac theory 7 , which is different from that in graphene, where the spinors are composed of different sublattices 20 . The topological surface states show strong spin-orbit coupling, which may be applied to spin field-effect transistors in spintronics [21][22][23][24][25][26] . The electronic transport properties on a topological insulator surface with magnetization have attracted a lot of attention [27][28][29][30][31][32][33][34] . In Refs. 27 and 28 the results are given in the limit of a thin barrier (i.e., the width of the barrier L→0 and the barrier potential V0 → ∞ while V0L is constant), and the physical origin of this thin barrier is the mismatch effect and the built-in electric field of the junction interface. Refs. 
29 and 33 studied the spin valve on the surface of a topological insulator, in which the exchange fields in the two ferromagnetic leads are assumed to align along the y axis direction. Refs. 30, 31, 32 and 34 investigated the electron transport through a ferromagnetic barrier on the surface of a topological insulator. It is noted that both the electric potential barrier and the ferromagnetic barrier are the transport channels in these models. The bulk band gap of a topological insulator is usually about 20-300 meV 7,[11][12][13]18 ; in order to keep the transport at the Fermi energy inside the bulk gap, the gate voltage on the topological insulator should be finite. In this paper, we study the electronic transport through a 2D ferromagnet/normal/ferromagnet junction on the surface of a strong topological insulator, where a gate voltage is exerted on the normal segment with a finite width, and the exchange fields in the two ferromagnetic leads point mainly along the z axis direction. So far such a system has not been well studied. We find that the conductance oscillates with the width of the normal segment and the gate voltage, and the maximum of the conductance gradually decreases while the minimum of the conductance can approach zero as the width increases. These behaviors are more obvious when the gate voltage is smaller than the Fermi energy. This gate-controlled 2D topological ferromagnet/normal/ferromagnet junction shows the property of a spin field-effect transistor. The magnetoresistance (MR) can be very large and can also be negative owing to the anomalous transport. In addition, when there exists a magnetization component in the 2D plane, it is shown that only the magnetization component which is parallel to the junction interface has an influence on the conductance. This paper is organized as follows. First, we will describe the theoretical model for the electronic transport through the topological spin-valve junction. Second, we will present our numerical results and discussions. Finally, a brief summary will be given. II. THEORETICAL FORMALISM We consider a 2D ferromagnet/normal/ferromagnet junction on a strong topological insulator surface as shown in Fig. 1. The bulk ferromagnetic insulator (FI) interacts with the surface electrons in the TI by the proximity effect, and ferromagnetism is induced in the topological surface states [27][28][29][30]32,34,[38][39][40] . The interfaces between the ferromagnet (FM) and the normal segment are parallel to the y direction, and the normal segment is located between x = 0 and x = L, with a gate voltage V0 exerted on it [35][36][37] . Here we presume, for simplicity, that the distance L between the two interfaces is shorter than the mean free path as well as the spin coherence length. With this setup, the Hamiltonian for this system reads H = υF σ·p + σ·m(r) + V(r) (Eq. (1)) [27][28][29][30]32,34 , with the Pauli matrices σ = (σx, σy, σz), the in-plane electron momentum p = (px, py, 0), and the Fermi velocity υF. The piecewise magnetization m(r) is chosen to be a 3D vector pointing along an arbitrary direction in the left region, with mL = (mLx, mLy, mLz) = mL(sin θ cos β, sin θ sin β, cos θ), and fixed along the z axis perpendicular to the TI surface in the right region, with mR = (0, 0, mRz). We can use a soft magnetic insulator for the left ferromagnet, which is controlled by a weak external magnetic field, and a magnetic insulator with a very strong easy-axis anisotropy for the right ferromagnet. 
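To make the spectrum implied by Eq. (1) concrete, the short sketch below builds the 2x2 surface Hamiltonian at a given momentum and diagonalizes it numerically. The explicit matrix form is our reconstruction of the standard surface Dirac Hamiltonian used in the cited works, and all parameter values are illustrative only (units with the reduced Planck constant and υF set to 1, energies in units of EF):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def H(kx, ky, m=(0.0, 0.0, 0.0), V=0.0, vF=1.0):
    """Surface Dirac Hamiltonian H = vF sigma.p + sigma.m + V, cf. Eq. (1)."""
    mx, my, mz = m
    return vF * (kx * sx + ky * sy) + mx * sx + my * sy + mz * sz + V * s0

# Bands at ky = 0: gapless surface vs. out-of-plane magnetization mz = 0.95
for m in [(0.0, 0.0, 0.0), (0.0, 0.0, 0.95)]:
    E = [np.linalg.eigvalsh(H(kx, 0.0, m=m)) for kx in (0.0, 0.5, 1.0)]
    print(m, np.round(E, 3))
```

The eigenvalues come out as E = ±sqrt((υF kx + mx)² + (υF ky + my)² + mz²) + V: an in-plane magnetization component merely shifts the Dirac cone in momentum space, while the z component opens a gap 2|mz|, which is the role played by mLz and mRz in the junction regions below.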
The configuration between the left and right ferromagnets directly depends on the weak external magnetic field, where the interlayer (RKKY) exchange coupling between the left and right ferromagnets 41 is ignored for simplicity. In the middle segment, there is no magnetization; instead, a gate voltage V0 is exerted. Solving Eq. (1), we obtain the wave function in the left region as a superposition of the incident and reflected plane-wave spinors, where the Fermi energy lies in the upper bands of the Dirac cone and E > 0. We also define φ as the incident angle, which fixes the conserved transverse momentum ky. In the normal segment, the longitudinal momentum k′x satisfies (E − V0)² = υF²(k′x² + ky²), with the ± sign of k′x corresponding to the upper and the lower bands of the Dirac cone, respectively, and if V0 = E, 42 the propagating solutions become evanescent. The wave function in the right region is a transmitted plane-wave spinor. There exists translational invariance along the y direction, so the momentum ky is conserved in the three regions, and we omit the factor e^{iky·y} in the wave functions. These piecewise wave functions are connected by the boundary conditions of continuity at x = 0 and x = L, which determine the coefficients A, B, C, D and F in the wave functions. As a result, according to the Landauer-Büttiker formula 43 , it is straightforward to obtain the ballistic conductance G at zero temperature as an angular integral of the transmission probability, G = G0 ∫ dφ T(E, φ) cos φ, where the prefactor G0 is proportional to EF wy/υF, wy is the width of the interface along the y direction, which is much larger than L, and we take E as EF, because in our case the electron transport happens around the Fermi level. III. NUMERICAL RESULTS AND DISCUSSIONS We focus on two cases of the electronic transport controlled by a gate voltage through this 2D topological ferromagnet/normal/ferromagnet junction. One is the conductance G and the magnetoresistance when the magnetizations in the left and right FMs are collinear in the z direction, and the other is the influence of a magnetization component along the x/y direction on the conductance. A. The conductance and MR for collinear magnetization We show the normalized conductance G/G0 as a function of kFL and V0/EF for the parallel (Fig. 2(a) and (c)) and antiparallel (Fig. 2(b) and (d)) states; in Figs. 2(a) and 2(b) the gap in the left and right ferromagnet regions opened by the magnetization along the z axis is 0.95EF. The conductance oscillates with the gate voltage V0 (the parameters EFL/υF and V0/EF in Fig. 2 are dimensionless). The maximum of the conductance gradually decreases as the width increases. The minimum of the conductance can approach zero. The change of the conductance between maximum and minimum by the gate voltage is similar to that in the spin field-effect transistor, in which the conductance modulation arises from the spin precession due to the spin-orbit coupling 21 . The gate voltage can be used to change k′x, such that the phase factor k′xL of quantum interference in the normal segment can be changed. The oscillation period of the conductance with respect to V0 depends on the width L and decreases with the increase of the width L. The conductance has a period π with respect to z = V0L when V0 → ∞, L → 0, in the 2D topological ferromagnet/ferromagnet junction 27,28 . In Fig. 2(b), the conductance changes with the width L and the gate voltage V0 in the same way as in Fig. 2(a). The difference is that the conductance is maximal in Fig. 2(b) where it is minimal in Fig. 2(a), and vice versa. The conductance in Fig. 2(c) and Fig. 2(d) shows the same variation tendency with the width L and the gate voltage V0 as in Fig. 2(a) and Fig. 2(b), respectively. However, both the maximum and the minimum of the conductance in Fig. 2(c) and Fig. 2(d) are larger than those in Fig. 2(a) and Fig. 2(b), since the gap of the surface states in the left and right ferromagnet regions is 0.6EF in Fig. 2(c) and Fig. 2(d). 
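The matching calculation described above can be sketched numerically. The code below (a minimal illustration under the stated assumptions, not the paper's code) imposes continuity of the two-component spinor at x = 0 and x = L for the Hamiltonian of Eq. (1) with out-of-plane magnetizations, extracts the transmission probability from the conserved probability current, and integrates T(φ) cos φ over incident angles; the overall normalization of G0, the gap value 0.95EF and the other parameters are illustrative:

```python
import numpy as np

def spinor(kx, ky, mz, V, E, vF=1.0):
    # Eigenspinor of H = vF sigma.k + mz sigma_z + V at energy E (unnormalized)
    return np.array([vF * (kx - 1j * ky), E - V - mz], dtype=complex)

def transmission(E, phi, mzL, mzR, V0, L, vF=1.0):
    """T(E, phi) for an FM(mzL)/normal(V0)/FM(mzR) junction by spinor continuity."""
    ky = np.sqrt(E**2 - mzL**2) * np.sin(phi) / vF       # conserved momentum
    kxL = np.sqrt((E**2 - mzL**2) / vF**2 - ky**2 + 0j)
    kxR = np.sqrt((E**2 - mzR**2) / vF**2 - ky**2 + 0j)
    kxN = np.sqrt((E - V0)**2 / vF**2 - ky**2 + 0j)      # imaginary -> evanescent
    psi_in, psi_r = spinor(kxL, ky, mzL, 0, E), spinor(-kxL, ky, mzL, 0, E)
    psi_t = spinor(kxR, ky, mzR, 0, E)
    phi_p, phi_m = spinor(kxN, ky, 0, V0, E), spinor(-kxN, ky, 0, V0, E)
    # Unknowns (r, a, b, t): psi_in + r psi_r = a phi_p + b phi_m at x = 0,
    # and a phi_p e^{i kxN L} + b phi_m e^{-i kxN L} = t psi_t e^{i kxR L} at x = L.
    M = np.zeros((4, 4), dtype=complex)
    M[0:2, 0], M[0:2, 1], M[0:2, 2] = -psi_r, phi_p, phi_m
    M[2:4, 1] = phi_p * np.exp(1j * kxN * L)
    M[2:4, 2] = phi_m * np.exp(-1j * kxN * L)
    M[2:4, 3] = -psi_t * np.exp(1j * kxR * L)
    rhs = np.concatenate([psi_in, np.zeros(2)])
    r, a, b, t = np.linalg.solve(M, rhs)
    jx = lambda p: 2 * vF * np.real(np.conj(p[0]) * p[1])  # x probability current
    return jx(t * psi_t) / jx(psi_in)

# Angular integration; the 1/2 in front of the integral is an illustrative choice of G0
phis = np.linspace(-1.57, 1.57, 401)
for V0 in (0.5, 0.9):
    T = np.array([transmission(1.0, p, 0.95, 0.95, V0, L=5.0) for p in phis])
    print(f"V0/EF = {V0}: G/G0 = {0.5 * np.trapz(T * np.cos(phis), phis):.3f}")
```

Sweeping L or V0 in this sketch reproduces the qualitative behavior described above: oscillations from the interference phase k′xL, and purely evanescent (non-oscillatory) transmission when V0 approaches E. The antiparallel configuration is obtained simply by flipping the sign of mzR.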
The conductance changes more obviously with the gate voltage on the side V0/EF < 1 than on the side V0/EF > 1. In Fig. 2, both the maximum and the minimum of the conductance become smaller when the gate voltage is closer to the Fermi energy, because a growing number of the incident states are transported through the normal segment by evanescent waves (imaginary k′x). Fig. 2 shows that the conductance of this 2D topological ferromagnet/normal/ferromagnet junction can be changed in the same way as in the spin field-effect transistor. However, because of the broad angular spectrum of electrons in the surface plane and the linear dispersion relation, how to obtain a large maximum/minimum ratio of the conductance is an important question for a transistor. After obtaining the conductance GP of the parallel configuration and GAP of the antiparallel configuration, we can get the MR directly, which is defined as MR = (GP − GAP)/GP. Compared with the conductance in Fig. 2(a) and Fig. 2(c), the conductance in Fig. 2(b) and Fig. 2(d) shows the property indicated below. On the one hand, the conductance in the antiparallel configuration can be less than that in the parallel configuration, as in the conventional spin valve [22][23][24] and its counterpart in graphene 44 . On the other hand, the conductance in the antiparallel configuration can also be larger than that in the parallel configuration, which is an anomalous electronic transport property of the topological spin-valve junction. Fig. 3 shows the MR as a function of the width L. When V0/EF ≠ 1, the MR oscillates with the width L. The amplitude and the period of oscillation of the MR depend on the gate voltage V0. When V0/EF = 1, the MR does not oscillate and decreases monotonically with the increase of L, because the Fermi surface of the normal segment is at the Dirac point in this case and the corresponding density of states is zero while the conductance is not zero, which is a typical property of a Dirac fermion system 42 . The MR can be negative owing to the anomalous electronic transport 27,45 . The maximum of the MR in Fig. 3(a) is larger than that in Fig. 3(b), and it can approach 100%. The large negative MR (with magnitude greater than 10) in Fig. 3(a) also implies a large variation of the conductance between the parallel and antiparallel configurations. Next we will discuss the underlying physics quantitatively to understand the above results clearly. Since the electrons from all incident angles contribute to the conductance, which is proportional to the electron transmission probability, the physical origin of the conductance oscillating with the width L and the gate voltage V0 in Fig. 2 is a direct result of the summation of the electron transmission probability over all incident angles. Fig. 4 plots the transmission probability as a function of the incident angle φ and the width L for different gate voltages V0. We find that the transmission probability mainly oscillates with the width L. Its period of oscillation becomes large as the gate voltage increases from V0/EF = 0 to V0/EF = 1. The reason for such a change can be illustrated by Fig. 5. Because the wave functions in the left and right FMs are connected through the wave function in the normal segment, the transmission probability depends on the phase factor k′xL. Due to the conservation of the momentum ky, k′x depends on the gate voltage. When the gate voltage increases from V0/EF = 0 to 1, the Fermi surface of the normal region shrinks, as shown in Fig. 
5, and k′x reduces too, such that the transmission probability has a longer periodicity with the width L and changes considerably with the incident angle, as shown in Fig. 4(a) or 4(b) and 4(c) or 4(d). In these cases, the electronic transport through the normal segment occurs in the upper bands of the Dirac cone. Although the Fermi surface of the normal segment in Fig. 4(g) or 4(h) is equal to that in Fig. 4(c) or 4(d), their transmission probabilities are different, because in Fig. 4(g) or 4(h) the electronic transport through the normal segment occurs in the lower bands of the Dirac cone. When the gate voltage V0/EF = 1, the electronic transport through the normal segment is totally due to the evanescent waves, and the transmission probability is not a periodic function of the width L, as in Fig. 4(e) or 4(f). Now we consider the influence of the magnetization configuration on the transmission probability. It is clear that the transmission probability is an even function of the incident angle φ in the parallel configuration on the left-hand side of Fig. 4, while it is not an even function of the incident angle φ in the antiparallel configuration on the right-hand side. This is unusual, because the transmission probability is an even function of the incident angle φ in the antiparallel configuration in its counterpart in graphene 44 . This unusual property arises from the unequal spinor parts of the incident and transmitted wave functions. At normal incidence (φ = 0), the period of the transmission probability with the width L in the parallel configuration is the same as that in the antiparallel configuration, and the position of the maximum of the transmission probability is shifted by half a period between the two configurations. Now, with the help of Figs. 4 and 5, the properties of the conductance in Figs. 2(a) and 2(b) and of the MR in Fig. 3(a) can be understood explicitly. When the magnetizations in the left and right FMs are taken as 0.6EF, as in Fig. 5(b), one may see that the gaps of the surface states in the left and right ferromagnet regions decrease, and the Fermi surfaces in the left and right FMs become large. So, the range of ky expands, and those of k′x and the phase factor k′xL expand too. The transmission probability in Fig. 6 changes more dramatically than in Figs. 4(c) and 4(d), 4(g) and 4(h). Therefore, as the gap of the surface states in the left and right ferromagnet regions decreases, more incident electronic states contribute to the conductance, such that the conductance becomes larger on the whole, and more asymmetric about the gate voltage V0/EF = 1.0, as in Figs. 2(c) and 2(d). The MR in Fig. 3(b) can be understood similarly. First, a magnetization component mLy along the y direction is added in the left FM, and the resulting conductance is plotted in Fig. 7. It is seen that the conductance decreases with increasing |mLy|, so a large |mLy| can drive the conductance to zero. We also find that the influence of the magnetization mLy on the conductance is different from that of −mLy. Second, keeping the magnetizations in the left and right FMs at the same value, the direction of the magnetization in the left FM is changed in the x-z plane (β = 0) or in the y-z plane (β = π/2), where θ and β are indicated in Fig. 1. The conductance as a function of θ and the gate voltage V0 is plotted in Fig. 8, and it is different from that in the ferromagnetic/normal/ferromagnetic graphene junction 45 . The distinction between Figs. 8(a) and 8(b) is most obvious at θ = ±0.5π, where the conductance changes slightly with the gate voltage in Fig. 8(a) while the conductance changes remarkably in Fig. 8(b). 
These results arise from the different matching of the wave functions between the left and right FMs. When θ = ±0.5π, the spin in the right FM is parallel to (υF kx^R, υF ky, m)^T, 27 while the spin in the left FM is parallel to (υF kx1 ± m, υF ky1, 0)^T in Fig. 8(a), which satisfies the relation E = √[(υF kx1 ± m)² + (υF ky1)²], and parallel to (υF kx2, υF ky2 ± m, 0)^T in Fig. 8(b), which satisfies the relation E = √[(υF kx2)² + (υF ky2 ± m)²]. In both cases, the z component of the spin in the left FM is zero in Figs. 8(a) and 8(b). Because in Fig. 8(b) the Fermi surface of the left FM is shifted along the y direction by ±m, the difference in the x component of the spin between the left and right FMs in Fig. 8(a) is larger than that in Fig. 8(b). Finally, we discuss the experimental realization of our model. The bulk band gap of a topological insulator is small and material dependent; it is, for example, about 300 meV in Bi2Se3, 100 meV in Bi2Te3 7,12,13, and 22 meV in HgTe 18. Far away from the Dirac point, the surface electronic states in Bi2Te3 exhibit large deviations from the simple Dirac cone 46. The gap of the surface states can be induced by depositing a magnetic insulator (such as EuO, EuS, or MnSe) on the surface of a topological insulator. Depending on the interface match between the topological insulator and the ferromagnetic insulator, the gap is several to dozens of meV 27,[38-40]. A gate electrode can be attached to the topological insulator to control the surface potential [35-37]. The predicted properties of our model may be observed when the Fermi energy of the surface states is about 10-100 meV and the junction width is about 10-100 nm. The calculated results in this paper are based on ballistic transport; to observe the predicted properties experimentally, clean 2D topological surface states with a sufficiently long mean free path are needed. It is worth noting that surfaces of topological insulators with such long mean free paths have been realized in experiments 36.

IV. SUMMARY

In summary, we have studied the electronic transport properties of the ferromagnet/normal/ferromagnet junction on the surface of a strong topological insulator, where a gate voltage is applied to the normal segment of finite width. It is found that the conductance oscillates with the width of the normal segment and with the gate voltage. The maximum of the conductance gradually decreases as the width increases, and the minimum of the conductance approaches zero. This gate-controlled conductance behaves in the same way as in the spin field-effect transistor, but a further study is needed to increase the maximum/minimum ratio of the conductance. The magnetoresistance can be very large and can also be negative owing to the anomalous transport. In addition, when there exists a magnetization component in the 2D plane, it is shown that only the magnetization component parallel to the junction interface influences the conductance.
Comparative Assessment of the Accuracy of Cytological and Histologic Biopsies in the Diagnosis of Canine Bone Lesions

Background Osteosarcoma (OSA) should be differentiated from other less frequent primary bone neoplasms, metastatic disease, and tumor-like lesions, as treatment and prognosis can vary accordingly. Hence, a preoperative histologic diagnosis is generally preferred. This requires collection of multiple biopsies under general anesthesia, with possible complications, including pathological fractures. Fine-needle aspiration cytology would allow an earlier diagnosis with a significant reduction of discomfort and morbidity. Hypothesis/Objectives The aim of this study was to compare the accuracy of cytological and histologic biopsies in the diagnosis of canine osteodestructive lesions. Animals Sixty-eight dogs with bone lesions. Methods Retrospective study. Accuracy was assessed by comparing the former diagnosis with the final histologic diagnosis on surgical or post-mortem samples or, in the case of non-neoplastic lesions, with follow-up information. Results The study included 50 primary malignant bone tumors (40 OSAs, 5 chondrosarcomas, 2 fibrosarcomas, and 3 poorly differentiated sarcomas), 6 carcinoma metastases, and 12 non-neoplastic lesions. Accuracy was 83% for cytology (sensitivity, 83.3%; specificity, 80%) and 82.1% for histology (sensitivity, 72.2%; specificity, 100%). Tumor type was correctly identified cytologically and histologically in 50 and 55.5% of cases, respectively. Conclusions and Clinical Importance The accuracy of cytology was similar to that of histology, even in the determination of tumor type. In no case was a benign lesion diagnosed as malignant on cytology. This is the most important error to prevent, as treatment for malignant bone tumors includes aggressive surgery. Being a reliable diagnostic method, cytology should be further considered to aid decisions in the preoperative setting of canine bone lesions. The majority of destructive bone lesions in dogs are neoplastic in origin, and almost all primary bone tumors are malignant. 1 Osteosarcoma (OSA) accounts for up to 85% of primary skeletal malignancies, followed by chondrosarcoma (CSA), hemangiosarcoma (HSA), fibrosarcoma (FSA), myeloma, and lymphoma. Additionally, the skeleton can be affected by metastatic lesions. A presumptive diagnosis of bone malignancy can be based on signalment, history, physical examination, and radiographic changes, including severe osteolysis and periosteal reaction. 1 However, several benign diseases, such as osteomyelitis and traumatic or dysplastic lesions, might resemble malignant tumors both clinically and radiographically and thus need to be included in the differential diagnosis. 2,3 Although aggressive treatment is required for all primary malignant bone neoplasms, therapeutic decisions and prognosis might differ considerably according to tumor type. The median survival time for dogs with appendicular OSA treated with limb amputation and chemotherapy ranges from 8 to 18 months. [4-7] Primary HSA of bone is an equally aggressive tumor, with median survival times <1 year. 1 Conversely, CSA and FSA share a lower metastatic potential, and surgery alone might be curative. 1,8 Histologic interpretation of bone biopsies is usually recommended to obtain a preoperative diagnosis.
Nevertheless, this procedure requires general anesthesia, and complications might occur, including pathological fractures, increased pain, hematoma, and local seeding of tumor cells; 9-11 the latter might be a serious concern if a limb-sparing procedure has been planned. In addition, histologic results are not always conclusive, because biopsies might be of inadequate size or quality or not sufficiently representative. 3 Fine-needle aspiration cytology (FNAC) offers several advantages over histologic biopsy (HB), including minimal invasiveness, lower risk of complications, ease of sample collection, and rapid results. 12 However, it also has several limitations, including the inability to evaluate tissue architecture; this might prompt a generic diagnosis of malignancy without further classification of tumor type. [13-15] In addition, clinicians can be concerned about the difficulty of performing adequate cytological sampling from a bone lesion, because of the challenge of penetrating the bone cortex. To determine the reliability of FNAC as a diagnostic procedure for canine bone lesions, we reviewed our experience over the past 15 years at our institutions. The accuracy of cytology was compared with that of HB. In particular, we evaluated the ability of both methods to discriminate between benign and malignant lesions and, among the latter, to correctly identify tumor type. Additionally, we evaluated the potential effects of multiple clinicopathological variables on FNAC accuracy, to determine the most appropriate use for this procedure.

Case Selection

A retrospective study was performed on canine osteodestructive lesions diagnosed from 2000 to 2016 at the Department of Veterinary Medical Sciences (University of Bologna) and the Department of Veterinary Sciences (University of Turin). Dogs receiving a FNAC, a HB, or both for diagnostic purposes were considered for inclusion. The cases with a final diagnosis on a surgical or post-mortem histologic sample were selected for inclusion. If the first diagnosis was consistent with a benign or a non-neoplastic process, for which surgery was not indicated, the correctness of the diagnosis was assessed by evaluating the long-term outcome. Primary cutaneous or oral tumors with local bone infiltration (eg, squamous cell carcinoma, melanoma) were not included. The radiographic images were retrospectively evaluated by 2 of the authors (PB, OC) to assess the amount of bone lysis and periosteal reaction. The amount of bone lysis was estimated in proportion to the entire lesion and classified as mild (<20%), moderate (20-50%), or severe (>50%). Periosteal reaction was assessed by comparing the thickness of the reaction (Rt) with the thickness of the unaffected cortex (Ct) and classified as absent, mild (Rt < Ct), moderate (Rt = Ct), or severe (Rt > Ct). The cytological samples were collected under local anesthesia (if necessary) by fine-needle aspiration with 21-22 gauge needles and 2.5-5 mL syringes. The sampling site was selected by radiographic examination and palpation, in order to collect cells from less mineralized areas and to introduce the needle where the periosteal reaction was minimal. The collected material was then deposited and smeared on glass slides, which were allowed to air-dry, stained with May-Grünwald-Giemsa, and coverslipped. The histologic samples were collected by means of an 8-11 gauge Jamshidi needle. The biopsy site was planned based on radiographic findings. Dogs underwent general anesthesia and were surgically prepared.
After making a 2-3 mm skin incision with a scalpel, the cannula with the stylet locked in place was advanced through the soft tissue until bone was reached. Then the stylet was removed, and the bone cortex was penetrated with the cannula with the aid of rotation movements, taking care not to penetrate the cortex on the opposite side; the cannula was then withdrawn. The obtained specimens were expelled from the cannula with a probe. When feasible, the procedure was repeated through the same incision with a different redirection of the instrument. Histologic slides were obtained after formalin fixation, decalcification (if necessary), and paraffin embedding. Sections were cut at 4 µm and stained with hematoxylin and eosin.

Microscopic Evaluation

All FNAC and HB preparations were re-examined (SS, AR, SD) without knowledge of the previous and final diagnoses. Diagnosis was by consensus. The pathologists were not blinded to imaging studies and clinical findings. The evaluated cytological variables included cellularity (classified as absent, low, moderate, or abundant), blood contamination (classified as absent, low, moderate, or abundant), the prevalent cell population and its characteristics, secondary cell populations, and noncellular material. The histologic assessment of HBs (first histologic diagnosis) and of surgical or post-mortem samples (final diagnosis) was carried out according to the schemes of the World Health Organization (WHO). 16 For sarcomas, the histologic grade of malignancy was assessed according to previously published criteria. [17-19] Osteosarcomas were further divided into 2 categories on the basis of osteoid production (productive and poorly productive).

Statistical Analysis

Accuracy, sensitivity, specificity, and positive and negative predictive values of FNAC and HB were assessed by comparing the first diagnosis (cytological or histologic) with the final histologic diagnosis on surgical or post-mortem samples, or with the clinical outcome if surgery was not performed. In particular, we evaluated the diagnostic accuracy of both methods in correctly identifying malignant neoplastic lesions and, within these, in diagnosing the specific tumor type. The results were presented in a confusion matrix. The confidence intervals of sensitivity and specificity were analyzed by the receiver operating characteristic (ROC) method. The effects of lesion site, tumor diameter, bone lysis, periosteal reaction, cellularity, blood contamination, tumor grade, and osteoid production on the possibility of obtaining a correct cytological diagnosis were further evaluated with Fisher's exact test. Data were analyzed with SPSS statistical software. P values < .05 were considered significant.

Statement of Animal Care

This study is a retrospective investigation carried out on archived tissue samples from canine bone lesions. As the research did not influence any therapeutic decision, approval by an Ethics Committee was not required. All the examined samples were collected for diagnostic purposes as part of routine standard care. Owners gave informed consent to the use of clinical data and stored biological samples for teaching and research purposes.

Results

In the remaining 10 cases, surgical or post-mortem samples were not available, but a malignant process was excluded based on the evidence of no clinical progression and long-term survival with no surgery or chemotherapy (median, 4 years; range, 1-6 years). Two of these cases (20%) were sampled by FNAC, 6 (60%) by HB, and 2 (20%) by both methods.
Among FNACs, 1 case was nondiagnostic, 2 were diagnosed as osteomyelitis, and 1 as reactive bone. Among HBs, 1 case was diagnosed as osteochondroma, 1 as osteomyelitis, 4 as reactive bone, and 2 as normal bone. The proportion of correct diagnoses did not differ significantly according to tumor location (appendicular or axial), tumor diameter (≤ or > the median value), the amount of osteolysis or periosteal reaction, or tumor grade. Considering the characteristics of the smear, poor cellularity was significantly associated with a lower accuracy (P < .001). When OSAs were divided according to the production of osteoid matrix evaluated on surgical or post-mortem samples, poorly productive tumors were correctly diagnosed in 35.7% of cases, whereas productive OSAs were recognized in 52.4% of cases; however, this difference was not statistically significant (Table 2).

Concurrent FNAC and HB Diagnoses

Thirteen cases (3 benign and 10 malignant lesions) were sampled with both FNAC and HB. Concordance between the 2 methods was observed in 9 cases (69.2%): 2 benign and 7 malignant lesions. In all 13 cases, at least 1 of the 2 techniques provided the correct diagnosis.

Discussion

The aim of this study was to compare the diagnostic accuracy of bone cytology with that of HB, using the definitive histologic diagnosis on surgical or post-mortem samples (or the long-term outcome) as the reference standard. Understanding the diagnostic performance of cytology and histopathology for specific tissues and lesions can help in choosing between FNAC and biopsy in a given clinical situation. 12 For bone lesions, the accuracy of the preoperative diagnosis is particularly important. Thus, they require a diagnostic procedure allowing not only the differentiation between benign and malignant processes, but also between OSA, other primary bone tumors with a less aggressive biologic behavior, and metastatic lesions. 3,19 Histologic examination is presently considered the gold-standard method. The assessment of tissue architecture and of the relationships with surrounding tissues should allow a better identification of tumor type. Potential limitations of incisional biopsies are the small sample size, with limited tissue available for examination, and a high frequency of crush artifacts or morphologic artifacts caused by tissue decalcification. Additionally, the sampled material might not be representative of the primary pathologic process, because of necrotic areas, hematic lacunae, or reactive bone. The advantages of cytology over histology include, in addition to a lower morbidity, the possibility of sampling at multiple points, which increases the likelihood of collecting neoplastic cells, and the possibility of examining the obtained preparations immediately and repeating the sampling as needed. Moreover, the morphology of the collected material is usually good, as it does not require decalcification. The intrinsic limitation of cytological diagnosis resides in the impossibility of appraising tissue architecture. Thus, bone cytology might yield a generic diagnosis of sarcoma and not allow a further classification of tumor type. Another possible limitation is the fact that certain bone lesions exfoliate with difficulty, thus providing preparations with poor cellularity. 13,14 In this study, the accuracy of cytology (83%) in discriminating between benign and malignant lesions was similar to that of histologic biopsies (82.1%). With both techniques, in no case was a benign lesion diagnosed as malignant, although the specificity of cytology was decreased because of 1 nondiagnostic case.
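For reference, the figures of merit used throughout this study follow directly from a 2 × 2 confusion matrix (malignant = positive class). The short Python sketch below makes the definitions explicit; the counts are hypothetical placeholders chosen only so that the printed values are of the same order as those reported here, and they are not the study's actual case tabulation.

```python
# Diagnostic metrics from a 2x2 confusion matrix (malignant = positive class).
# The counts below are HYPOTHETICAL, chosen for illustration only.
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, PPV, and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),              # malignant correctly identified
        "specificity": tn / (tn + fp),              # benign correctly identified
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "PPV":         tp / (tp + fp),              # value of a 'malignant' call
        "NPV":         tn / (tn + fn),              # value of a 'benign' call
    }

# e.g., 30 true positives, 6 false negatives, 8 true negatives, and 2 cases
# counted against the test (such as nondiagnostic samples), all hypothetical:
for name, value in diagnostic_metrics(tp=30, fp=2, fn=6, tn=8).items():
    print(f"{name:>11s}: {value:.1%}")
```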
In comparison, sensitivity was lower, with 16.7% of malignant lesions not identified cytologically and 27.8% not identified histologically. The accuracy of bone cytology presented here is comparable to the data obtained in previous studies. In human medicine, cytology shows a sensitivity and specificity of 86 and 94.7%, respectively, and an accuracy of 83% in identifying a primary malignant bone tumor. 20,21 Similar studies in veterinary medicine report accuracies between 69 and 97% in differentiating benign and malignant lesions. 11,15,22,23 Several elements partially limit the interpretation and comparison of the results with these studies. In some of them, the histologic diagnosis compared with the cytological diagnosis was obtained indifferently from surgical/post-mortem samples or from small incisional biopsies. This might affect the reliability of the results, since, as we observed in this study, the preoperative biopsy does not always correspond to the definitive histologic diagnosis. Additionally, most authors report a generic diagnosis of cancer, but it is not clearly specified whether the tumor type was identified. Finally, some authors have elected to exclude nondiagnostic cases from the assessment of accuracy, whereas others have included them. In this study, we considered it appropriate to retain nondiagnostic samples in the data analysis, because the possibility of obtaining adequate cytological preparations from a bone lesion was among the hypotheses of the study and, indeed, sample inadequacy plays an important role in limiting the diagnostic accuracy of cytology. According to these results, the percentage of cases in which tumor type was correctly identified was limited and, quite surprisingly, similar between histology (55.5%) and cytology (50%). With both methods, most CSAs and all epithelial metastatic lesions were correctly identified. Conversely, more than half of the OSA cases were generically diagnosed as "sarcomas", both cytologically and histologically. A generic preoperative diagnosis of malignant primary bone tumor might not affect the type of surgical approach; however, it might limit the possibility of formulating a prognosis and impair clinical decisions. Both the histologic and cytological diagnoses of OSA are based on the detection of mesenchymal cells with malignant features in combination with osteoid. The amount of osteoid seemed to affect the likelihood of identifying OSAs on cytology, but not systematically. The finding of osteoid is reliable proof of the origin of the neoplasm, but it is not always possible, even in the case of productive tumors, presumably because of the large variability among different areas of the same tumor. Indeed, OSAs can present a very heterogeneous histologic appearance, resulting in areas with variable differentiation, which might resemble other mesenchymal tumors (CSA, FSA, HSA). Additionally, in some cases it might be difficult to distinguish osteoid from a fibrous or chondroid matrix. 3,24 Additional staining methods can be applied to allow the differentiation of OSA from other mesenchymal tumors, that is, cytochemical staining for alkaline phosphatase or immunohistochemical staining for specific bone matrix proteins such as osteonectin and osteocalcin. The main limitation of these methods is that reactive osteoblasts stain positive as well, so criteria of malignancy must also be assessed. 15,[25-27] Overall, the concordance of cytology and histology with the final diagnosis was not completely satisfactory.
Depending on the employed technique, this can be attributed to different causes. Four of the 5 histologic diagnostic errors were due to a diagnosis of reactive bone tissue instead of a neoplastic process. In these cases, the pathologist correctly identified the process occurring in the observed preparations; however, the correct diagnosis was not reached because the sampling missed the neoplastic lesion. Indeed, the tissue surrounding a bone lesion is frequently involved in severe reactive processes, which might mislead the diagnostic judgment in the case of superficial sampling. 2,3,28 As previously demonstrated, sampling the central areas provides the greatest accuracy in the case of destructive bone lesions. 28 Cytological mistakes accounted for 17% of the total and were in all cases due to inconclusive diagnoses caused by hypocellular aspirates. In these cases, an immediate evaluation of the cellularity of the samples, either by macroscopic examination or by rapid stains, could have helped to recognize the inadequacy of the aspirates and highlight the opportunity to obtain more samples at different sites. It has been reported that the cellularity of samples can affect not only the adequacy of preparations but also the level of accuracy in diagnostic cases. 23 However, the judgment of adequacy is subjective and often influenced by the experience of the pathologist and by the availability of clinical and radiographic data supporting the diagnostic evidence. Notably, over 90% of the diagnostic errors in this study, both histologic and cytological, were attributable to sampling rather than interpretation. This demonstrates that a correct sampling technique, an adequate number of samples, and the choice of sampling sites are at least as relevant as the pathologist's experience. Most importantly, in the cases where both cytological and histologic samples were available, at least 1 of the 2 methods allowed the correct diagnosis to be reached, suggesting that the greatest chance of success can be obtained by combining these techniques. There are several limitations to the interpretation of these data. Because this was a retrospective study that required cytology and histopathology, it was biased toward neoplastic lesions, because they are more likely to have biopsy or surgery performed. Consequently, we had a proportionally lower number of cases with a diagnosis of inflammation or non-neoplastic proliferation. These included 10 cases with no final confirmation on surgical or post-mortem samples, in which the lack of malignancy was only hypothesized based on long-term survival and no progression of clinical and radiographic signs. Nevertheless, it must be stated that follow-up alone cannot completely rule out a malignant process. Finally, the number of cases in which cytology and HB were both performed was limited, thereby reducing the possibility of comparing the utility of these techniques on the same lesions.
Chronic Obstructive Pulmonary Diseases: Journal of the COPD Foundation ®

Current-Smoking-Related COPD or COPD With Acute Exacerbation is Associated With Poorer Survival Following Oral Cavity Squamous Cell Carcinoma Surgery

Background: The survival effect of smoking-related chronic obstructive pulmonary disease (COPD) and COPD with acute exacerbation (COPDAE) before surgery on patients with oral cavity squamous cell carcinoma (OCSCC) is unclear. Methods: Using the Taiwan Cancer Registry Database, we enrolled patients with OCSCC (pathologic stages I-IVB) receiving surgery. The Cox proportional hazards model was used to analyze all-cause mortality. We categorized the patients into 2 groups by using propensity score matching based on the pre-existing COPD status (≤1 year before surgery) to compare overall survival outcomes: Group 1 (never smokers without COPD) and Group 2 (current smokers with COPD). Results: In multivariate Cox regression analyses, the adjusted hazard ratio (aHR; 95% confidence interval [CI]) of all-cause mortality in Group 2 compared with Group 1 was 1.07 (1.02-1.16, P=0.041). The aHR (95% CI) of all-cause mortality for ≥1 hospitalizations for COPDAE within 1 year before surgery for patients with OCSCC was 1.31 (1.02-1.64; P=0.011) compared with no COPDAE in patients with OCSCC receiving surgery. Among patients with OCSCC undergoing curative surgery, current smokers with smoking-related COPD demonstrated poorer survival outcomes than did nonsmokers without COPD, for both OCSCC death and all-cause mortality. Hospitalization for COPDAE within 1 year before surgery was an independent risk factor for overall survival in these patients.

Smoking is the principal risk factor of OCSCC 25 and chronic obstructive pulmonary disease (COPD). [26-28] Current smoking status is also the primary risk factor for COPD with acute exacerbation (COPDAE). 29,30 Thus, pre-existing COPD and current smoking status are highly prevalent in patients with OCSCC. 25,31 Smoking [32-39] and major comorbidities such as COPD [20-24] are independently associated with poorer survival in patients with cancer, as well as with greater resistance to cancer treatments such as radiotherapy or CCRT. Surgical complications and the perioperative risk of morbidity and mortality also increase in patients with cancer because of current smoking status or COPD. [20-24],40 Hospitalization of patients with COPDAE occurs in the severe stages of COPD (similar to the Global initiative for chronic Obstructive Lung Disease [GOLD] 30 classification groups 3-4), 41 which might represent the severity of current-smoking-related COPD and could be used straightforwardly as a predictor of overall survival before surgery in patients with OCSCC. However, although surgery is generally recommended as the initial therapy for early or locally advanced OCSCC, 10,14-19 the preoperative risk factors of mortality, including current smoking status and smoking-related COPD, remain unclear in patients with OCSCC. Sufficient prognostic factors of overall survival before surgery are lacking. Consequently, establishing the prognostic factors before surgery in patients with OCSCC is crucial and might support preventive medicine in the future. Preclinical and clinical studies have indicated that current-smoking-related COPD or COPDAE might be significant prognostic factors. 33,40,[42-47]
Nevertheless, no clinical data from a parallel comparative study exist for never-smoking non-COPD patients, current-smoking COPD patients, and patients hospitalized for COPDAE before surgery for OCSCC. Therefore, we conducted a parallel propensity score matching (PSM) study to estimate the influence of COPD on overall survival for patients with current-smoking-related COPD and never-smoking non-COPD patients with OCSCC who underwent surgery.

Patients and Methods

Study Population

We enrolled patients from the Taiwan Cancer Registry Database with a diagnosis of OCSCC between January 1, 2009, and December 31, 2017. The index date was the date of surgery, and the follow-up duration was from the index date to December 31, 2019. The Taiwan Cancer Registry Database contains detailed cancer-related data of patients, including the clinical stage, cigarette smoking habit, treatment modalities, pathologic data, and grade of differentiation. 10,[48-51] Our protocols were reviewed and approved by the institutional review board of Tzu-Chi Medical Foundation (IRB109-015-B).

Inclusion and Exclusion Criteria

The diagnoses of the enrolled patients were confirmed after reviewing their pathological data, and the patients with newly diagnosed OCSCC were confirmed to have no other cancers or distant metastases. All patients with OCSCC received curative-intent surgery, including tumor resection, neck lymph node dissection, or both. Adjuvant treatments such as adjuvant CCRT or adjuvant radiotherapy were guided and performed with adherence to the National Comprehensive Cancer Network (NCCN) guidelines 19 depending on risk features such as a positive margin, pathological tumor (pT) stage, pathological nodal (pN) stage, extranodal extension, lymphovascular invasion, or perineural invasion. 19,52 The chemotherapy regimens administered concurrently with radiotherapy in our study were cisplatin-based regimens. 52 The patients were included if they received an OCSCC diagnosis and curative-intent surgery, were ≥20 years old, and had a diagnosis of pathologic stage I-IVB OCSCC without metastasis according to the American Joint Committee on Cancer (AJCC) criteria. 53 Patients were excluded if they had a history of other cancers before the index date, an unknown pathologic stage, missing sex data, a missing smoking record, unclear differentiation of tumor grade, or a non-squamous-cell-carcinoma pathologic type. Patients who had received an adjuvant radiotherapy dose <60 Gy were excluded because this dose is considered insufficient for OCSCC according to the NCCN guidelines. 19 All of the radiotherapy techniques were intensity-modulated radiation therapy in the enrolled patients receiving adjuvant radiotherapy. We categorized the enrolled patients into 2 groups on the basis of their current smoking and COPD status to compare all-cause mortality: Group 1 (OCSCC, never smokers without COPD) and Group 2 (OCSCC, current smokers with smoking-related COPD). We also estimated the survival outcome (SO) associated with the severity of smoking-related COPD (frequency of hospitalization for COPDAE, with 0 and ≥1 hospitalizations within 1 year before curative-intent surgery) in patients with pathologic stage I-IVB OCSCC. The incidence of comorbidities was scored using the Charlson Comorbidity Index. 9,54
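The hospitalization-based severity stratum defined above (0 versus ≥1 COPDAE admissions within 1 year before the index date) reduces to a windowed count over admission records. The following pandas sketch illustrates one way to derive the flag; the table layout, column names, and dates are hypothetical.

```python
# Flag patients with >= 1 hospitalization for COPDAE within 1 year before the
# index date (date of surgery). All column names and dates are hypothetical.
import pandas as pd

patients = pd.DataFrame({
    "pid": [1, 2, 3],
    "index_date": pd.to_datetime(["2015-03-01", "2015-06-10", "2016-01-20"]),
})
admissions = pd.DataFrame({
    "pid": [1, 1, 3],
    "admit_date": pd.to_datetime(["2014-07-15", "2015-02-01", "2013-05-05"]),
    "dx": ["COPDAE", "COPDAE", "COPDAE"],
})

m = admissions.merge(patients, on="pid")
in_window = (m["admit_date"] < m["index_date"]) & \
            (m["admit_date"] >= m["index_date"] - pd.DateOffset(years=1))
counts = m.loc[in_window & (m["dx"] == "COPDAE")].groupby("pid").size()

patients["copdae_hosp_1y"] = patients["pid"].map(counts).fillna(0).astype(int)
patients["copdae_stratum"] = (patients["copdae_hosp_1y"] >= 1).map({True: ">=1", False: "0"})
print(patients)   # patient 1 lands in the '>=1 hospitalization' stratum
```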
Some specific comorbidities associated with COPD death (cardiovascular diseases, hyperlipidemia, hypertension, diabetes, and chronic kidney disease [CKD]) were excluded from the Charlson Comorbidity Index scores to prevent repetitive adjustment in the multivariate analysis. Only comorbidities or COPD diagnoses within 12 months before the index date were included; they were coded and classified according to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) codes at the first admission or after >2 appearances of a diagnosis code at outpatient visits.

Propensity Score Matching and Covariates

To reduce the effects of potential confounders when all-cause mortality was compared between Groups 1 and 2, we performed 3:1 PSM with a caliper of 0.2 for the following variables: age, sex, diabetes, hyperlipidemia, CKD, hypertension, cardiovascular diseases, Charlson Comorbidity Index score, grade of differentiation, pT, pN, extranodal extension, lymphovascular invasion, perineural invasion, margin status, adjuvant CCRT, and adjuvant radiotherapy alone. A Cox proportional hazards model was used to regress all-cause mortality on the various COPD statuses, with a robust sandwich estimator used to account for clustering within matched sets. 55 Multivariate Cox regression analyses were performed to calculate HRs to determine whether the factors of distinct COPD status or frequency of hospitalization for COPDAE within 1 year before the index date were potential independent predictors of all-cause mortality. Age, sex, diabetes, hyperlipidemia, CKD, hypertension, cardiovascular diseases, Charlson Comorbidity Index score, grade of differentiation, pT, pN, extranodal extension, lymphovascular invasion, perineural invasion, margin status, adjuvant CCRT, and adjuvant radiotherapy alone are potential prognostic factors of all-cause death for patients with OCSCC receiving curative surgery; these categories also might be independent prognostic factors of all-cause death with residual imbalance. 56,57 Potential confounding factors of OCSCC death or COPD death were controlled for in the PSM (Table 1), and all-cause mortality was the primary endpoint in both groups. COPD death and OCSCC death were also estimated according to the Cause of Death database (Table 1). After well-matched PSM, the actual real-world data would indicate the survival impact of COPD and COPDAE within 1 year before OCSCC surgery on all-cause death, COPD death, and OCSCC death for patients with OCSCC.

Statistics

After adjustment for confounders, all of the analyses were performed using SAS version 9.3 software (SAS Institute, Cary, North Carolina). In a 2-tailed Wald test, P<0.05 was considered significant. Overall survival was estimated using the Kaplan-Meier method, and differences among the patient categories (non-COPD, COPD, and hospitalization for COPDAE) were determined using the stratified log-rank test to compare survival curves (stratified according to matched sets). 58

Results

Propensity Score Matching and Study Cohort

The PSM yielded a final cohort of 1208 patients with pathologic stage I-IVB OCSCC undergoing curative-intent surgery (906 and 302 in Groups 1 and 2, respectively) eligible for further analysis; their characteristics are presented in Table 1.
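As an illustration of the matching-plus-regression pipeline just described, the following Python sketch builds propensity scores with a logistic model, performs 3:1 nearest-neighbour matching without replacement using a caliper of 0.2 (interpreted here, as is common, as 0.2 standard deviations of the logit of the propensity score; the paper does not state the scale), and fits a Cox proportional hazards model on the matched cohort. The study itself used SAS with a robust sandwich estimator clustered on matched sets; the data frame below is synthetic and the covariate list abbreviated.

```python
# Sketch of 3:1 propensity score matching (caliper 0.2 on the logit scale,
# an assumed convention) followed by a Cox fit; synthetic data, short covariates.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({c: rng.integers(0, 2, n)
                   for c in ["sex", "diabetes", "copd_smoker", "death"]})
df["age"] = rng.integers(40, 80, n)
df["cci"] = rng.integers(0, 5, n)          # Charlson score (synthetic)
df["time"] = rng.exponential(36.0, n)      # follow-up, months (synthetic)
covs = ["age", "sex", "diabetes", "cci"]

def psm_3to1(df, treat, covariates, ratio=3, caliper=0.2):
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat])
    p = model.predict_proba(df[covariates])[:, 1]
    df = df.assign(_logit=np.log(p / (1.0 - p)))
    cal = caliper * df["_logit"].std()
    exposed = df[df[treat] == 1]
    pool = df[df[treat] == 0].copy()
    keep = []
    for _, row in exposed.iterrows():      # greedy nearest-neighbour matching
        d = (pool["_logit"] - row["_logit"]).abs()
        near = d[d <= cal].nsmallest(ratio).index
        keep.extend(near)
        pool = pool.drop(near)             # match without replacement
    return pd.concat([exposed, df.loc[keep]])

matched = psm_3to1(df, "copd_smoker", covs)
cph = CoxPHFitter().fit(matched[covs + ["copd_smoker", "time", "death"]],
                        duration_col="time", event_col="death")
cph.print_summary()   # the 'copd_smoker' row is the adjusted HR of interest
```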
Age, sex, diabetes, hyperlipidemia, CKD, hypertension, cardiovascular diseases, Charlson Comorbidity Index score, grade of differentiation, pT, pN, extranodal extension, lymphovascular invasion, perineural invasion, margin status, adjuvant CCRT, and adjuvant radiotherapy alone were similar between the 2 groups because of the PSM design (Table 1).

All-Cause Mortality, COPD Death, and Oral Cavity Squamous Cell Carcinoma Death

After well-matched PSM, the COPD death rate was higher in the current-smoking with COPD group than in the never-smoking without COPD group (P<0.001); all-cause death and OCSCC death were also significantly higher in the current-smoking with COPD group than in the never-smoking without COPD group (P<0.001; Table 1). Multivariate Cox regression analysis indicated that COPD with ≥1 hospitalizations for COPDAE within 1 year before surgery in patients with OCSCC was associated with poor overall survival (Table 2).

Kaplan-Meier Overall Survival Among Non-COPD, COPD, and Hospitalization for Patients With COPD With Acute Exacerbation

The Kaplan-Meier overall survival curves for the 2 groups are illustrated in Figure 1. The overall survival of the current-smoking-related COPD group was significantly inferior to that of the never-smoking without COPD group (P=0.039). The overall survival of patients with ≥1 hospitalization for COPDAE within 1 year before surgery for OCSCC was significantly inferior to that of patients with 0 hospitalizations for COPDAE (P<0.001; Figure 2).

Discussion

Smoking-Related COPD and Oral Cavity Squamous Cell Carcinoma

Smoking is increasingly being established as a causal factor in the development of squamous cell tumors at several sites in the head and neck, including OCSCC. 25 Evidence is accruing that smoking may also be causally related to a range of adverse outcomes in patients with cancer, including higher all-cause and cancer-specific death and an increased risk of second primary cancers, 32 although not specifically in OCSCC or for a specific treatment such as surgery. In total, >20 studies have reported associations between smoking and perioperative complications after extirpative or reconstructive surgery. 40 Studies have indicated that smoking during treatment is associated with greater resistance to radiotherapy 33 and that a history of smoking is associated with nonresponse to platinum-based chemotherapy. 34 Several studies of various head and neck sites have reported associations of pre-diagnosis smoking or a history of smoking with poorer survival. [35-39] However, these findings are not universal; several studies have reported no association between smoking and patient outcomes. 40,59-61 These inconsistent data might be attributed to the analyses of different cancer sites in the head and neck, smoking-related comorbidities, and surgery or non-surgery in these studies. [35-40],[59-61] Moreover, COPD has been associated with poor survival in lung and extrapulmonary cancer treatments. [20-23] Patients with cancer and COPD have poorer survival than those without COPD [20-24] because COPD increases C-reactive protein levels, a biomarker of systemic inflammation that is associated with an increased risk of cancer mortality, including that for extrapulmonary cancers. 24 Similarly, in the largest meta-analysis of its type, Danesh et al indicated that plasma fibrinogen, another nonspecific marker of systemic inflammation, is associated with both pulmonary and extrapulmonary cancers in smokers and never smokers. 47
Therefore, a reasonable assumption is that current-smoking-related COPD and COPD severity, such as hospitalization for COPDAE, before surgery might be associated with poorer survival in patients undergoing curative surgery for OCSCC compared with patients who never smoked and do not have COPD. To elucidate the survival impact of smoking-related COPD (COPD is a highly common smoking-related comorbidity 62) on patients with OCSCC receiving surgery, we conducted the parallel PSM analysis.

Smoking-Related COPD and Surgery for Oral Cavity Squamous Cell Carcinoma

OCSCC differs from other HNSCC sites because surgery is the first choice for OCSCC as part of primary therapy according to NCCN guidelines, 10,[14-19] unlike oropharyngeal, laryngeal, or hypopharyngeal squamous cell carcinoma, which can be treated with radiotherapy or definitive CCRT as the first-line therapy. Thus, surgery is a crucial curative treatment for OCSCC. 10,[14-19] However, Linda et al revealed that the relationship between smoking and overall survival was stronger among those who underwent cancer-directed surgery than among those who did not; in their treatment-combination analysis, the hazards for current versus never smokers were most substantial in the groups who had surgery, either alone or with radiotherapy or chemotherapy. 63 Numerous epidemiologic studies have indicated that smoking is overwhelmingly the foremost risk factor for COPD and COPDAE. [26-28] Additionally, a high prevalence of COPDAE requiring hospital admission was noted in patients who continue to smoke. 29 Diagnosis of COPD and COPDAE before cancer is an independent prognostic factor of overall survival for breast cancer and lung cancer. 22,64 COPD is a common comorbidity in patients with lung and head and neck cancer. 31 Although patients with lung cancer who also have COPD have a poorer prognosis than do patients with lung cancer and no COPD, 22 no report exists on the influence of smoking-related COPD or COPDAE on the overall survival of patients with OCSCC receiving surgery. No solid evidence is available to clarify the importance of preventing COPD progression to COPDAE for patients with OCSCC receiving curative-intent surgery. Our study is the first to use current-smoking-related COPD or COPDAE within 1 year before surgery for patients with OCSCC as a straightforward prognostic factor of overall survival. Our findings may serve as a reference for shared decision-making by physicians and patients with OCSCC in the future and for establishing health policy aimed at preventing COPD progression to COPDAE before surgery in patients with OCSCC.

COPD Death, Oral Cavity Squamous Cell Carcinoma Death, and All-Cause Death

After the application of PSM, we observed not only a significant difference in COPD death between the 2 groups but also significantly higher OCSCC death and all-cause death in the current-smoking-related COPD group compared with the never-smoking non-COPD group with OCSCC receiving surgery (Table 1). These findings suggest that smoking-related COPD is a predominant factor in overall survival for patients with OCSCC and contributes not only to COPD death but also to OCSCC death and all-cause death. The higher OCSCC death and all-cause death observed in the current study are consistent with the findings of preclinical and clinical studies reporting that cigarette smoking causes resistance to cisplatin, [42-46] surgical complications, 40 or a lower response to radiotherapy. 33
Thus, COPD and COPDAE within 1 year before surgery for patients with OCSCC are likely independent prognostic factors of overall survival for patients with OCSCC.

Potential Confounding Factors in Propensity Score Matching

According to NCCN guidelines and relevant evidence, 19,52 the prognostic factors of overall survival in patients with OCSCC are age, sex, Charlson Comorbidity Index score, pathologic T stage, pathologic N stage, differentiation tumor grade, lymphovascular invasion, perineural invasion, extranodal extension, positive margin, adjuvant radiotherapy, and adjuvant CCRT. All of these confounding factors were matched and are listed in Table 1. We also matched the possible confounding factors of COPD death, 65 such as diabetes, hyperlipidemia, hypertension, cardiovascular diseases, and CKD, in our PSM design. The possible confounding factors were thus considered in our PSM analysis. No selection bias was noted for therapeutic choice between the 2 groups because pathologic stages, pathologic risk features, and adjuvant treatments were matched in the study. Therefore, COPD and hospitalization for COPDAE are independent prognostic factors of overall survival in patients with OCSCC receiving curative surgery (Table 2, Figures 1 and 2).

Clinical Practice and Value

All potential confounding factors were matched, with no statistically significant residual imbalance in the covariates (Table 2). 56,57 The independent prognostic factors of overall survival were pre-existing COPD and COPDAE within 1 year before surgery for patients with OCSCC (Table 2, Figures 1 and 2). This finding may serve as a reference for shared decision-making between physicians and patients regarding the selection of surgery or other treatments for OCSCC, especially in patients with OCSCC with COPDAE within 1 year before surgery. Moreover, pre-existing COPD and COPDAE before surgery for patients with OCSCC could be considered in future clinical trials to correct for these confounding factors. Finally, prevention of pre-existing COPD progression to COPDAE is paramount for patients with OCSCC receiving surgery as curative-intent treatment.

Strength

The strength of our study is the fact that it is the first and largest cohort study to estimate the SO of current smokers with smoking-related COPD compared with nonsmokers without COPD in patients with OCSCC receiving curative-intent surgery based on NCCN guidelines. 19 The use of PSM resulted in consistent covariates between the 2 groups, and no selection bias for therapeutic choice existed between the 2 groups. No other study has estimated the influence of pre-existing COPD and hospitalization for COPDAE within 1 year before surgery in patients with OCSCC undergoing surgery; moreover, we controlled for most confounding factors. Our findings may serve as a reference for shared decision-making by physicians and patients who select surgery for treating OCSCC with COPD or COPDAE in the future. Preventing COPD from progressing to COPDAE is crucial to improving overall survival in patients with OCSCC receiving curative surgery (Table 2 and Figure 2).

Conclusions

Among patients with OCSCC undergoing curative surgery, current smokers with smoking-related COPD had poorer SO than nonsmokers without COPD, regardless of whether the outcome was OCSCC death or all-cause mortality. Hospitalization for COPDAE within 1 year before surgery was found to be an independent risk factor for overall survival in these patients with OCSCC.
Prevention of COPD progression to COPDAE is likely to be associated with an increased overall survival in patients with OCSCC receiving curative surgery.

Data availability: The data sets supporting the study conclusions are included in this manuscript and its supplementary files.

Declaration of Interest

The authors have no conflicts of interest to declare.

This study has several limitations. First, the diagnoses of COPD, COPDAE, and the comorbid conditions were based on ICD-10-CM codes. The Taiwan Cancer Registry administration randomly reviews charts and interviews patients to confirm the accuracy of the diagnoses, and hospitals with outlier charges or practices are audited and severely penalized if malpractice or discrepancies are identified. Second, to obtain crucial information on population specificity and disease occurrence, a large-scale randomized trial comparing carefully selected patients undergoing suitable treatments would nevertheless be essential. Third, selection bias and residual or unmeasured confounding factors are likely, as they are in all retrospective studies. Despite these limitations, a major strength of this study is the use of a nationwide, population-based registry with detailed baseline and treatment information. Lifelong follow-up was possible because of linkage with the National Cause of Death database. Considering the magnitude and statistical significance of the observed effects in the current study, these limitations are unlikely to affect the conclusions.
Recovery of Cerium Dioxide from Spent Glass-Polishing Slurry and Its Utilization as a Reactive Sorbent for Fast Degradation of Toxic Organophosphates

The recovery of cerium (and possibly other rare earth elements) from spent glass-polishing slurries is rather difficult because of the high resistance of polishing-grade cerium oxide toward common digestion agents. It was shown that cerium may be extracted from the spent polishing slurries by leaching with strong mineral acids in the presence of reducing agents; the solution may be used directly for the preparation of a ceria-based reactive sorbent. A mixture of concentrated nitric acid and hydrogen peroxide was effective in the digestion of partially dewatered glass-polishing slurry. After the removal of undissolved particles, cerous carbonate was precipitated by gaseous NH3 and CO2. Cerium oxide was prepared by thermal decomposition of the carbonate precursor in an open crucible and tested as a reactive sorbent for the degradation of highly toxic organophosphate compounds. The samples annealed at the optimal temperature of approximately 400 °C exhibited a good degradation efficiency toward the organophosphate pesticide fenchlorphos and the nerve agents soman and VX. The extraction/precipitation procedure recovers approximately 70% of the cerium oxide from the spent polishing slurry. The presence of minor amounts of lanthanum does not disturb the degradation efficiency.

Introduction

Because of its superior glass-polishing ability, cerium oxide is consumed in great quantities in the manufacturing of precise optics and other branches of the glass industry [1]. Typically, the polishing agents are used in the form of an aqueous suspension (i.e., slurry). As a result of mechanical crushing and chemical reactions, which both participate in the glass-polishing process [1,2], the polishing agents gradually lose their efficiency and must be replaced, whereas the spent polishing sludge is discarded without further exploitation. To some extent, the lifetime of a polishing slurry may be prolonged by removing small glass particles and other undesirable admixtures by flotation [3]. However, complete refining and recovery of pure cerium compounds require more sophisticated chemical procedures consisting of dissolving cerium dioxide (and possibly also other rare earth elements present in the polishing agents) and subsequently separating the cerium. Unfortunately, cerium oxide is known to be poorly soluble in most common chemical agents, including strong mineral acids. This is especially true when cerium oxide is prepared by annealing at a high temperature, which is required for its use as a polishing powder. Kim et al. [3] suggested a relatively complex procedure to obtain pure cerium from polishing slurries; specifically, the procedure consists of roasting the dried sludge at 600 °C, leaching it with concentrated sulfuric acid, and separating the material by selective precipitation.
We previously developed several procedures for the recovery of rare earth elements (REEs) from spent polishing sludges [4-6]. The digestion of cerium oxide could be performed effectively in a mixture of a strong mineral acid and a reducing agent, such as hydrochloric acid with potassium iodide [4]. Real waste sludge from optical glass polishing was treated successfully with nitric acid and hydrogen peroxide (acting here as the reducing agent) [5]. Cerium was then precipitated from the solution as cerous carbonate and converted into cerium oxide by annealing in an open crucible or a rotary kiln [6]. In our previous work [7], we demonstrated that cerium oxide prepared by thermal decomposition of cerous carbonate may serve as an effective reactive sorbent capable of destroying highly toxic organophosphate compounds within minutes. In the present work, we demonstrate that an effective ceria-based reactive sorbent may be prepared not only from pure cerium salts but also from glass-polishing waste sludge. A facile and easily scalable preparation route consisted of acid digestion of the partially dewatered polishing sludge in the presence of hydrogen peroxide, removal of the undissolved portion of the sludge by sedimentation/filtration, precipitation of cerium carbonate with a gaseous mixture [8] of CO2 and NH3, and calcination of the dried cerium carbonate in a muffle furnace to obtain the respective oxide. A series of the reactive sorbents was prepared from the carbonate precursor by calcination at various temperatures ranging from 200 to 900 °C, and their degradation efficiencies were tested on the organophosphate pesticide fenchlorphos. Furthermore, the sorbent was demonstrated to be effective in the destruction of the nerve agents soman and VX.

Materials and Chemicals

Waste polishing slurry was collected at the glass-polishing factory Dioptra, Turnov, Czech Republic, where high-grade ceria-based polishing powders of the Cerox type (Rhodia, La Rochelle, France) are used in the form of an aqueous slurry for the precise polishing of various optical pieces. The waste slurry was collected in several portions over a six-month period, stored in PE containers, and partially dewatered by sedimentation before further treatment (the water content was then approximately 50%). An elemental composition of the dewatered sludge is given in Table 1. Cerous nitrate, Ce(NO3)3·6H2O, was obtained from Sigma-Aldrich (Steinheim, Germany) as a reagent-grade product with a purity of 99.9% (trace metal basis). Technical gases NH3 and CO2 were obtained from Linde Gas (Ústí nad Labem, Czech Republic). Fenchlorphos and 2,4,5-trichlorophenol (2,4,5-TCP) were obtained from Sigma-Aldrich as analytical standards.

Cerium Recovery and Preparation of the Reactive Sorbent
Partially dewatered waste sludge was treated with concentrated nitric acid and hydrogen peroxide. Both chemicals were used in 20% excess over stoichiometry (with respect to the total REE content), and the hydrogen peroxide was added in several subportions (a back-of-envelope dosing estimate is sketched at the end of this section). The mixture was heated and extensively stirred at a temperature of 65-70 °C for 3 h and then left to cool until the next day. The undissolved portion of the sludge was removed by sedimentation and filtration, and the diluted leachate was treated with a gaseous mixture of NH3 and CO2. The pH value rose suddenly to 9.5 when the precipitation of the REE carbonates was complete. The introduction of NH3 was then stopped, and the mixture was saturated with CO2 for 1 h. The mixture was then left until the following day. The finely crystalline precipitate was separated by filtration, washed with water, and dried at 110 °C. A series of cerium oxides was prepared by annealing at different temperatures ranging from 200 to 900 °C for 2 h in a muffle furnace; the samples were denoted D-200 to D-900. For comparison, sample B-400 was prepared from a 0.2 mol/L cerous nitrate solution using a similar procedure; in this case, the calcination temperature was 400 °C.

Methods of Characterization

A Tescan Vega LSU scanning electron microscope (SEM) was used to examine the morphology of the sorbents. X-ray diffraction (XRD) measurements were performed on an MPD 1880 diffractometer (Philips). The specific surface area of the sorbents was measured by the BET method (N2 adsorption) with a Sorptomatic 1900 Carlo Erba instrument. Thermogravimetric analysis of the cerium carbonate precursor was performed using a Netzsch STA 409 instrument.

Chemical Analyses and Data Evaluation

Liquid chromatographic determinations of fenchlorphos and its degradation products were performed using a LaChrom HPLC system (Merck/Hitachi) consisting of an L-7100 pump, an L-7450A diode array detector, a Rheodyne 7725i injection valve with a 20 µL sampling loop, and a C18 analytical column (Tessek, Prague, Czech Republic), 150 × 4.6 mm, with 5 µm packing material. The mobile phase was methanol (HPLC grade, Labscan, Dublin, Ireland)/0.1% H3PO4 (w/w), 80/20 (v/v), at a flow rate of 0.3 mL/min. A Varian GC 3800 gas chromatograph coupled with an ion-trap mass spectrometer (Varian 4000) and a fused-silica capillary column (VF-5; 20 m × 0.25 mm ID × 0.25 µm), all from Varian (Varian Inc., Palo Alto, USA), was used to confirm the identity of the target analytes. An Agilent 6890 gas chromatograph with an HP-5 column (5% phenyl methyl siloxane, 30 m × 0.32 mm ID × 0.25 µm film thickness) was used to follow the degradation of the nerve agents VX and soman. The elemental composition of the solid part of the sludge was determined by X-ray fluorescence analysis (Philips PW 1660); the total REE content in solution was determined by complexometric titration. The DataGraph 3.2 (Visual Data Tools, USA) and OriginPro 9.1 (OriginLab, USA) software packages were used for calculations and data evaluations.
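Returning to the digestion step described at the beginning of this section, the reagent demand implied by the 20% excess over stoichiometry can be estimated once a net reaction is assumed. The sketch below uses 2 CeO2 + 6 HNO3 + H2O2 → 2 Ce(NO3)3 + O2 + 4 H2O, which is our assumed net equation for the reductive dissolution (the paper does not write one out); the 1 kg CeO2 basis is illustrative.

```python
# Back-of-envelope reagent dosing for the digestion step, assuming the net
# reaction 2 CeO2 + 6 HNO3 + H2O2 -> 2 Ce(NO3)3 + O2 + 4 H2O (our assumption;
# the text specifies only a 20% excess over stoichiometry).
M_CEO2, M_HNO3, M_H2O2 = 172.12, 63.01, 34.01    # molar masses, g/mol

def reagent_demand(m_ceo2_g, excess=0.20):
    n_ce = m_ceo2_g / M_CEO2                     # mol CeO2 to be dissolved
    n_hno3 = 3.0 * n_ce * (1.0 + excess)         # 3 mol HNO3 per mol CeO2
    n_h2o2 = 0.5 * n_ce * (1.0 + excess)         # 1/2 mol H2O2 per mol CeO2
    return n_hno3 * M_HNO3, n_h2o2 * M_H2O2      # grams of pure reagent

hno3_g, h2o2_g = reagent_demand(1000.0)          # per kg of CeO2 in the sludge
print(f"per kg CeO2: {hno3_g / 1e3:.2f} kg HNO3 and "
      f"{h2o2_g / 1e3:.3f} kg H2O2 (100% basis, 20% excess)")
```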
Pesticide Degradation on the Reactive Sorbents

The testing procedure was derived from those used for the examination of chemical-warfare-agent degradation [9]. Briefly, constant amounts (50 mg) of the sorbent were weighed into a series of glass vials (Supelco, 4 mL) and wetted with 400 µL of acetonitrile. After rigorous shaking, the wetted sorbent was left to stand for 5 min. Then, an exact volume (100 µL) of the pesticide solution (10 000 mg/L) in acetonitrile was added to each vial (corresponding to a dosage of 1 mg of pesticide per 50 mg of the sorbent). The vials were sealed with caps and covered with aluminium foil to protect the reaction mixture from sunlight. At predetermined time intervals (0.5, 8, 16, 32, 64, 96, and 128 min), the reaction was terminated by the addition of methanol (4 mL), and the sorbent was separated immediately by centrifugation (4,000 rpm for 7 min). The supernatant was decanted and transferred into a 50 mL volumetric flask, and the sorbent was redispersed in 4 mL of methanol and centrifuged again. The extraction of the sorbent with methanol was repeated three times. All of the supernatants were combined in one volumetric flask, made up to the mark with methanol, and analysed immediately by high-performance liquid chromatography (HPLC) and gas chromatography with mass-spectrometric detection (GC-MS). All degradation experiments were performed at a laboratory temperature of 22 ± 1 °C in an air-conditioned box. In each series of measurements, several types of quality-control experiments were performed: procedural blank experiments with the sorbent and solvents, without the presence of pesticide, and 2-3 experiments with various concentrations of pesticide in working solutions without the presence of sorbent. The recovery of 2,4,5-trichlorophenol as the main degradation product of fenchlorphos was examined using a spiked sample. Under the given HPLC conditions, the limits of detection were 0.06 and 0.04 mg/L for fenchlorphos and 2,4,5-trichlorophenol, respectively. The standard deviations of repeatability were 0.019 and 0.013 mg/L for fenchlorphos and 2,4,5-trichlorophenol, respectively, at a concentration of 1 mg/L. The relative standard deviations (RSD) of the entire degradation test were estimated from a series of duplicate measurements (n = 9; degradation time: 128 min) and expressed in terms of fenchlorphos disappearance (RSD = 12.6%) and 2,4,5-trichlorophenol production (RSD = 14.7%). The degradation of the nerve agents soman and VX was examined using a similar procedure; gas chromatography was used to follow the decrease of the nerve agent concentration.
Results and Discussion

Waste polishing sludges mainly contain fine glass particles and spent polishing agents consisting of CeO2 and minor amounts of other rare earth elements. The presence of other admixtures, such as Zn (Table 1), may not be excluded. During the thermal treatment with nitric acid and hydrogen peroxide, the glass particles remained undissolved, whereas the rare earth elements were dissolved (cerium is simultaneously reduced to its trivalent state). The properties of the polishing sludge varied with time and were dependent on the type of polishing agent used, the composition (concentration) of the polishing slurry, the type of glass parts polished, the residence time of the polishing agent in the polishing machine, and other operational parameters. During the six-month period, the total content of rare earth elements in the waste polishing slurry varied between approximately 8% and 15% (dry mass basis). These values differ markedly from those determined several years ago. Our work in the 1980s [5] utilized waste sludge from the same source containing as much as 60% of CeO2 in dry mass (compare with the data in Table 1). This sludge was digested with nitric acid and hydrogen peroxide in the same manner as described in this paper, but no (external) heating was used because the reaction enthalpy was sufficient to reach the desired reaction temperature. In the present work, external heating was necessary to maintain the temperature in the range of 65-70 °C. The yield of cerium during the acid digestion was approximately 70%. In the following steps (precipitation, drying, and calcination) the losses were negligible. The cerous carbonate was precipitated from the diluted leachate with a gaseous mixture of NH3 and CO2, as described in the experimental section. This procedure exhibits a sufficient selectivity for the group of REEs, as is evident from a comparison of the elemental composition of the waste sludge and the recovered cerium oxide (Table 1). Cerous carbonate was recovered from the solution as a fine microcrystalline precipitate that was easily separable by sedimentation or filtration. The product obtained by drying at 110 °C was a white powder without an exact chemical structure; XRD patterns showed that amorphous phases predominate, but minor amounts of poorly crystalline hydrated basic and oxocarbonates were also identified [8], for example, Ce2O(CO3)2·H2O. As seen from the SEM image in Figure 1(a), cerous carbonate consists of thin microplates (with a thickness on the order of nanometers) assembled together to form relatively large aggregates (a slate-like structure). During calcination, the microplates are broken down into smaller plates, but the product retains the overall morphology of the carbonate precursor (Figure 1(b)). The thermal decomposition of the carbonate precursor is a complex process involving not only decarbonization and dehydration but also oxidation of Ce(III) to Ce(IV). These reactions may occur simultaneously in a wide temperature range, but most of them are completed in the range of 200-300 °C.
A thermogravimetric analysis showed that carbon dioxide was released as a single peak in the temperature range of 230-300 °C, whereas water was released gradually up to approximately 800 °C (Figure 2). We observed a continuously growing weight loss with increasing calcination temperature (from 21.5% at 200 °C to 24.9% at 800 °C) during calcination of the carbonate precursor under static conditions in an open crucible. It is assumed that the conversion of carbonate to cerium oxide occurred at temperatures below 300 °C, but complete dehydration requires temperatures above 800 °C [10]. Strongly bound water molecules and residual -OH groups at the surface of the annealed cerium oxide play a significant role in the degradation of organophosphate compounds, as will be discussed later.

The XRD analysis identified a single crystalline phase in the calcination products obtained in the temperature range of 200-900 °C: cerium dioxide with its characteristic face-centred cubic fluorite-type structure. With increasing calcination temperature, the diffraction peaks narrowed, suggesting that the crystallites grew and acquired a more ordered structure (see Figure 3). Crystallite sizes increased from approximately 15 to >200 nm with increasing calcination temperature. Simultaneously, the specific surface area decreased from approximately 150 to 5 m2/g; characteristic sigmoidal dependencies were published in our previous paper [7]. Despite the presence of some other elements (i.e., La, minor amounts of Ca, and Si; see Table 1), the XRD patterns of the recovered samples confirmed the presence of a single crystalline phase corresponding to the structure of pure cerium dioxide. The absence of additional peaks indicated the formation of a single phase of the Ce(1-x)Ln(x)O(2-δ) type (Ln = lanthanide) [11].

Degradation of Fenchlorphos on Recovered Cerium Oxide. Several solvents were tested as media for the degradation of fenchlorphos, ranging from nonpolar solvents, such as heptane or nonane, to polar aprotic solvents, such as acetone. Based on our previous investigations [7], we chose acetonitrile as the reaction medium because of its miscibility with water, which may be useful in some environmental applications. Using the procedure described in the experimental section, the fenchlorphos degradation was examined for the various samples of the recovered cerium oxides prepared at different calcination temperatures. The kinetic dependencies for the fenchlorphos removal were fitted to a single-exponential-decay equation (a simplified form of the equation used previously [12] to describe the degradation kinetics of chemical warfare agents):

c = c_inf + (c_0 - c_inf) exp(-k t),

where c is the fenchlorphos concentration at time t, c_0 is the initial concentration of fenchlorphos, and c_inf is the residual concentration of fenchlorphos at the end of the reaction (i.e., at equilibrium); k is the pseudo-first-order rate constant, which may be related to all processes contributing to the pesticide disappearance, for example, (physical) adsorption and chemical destruction. Similarly, the formation of 2,4,5-TCP was described by the equation

p = p_inf [1 - exp(-k' t)],

where p and p_inf are the concentrations of 2,4,5-TCP at time t and at equilibrium, respectively, and k' is the pseudo-first-order rate constant for the formation of 2,4,5-TCP. Model parameters obtained by nonlinear regression are listed in Tables 2 and 3.
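For readers wishing to reproduce such fits, the sketch below shows one way the pseudo-first-order decay model can be regressed against measured concentrations. It is a minimal illustration only: the data points and starting guesses are hypothetical, and SciPy's curve_fit is assumed to be available (the paper itself used DataGraph and OriginPro).

import numpy as np
from scipy.optimize import curve_fit

# Pesticide disappearance: c(t) = c_inf + (c0 - c_inf) * exp(-k t)
def decay(t, c0, c_inf, k):
    return c_inf + (c0 - c_inf) * np.exp(-k * t)

# 2,4,5-TCP formation: p(t) = p_inf * (1 - exp(-k' t))
def growth(t, p_inf, k):
    return p_inf * (1.0 - np.exp(-k * t))

# Hypothetical concentrations (mg/L) at the sampling times used in the tests
t = np.array([0.5, 8.0, 16.0, 32.0, 64.0, 96.0, 128.0])   # min
c = np.array([180.0, 95.0, 55.0, 25.0, 12.0, 9.0, 8.0])   # fenchlorphos

popt, _ = curve_fit(decay, t, c, p0=[200.0, 5.0, 0.05])
c0_fit, cinf_fit, k_fit = popt
print(f"k = {k_fit:.3f} 1/min, half-time = {np.log(2) / k_fit:.1f} min")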
Although both processes (i.e., pesticide degradation and 2,4,5-TCP formation) were treated independently, they are closely related. The mass balance in Figure 4 illustrates that the amount of the disappeared pesticide is nearly equal to the amount of 2,4,5-TCP created. The main mechanism responsible for the pesticide degradation is a chemical reaction, namely hydrolysis, giving rise to the formation of 2,4,5-TCP, whereas other (side) reactions and physical adsorption (without a subsequent chemical reaction) contribute to the pesticide removal process to a lesser degree. Mechanisms for the degradation of organophosphate compounds in heterogeneous systems remain somewhat unclear, especially at an atomistic level. Kuo et al. [13] recently demonstrated that an SN2 nucleophilic substitution is the main mechanism responsible for the degradation of sarin, which was used as a model organophosphate compound; the degradation may be accelerated in the presence of hydrophilic surfaces by lowering the reaction barrier. However, in their computational study, they used an idealized glass surface as the heterogeneous part of the system, which is unlikely to be an accurate representation of metal oxide-based sorbents. We hypothesized that the (residual) -OH groups or adsorbed H2O on the sorbent surface participate in the SN2 nucleophilic substitution, starting with the cleavage of the P-O-aryl bond in the pesticide molecule, and thus accelerate the transformation of the organophosphate pesticide to the respective phenolic compound. It is generally believed that coordinatively unsaturated sites at the edge surfaces, vacancies, and defects in the crystal structure serve as active sites when ceria is used as the catalyst; their number and activity may be increased by doping with other metal cations [14][15][16]. Therefore, the presence of lanthanum may enhance the activity of ceria-based reactive sorbents. In this context, trivalent cerium behaves as a "foreign" cation, similar to other lanthanide cations, and contributes to the formation of active sites. This effect likely predominates over the doping effect of foreign cations. X-ray photoelectron spectroscopy (XPS) confirmed the presence of certain amounts of Ce3+ in both kinds (pure and recovered) of cerium oxide; the Ce3+:Ce4+ ratio ranged from ca. 20:80 to 25:75 in the pure cerium oxide, but it could not be quantified in the recovered cerium oxide. It should be noted that as much as 40% of Ce3+ was found in some types of biologically active cerium oxide nanoparticles [17]. The creation of active sites on the cerium oxide surface and their role in the degradation of fenchlorphos are shown schematically in Figure 5.

The above considerations regarding the pesticide removal mass balance hold true for the recovered cerium oxides of series D but to a lesser degree for the pure CeO2 (B-400) (see Figure 4(i)). The fenchlorphos elimination proceeded rapidly (with a half-time of approximately 3 min) and nearly completely on this sorbent, but the amount of the produced 2,4,5-TCP was lower than the amount of fenchlorphos that disappeared. A certain portion of fenchlorphos likely remained bonded irreversibly to the sorbent surface under the given conditions, without subsequent chemical destruction, as no other product of any potential side reaction was identified in the reaction mixture by GC-MS. This phenomenon was not observed for the degradation of other organophosphate pesticides on similar types of CeO2-based reactive sorbents [7].
The kinetic dependencies and the data in Tables 2 and 3 demonstrate that the best degradation efficiency toward fenchlorphos was achieved by the recovered cerium oxides annealed at relatively low temperatures, below 500 °C, whereas the efficiency of the samples annealed at 600-900 °C was rather poor. The same trend was observed for the cerium oxides prepared from pure cerium nitrate. Samples D-400 and B-400, prepared by annealing at 400 °C, were tested for their ability to degrade the nerve agents VX (O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate) and soman (GD, O-pinacolyl methylphosphonofluoridate), which belong to the most dangerous class of chemical warfare agents. As shown in Figure 6, both sorbents are highly effective at degrading the organophosphorus nerve agents, being capable of destroying the VX agent almost completely within several hours (a substantial degree of conversion was achieved within the first 30 min). Good degradation efficiency was also observed for soman.

As can be seen from Figures 4 and 6, the degradation efficiency of the recovered cerium oxide towards toxic organophosphates was somewhat lower in comparison with that of the pure cerium oxide prepared in a similar way, probably because of the presence of impurities originating from the polishing slurry. The presence of other lanthanides (3.75% La2O3) probably does not disturb the degradation process, as they are incorporated into the crystalline structure of cerium oxide (a negligible effect of La, Nd, and Pr on the degradation efficiency was confirmed in our previous work [7]). It is therefore hypothesized that non-lanthanide elements, although present in minor amounts (Ca, Si), have an adverse effect on the degradation efficiency.

Conclusions

Cerium was extracted from waste glass-polishing sludge by leaching with concentrated nitric acid in the presence of hydrogen peroxide and subsequently precipitated as cerous carbonate by gaseous NH3 and CO2. Cerium oxide was prepared by thermal decomposition of the carbonate precursor in an open crucible and tested as a reactive sorbent for the degradation of highly toxic organophosphate compounds. The samples annealed at the optimal temperature of approximately 400 °C exhibited a good degradation efficiency toward the organophosphate pesticide fenchlorphos and the nerve agents soman and VX.

Figure 3: XRD plots of the cerium carbonate precursor and cerium oxides annealed at different temperatures.

Figure 6: Time dependencies for the degradation of the nerve agents VX and soman using recovered (solid symbols, bold lines) and pure (open symbols, dashed lines) cerium oxide.

Table 1: Elemental composition of waste polishing sludge and recovered CeO2 (expressed as the respective oxides).

Table 2: Parameters of the pseudo-first-order kinetic model for the degradation of fenchlorphos.
An Ecoregion-Based Approach to Protecting Half the Terrestrial Realm

Abstract We assess progress toward the protection of 50% of the terrestrial biosphere to address the species-extinction crisis and conserve a global ecological heritage for future generations. Using a map of Earth's 846 terrestrial ecoregions, we show that 98 ecoregions (12%) exceed Half Protected; 313 ecoregions (37%) fall short of Half Protected but have sufficient unaltered habitat remaining to reach the target; and 207 ecoregions (24%) are in peril, where an average of only 4% of natural habitat remains. We propose a Global Deal for Nature, a companion to the Paris Climate Deal, to promote increased habitat protection and restoration, national- and ecoregion-scale conservation strategies, and the empowerment of indigenous peoples to protect their sovereign lands. The goal of such an accord would be to protect half the terrestrial realm by 2050 to halt the extinction crisis while sustaining human livelihoods.

Protected areas are the cornerstone of biodiversity conservation (Coetzee et al. 2014, Wuerthner et al. 2015). Where networks of protected areas are large, connected, well managed, and distributed across diverse habitats, they sustain populations of threatened and functionally important species and ecosystems more effectively than other land uses (Noss and Cooperrider 1994, Gray et al. 2016). Protected areas also play an important role in climate-change mitigation (Baker et al. 2015, Melillo et al. 2015). Recognizing the importance of protected areas for conserving nature and its services, the Convention on Biological Diversity (CBD) established a goal to protect 17% of terrestrial land and inland water areas by 2020 through Aichi target 11. To date, approximately 15% of global land is protected (UNEP-WCMC and IUCN 2016). Aichi target 11 is achievable but insufficient. Seventeen percent is not a science-based level of protection that will achieve representation of all species or ecosystems in protected areas and the conservation of global biodiversity, as required by the CBD (Noss et al. 2012, Wilson 2016). In contrast, reviews of conservation plans by Pressey and colleagues (2003) and Noss and colleagues (2012) demonstrated the scientific basis for a 50% protection target to achieve comprehensive biodiversity conservation. Authors of ecoregion-scale conservation plans from a variety of habitats who empirically evaluated what is required to represent and protect habitat and ecosystems (including marine) have agreed on the need to conserve about half of a given region (Noss and Cooperrider 1994, Pressey et al. 2003, Noss et al. 2012, O'Leary et al. 2016). More recently, the scientific basis for protecting half the terrestrial realm was strengthened by Wilson's (2016) analysis of extinction in relation to the area of natural habitat loss, of greatest concern in habitats rich in endemic species. Even before these biodiversity-based analyses of the land area required for conservation, Odum and Odum (1972) pointed to the need to conserve half of the land to maintain ecosystem function for the benefit of humans. On the question of how much to conserve, a species-conservation approach derived the same answer as an ecosystem-services paradigm, a striking example of convergence. Therefore, the aspirational goal of 50% protected has emerged, and the science has been codified in several advocacy and policy papers under the name Nature Needs Half (NNH; e.g., Locke 2013).
Nature Needs Half addresses the spatial dimensions of conservation biology, which comprise four goals: (1) represent all native ecosystem types and successional stages across their natural range of variation, (2) maintain viable populations of all native species in natural patterns of abundance and distribution, (3) maintain ecological and evolutionary processes, and (4) address environmental change to maintain the evolutionary potential of lineages (Noss and Cooperrider 1994). Here, we evaluate progress toward Nature Needs Half within the framework of ecoregions, protected areas, and habitats. We answer two basic questions that must be addressed: (1) Is the aspirational goal of protecting half of nature in the terrestrial realm possible? (2) Which half should be protected, and how much of it has already been conserved? To address these questions and enhance systematic planning for terrestrial biodiversity conservation, we revised the 2001 map of terrestrial ecoregions of the world (supplemental appendix S1; Olson et al. 2001). We then determined the extent of both protected areas and remaining natural habitat within each ecoregion. To designate the protected area network, we used the World Database of Protected Areas (UNEP-WCMC 2016), which is inclusive of International Union for Conservation of Nature (IUCN) categories I to VI (Dudley 2008), as well as many community conservancies, aboriginal ownership, and private lands without an IUCN category. To assess habitat, we used tree-cover maps in forested ecoregions (Hansen et al. 2013) and excluded globally significant patterns of human land use and populations (anthropogenic biomes, or "Anthromes") in nonforested ecoregions (Ellis et al. 2010; detailed methods in supplemental appendix S2). We conducted this analysis across all 846 terrestrial ecoregions distributed among the Earth's 14 terrestrial biomes (supplemental appendix S1). We then sorted ecoregions into four categories defined by the extent of both remaining natural habitat and protected land (a simple decision rule, illustrated in the sketch after this list): (1) Half Protected: More than 50% of the total ecoregion area is protected. (2) Nature Could Reach Half: Less than 50% of the total ecoregion area is protected, but the sum of the total ecoregion protected and unprotected natural habitat remaining is more than 50%. Ecoregions in this category have enough remaining natural habitat to reach Half Protected if additional protected areas or other types of conservation areas are added to the system. (3) Nature Could Recover: The sum of the amount of natural habitat remaining and the amount of the total ecoregion that is protected is less than 50% but more than 20%. Ecoregions in this category would require restoration to reach Half Protected because the amount of available habitat outside protected areas plus the existing protected areas is below 50%. (4) Nature Imperiled: The sum of the amount of natural habitat remaining and the amount of the total ecoregion that is protected is less than or equal to 20%. In many Nature Imperiled ecoregions, the remaining habitat exists as a mosaic of isolated fragments insufficient in size and orientation to adequately conserve biodiversity (Wilson 2016). We recognize that in the most heavily altered ecoregions, achieving Half Protected is inconceivable because of extreme rates of conversion. For example, in the tall grass prairie ecoregions of the United States and Canada, 99% of the land area is devoted to agriculture, an active land use that is unlikely to transition back to natural habitat.
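To make the decision rule concrete, here is a minimal Python sketch of the classification (our illustration, not the authors' code; inputs are percentages of total ecoregion area, with thresholds exactly as defined above):

def classify_ecoregion(pct_protected, pct_unprotected_habitat):
    """Return one of the four protection-status categories."""
    available = pct_protected + pct_unprotected_habitat
    if pct_protected > 50:
        return "Half Protected"
    if available > 50:
        return "Nature Could Reach Half"
    if available > 20:
        return "Nature Could Recover"
    return "Nature Imperiled"

# Example: 30% protected, 25% unaltered habitat outside reserves
print(classify_ecoregion(30, 25))  # -> Nature Could Reach Half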
To determine the shortcomings in conservation even where protected areas exist, we conducted a global survey of terrestrial ecoregions for which strategies to achieve long-term conservation goals have been developed. For each strategy, we assessed the extent to which all four goals of biodiversity conservation are addressed (appendix S3).

Evaluating protected area networks using ecoregions

The 2001 map of the terrestrial ecoregions of the world (Olson et al. 2001) facilitated the design of representative networks of protected areas. It has also been used to depict species distributions, to model the ecological impacts of climate change, to develop landscape-scale conservation plans, and to report on progress toward international targets. The revised map, named Ecoregions2017 ©Resolve, that is the basis for this scheme is unchanged for large sections of the seven biogeographical realms but differs from the original map in four regions: the Arabian Peninsula, some of the desert and drier ecoregions of the African continent, Antarctica, and the southeastern United States (figure 1). Further details and justification for the changes are presented in supplemental appendix S1.

Calculating the extent of protection by ecoregion and biome provides a scorecard to measure progress toward Half Protected (table 1, figure 2). Summing across all 14 biomes and their constituent 846 ecoregions, 98 ecoregions (12%) have already achieved Half Protected. The largest category is Nature Could Reach Half, with 313 ecoregions (37%), followed by the 228 ecoregions classified as Nature Could Recover (27%). Half Protected remains a reasonable goal in these regions. Within Nature Could Reach Half, 119 (38%) ecoregions have greater than 20% of their land area protected; the remaining 194 ecoregions (62%) have limited coverage of protected areas but retain considerable intact natural habitat. To achieve Half Protected, these 313 ecoregions require only an expansion of their protected area network. The remaining 207 ecoregions (24%), classified as Nature Imperiled, have little natural habitat and will require intensive efforts to achieve Half Protected or even to conserve the fraction that remains.

Analyses conducted at a global scale inevitably involve error. Here, we were unable to differentiate "paper parks" (designated protected areas that remain unprotected because of a lack of enforcement) from those that are well managed. Protected areas subjected to severe bushmeat-hunting pressures or overgrazing by domestic livestock are also ignored at this scale, although these are major threats. There are also protected areas where activities (e.g., industrial extraction) have been expressly allowed by governments even though these activities are plainly inconsistent with conservation objectives. We elucidate the major sources of error, including the assessments of tree-cover change and land-cover classes, in supplemental appendix S2.

The tropical and subtropical moist broadleaf forests biome has more species and ecoregions than any other on Earth. Covering only 14% of the Earth's land area, this biome supports at least 50% of the world's species (table 1), many of which have likely yet to be discovered (Mora et al. 2011).
Fortunately, over half (61%; 140) of the ecoregions within this species-rich biome (n = 230) fall into the Half Protected or Nature Could Reach Half category: 24 (10%) ecoregions have achieved Half Protected (table 1, supplemental appendix S2), and 116 (50%) have achieved Nature Could Reach Half (many of which already exceed Aichi target 11). Of the best-protected ecoregions, the majority (15) occur in the Neotropics, followed by the Indomalayan realm (11; figure 2). In contrast to the moist forests, the tropical and subtropical dry broadleaf forest is the most endangered biome on Earth; only 2 ecoregions (among 56) are Half Protected, 20 are Nature Could Recover, and 26 are Nature Imperiled. The temperate broadleaf and mixed forests biome has the second largest number of ecoregions (83) but shows a distribution of protection categories skewed toward those needing restoration: Nature Could Recover and Nature Imperiled. The boreal forest ecoregions are among the largest and have the greatest potential to reach Half Protected because of their vast remaining intact forest blocks. The majority of mangrove ecoregions fall into the categories of Half Protected or Nature Could Reach Half. The remaining mangrove ecoregions are degraded but can recover through restoration (table 1, supplemental appendix S2). The Nature Imperiled category includes 108 (23%) forest ecoregions (n = 476; table 1; supplemental appendix S2, supplemental tables S1a, S1b). Assessing recent trends in tree cover, of the 16 forest ecoregions with the greatest extent of tree loss between 2000 and 2014 (ranging from 20% to 86%), 9 are in the Afrotropics, and 4 are in the Indo-Malayan realm of India. Deforestation was greatest in the Nigerian lowland forests and the Cross-Niger transition forests.

Figure 1: The 846 global ecoregions that comprise Ecoregions2017 ©Resolve nested within 14 terrestrial biomes. An interactive map is available at ecoregions2017.appspot.com. (A companion biome map is presented in supplemental appendix S1, supplemental figure S1.)

Nonforested ecoregions and biomes. The protected area network is far less extensive in nonforested biomes. The tundra biome is best protected among the seven nonforested biomes: 26 of the 51 tundra ecoregions (51%) fall under Half Protected, and another 24 ecoregions (47%) are in Nature Could Reach Half. Desert and xeric shrubland ecoregions also have expansive networks of protected areas and large swaths of natural habitat remaining: over half fall into Half Protected or Nature Could Reach Half (figure 2). Ecoregions in the remaining nonforested biomes have been more heavily degraded: 99 (27%) nonforested ecoregions were categorized as Nature Imperiled.

Human impact and revisiting the most endangered biomes on Earth. Land-use change as a result of human activities is a dominant feature in the large majority of ecoregions, as has also been shown by Venter and colleagues (2016). In the 207 Nature Imperiled ecoregions, an average of 96% of natural habitat has been converted to an anthropogenic land use. Many of the fragments in these ecoregions are of disproportionately high biodiversity value. Here, protecting Key Biodiversity Areas (KBAs) will be crucial, and the goal of NNH remains aspirational and of secondary concern to protecting what remains (Eken et al. 2004). Forested and nonforested biomes are evenly represented in the Nature Imperiled category (table 1).
Hoekstra and colleagues (2005) described the temperate grasslands, savannas, and shrublands biome as the most endangered in the world. However, our results show that the most critically endangered biome, as determined by the proportion of Nature Imperiled ecoregions that constitute each, is the tropical dry forests, whereas two nonforested biomes are nearly as endangered: (1) tropical and subtropical grasslands, savannas, and shrublands and (2) Mediterranean forests, woodlands, and scrub. Without considering fine-scale endemism and beta-diversity (turnover of species with distance or along gradients), simple metrics of habitat loss and percent protection may underestimate the conservation crisis among biomes. Biodiversity loss would therefore be much greater and more sensitive to habitat conversion in tropical and subtropical grasslands, savannas, and shrublands; in Mediterranean forests, woodlands, and scrub; and in tropical moist and tropical dry forests. These four biomes support higher endemism and greater beta-diversity levels than those found in other biomes.

(Note to table 1: The ecoregion data can be found in supplemental tables S1 and S2; the four protection categories are as defined in the text.)

Beyond Aichi targets: Toward Half Protected

The need to go beyond Aichi protection targets was approved by delegates at the 2014 IUCN World Parks Congress. They further decided that the total area of protected areas and connectivity lands needs to be far higher than current conceptions and agreed on the importance of setting ambitious targets (IUCN 2014). Results from our global assessment suggest that the ambitious target of protecting half of terrestrial nature is attainable for many of the Earth's more intact ecoregions. Among the 846 ecoregions, 98 (12%) occupy the Half Protected category. Although these ecoregions are largely concentrated in two biomes (tropical and subtropical moist forest and tundra), there is at least one ecoregion achieving this status in 12 of the 14 biomes. Within Nature Could Reach Half (n = 313), 26 ecoregions (8%) are at least 40% protected and therefore require modest additional protection to reach Half Protected. These and the other 287 ecoregions constituting the Nature Could Reach Half category provide the greatest conservation opportunity, because adequate habitat remains to reach Half Protected.

Figure 2: The protection statuses of ecoregions of the world. This map shows the high levels of habitat remaining in some of the most species-rich areas on Earth, including the Brazilian Amazon, the Congo basin, and the islands of Indonesia. Although enough habitat remains for nearly half of the ecoregions to exceed 50% protected in the coming decades, much of this forest is still unprotected, and just under 50% of ecoregions have adequate conservation plans in place to keep remaining forests intact (supplemental appendix S3). The numbers in parentheses for each category represent the entire number of ecoregions found in each category. The protection categories are defined as follows: Half Protected, more than 50% of the total ecoregion area is protected; Nature Could Reach Half, less than 50% of the total ecoregion area is protected, but the sum of the total ecoregion protected and unprotected natural habitat remaining is more than 50%; Nature Could Recover, the sum of the amount of natural habitat remaining and the amount of the total ecoregion that is protected is less than 50% but more than 20%; Nature Imperiled, the sum of the amount of natural habitat remaining and the amount of the total ecoregion that is protected is less than or equal to 20%.
These ecoregions are found within every biome and should rank high in the formulation of the next Aichi target 11 post-2020. Because Aichi target 11 requires protected area networks to be ecologically representative, an ecoregion assessment provides an indispensable tool for meeting the new targets to be set in 2020. Greater effort is needed to complete these ecoregion strategies. For example, only 94 of the 846 terrestrial ecoregions (11%) have published plans that address all four biodiversity conservation goals (figure 3; see supplemental appendix S3 for methods). Formal conservation strategies that address three-fourths of the biodiversity conservation goals were published for 22% of ecoregions globally. Most of these strategies focus on identifying priority areas for protection and on conserving species of conservation concern (figure 3). Notably, a high percentage of ecoregions in the Nature Imperiled category have plans that address all four conservation goals. This is because biodiversity hotspots (biologically rich areas containing less than 30% of the original habitat) are explicitly targeted by Critical Ecosystem Partnership Fund (CEPF) profiles (Myers et al. 2000, Olson 2010). Of great concern are the 337 ecoregions that lack biodiversity plans (supplemental appendix S3).

Robust ecoregion strategies must be followed by effective implementation to realize biodiversity conservation goals formulated at a national scale. Three countries advancing to or already surpassing Half Protected (Namibia, Nepal, and Bhutan) are worth singling out as compelling examples of where effective implementation embodies key principles of biodiversity conservation. They also refute some of the criticisms raised over the NNH approach: that (a) it could displace rather than empower indigenous communities, (b) it is a paradigm only suitable for wealthy countries, and (c) it can only succeed in sparsely populated, remote ecoregions. Namibia's conservation strategy includes conservation areas managed by local communities alongside government-run strict nature reserves across all its ecoregions. These communities are awarded autonomy to manage vast tracts of land for wildlife conservation and income generation, in large part by allowing communities to own the wildlife. Now widely touted as a success story in global conservation, these lands were largely defaunated through poaching only 25 years ago. Community-managed lands, called communal conservancies, now contribute to Namibia's national protected area network, which covers 47% of the country. Communal conservancies range in size from 43 square kilometers (km²) to 9120 km² (the mean being 1953 km²).
In fact, many conservancies function as vital corridors connecting other protected areas and allowing dispersal, movement, and range recovery of large mammals, including elephants, lions, and others that are in steep decline elsewhere in sub-Saharan Africa (figure 4a; Naidoo et al. 2016). In Nepal, ecoregion conservation strategies that involve local communities are the rule and complement the country's strictly protected areas. In the lowlands and midlands, community forestry and agroforestry in designated landscapes yield economic returns while strategically extending habitat and connectivity among reserves (figure 4b, table 2; Wikramanayake et al. 2010). Community-managed forest parcels are small (some are as little as 20 hectares in size) but abundant and interspersed among larger protected areas, often facilitating population recovery of endangered large mammals (Wikramanayake et al. 2010). Community forests, linked together to form corridors, play a pivotal role in landscape conservation. Handing over forest management to communities, which then receive 50% of the revenue generated by wildlife parks in designated buffer zones, led to a 61% increase in tigers and a 31% increase in rhinos over a 5-year period (2008-2013). No rhinos, tigers, or elephants have been poached in Nepal in several years (Dhakal et al. 2014). In the Himalayan and trans-Himalayan ecoregions overlapping Nepal, conservation areas managed by local communities exceed in area the land under national-park status, and some, such as the Annapurna Conservation Area, return large sums of tourism-generated revenues annually to local funds. These are sparsely populated ecoregions. In contrast, the protected areas and community forests of the Terai-Duar savannas ecoregion in Nepal are intermingled with some of the highest rural population densities on Earth. In this densely settled, productive ecoregion situated on alluvial soils, there is room for intensive rice production and park protection (Dinerstein et al. 1999), the latter of which returns more than $1 million annually to local development funds in demarcated buffer zones. Bhutan protects 51% of its land through national parks and corridors connecting reserves (figure 4c, table 2). In a novel policy framework, Bhutan's constitution requires that at least 60% of the country remains forested (currently, forest cover is estimated at 72%). Mid-elevation temperate broadleaf forests, which are so heavily converted elsewhere, are particularly well protected. Bhutan, as with Nepal, ranks among the nations with the lowest per capita GDP but protects enough habitat to conserve biodiversity (Dinerstein 2013). All three examples stress core protected areas, buffer zones, and connectivity, all key components of ecoregion conservation strategies and of securing biodiversity. The first two examples illustrate how extensive areas can be put under conservation management by engaging local communities.

Figure 3: The proportion of biodiversity goals addressed within available conservation plans for all 846 ecoregions, distributed across the four protection-status categories. The colors represent the percentage of conservation strategies addressed within each protection-status category: 0 goals addressed, red; 1 goal addressed, yellow; 2 goals addressed, orange; 3 goals addressed, light green; 4 goals addressed, dark green. For a detailed list of conservation strategies and sources, see supplemental appendix S3.
The example of Bhutan offers a different mechanism, through constitutional decree. Both approaches work.

Note (table 2): The protected status of many of these ecoregions is ahead of the global average because of ecoregional planning and the use of communal reserves and corridors in addition to strict protected areas. A map of these three countries and their protected areas can be found in figure 4. "Global ecoregion protection status" refers to 1 = Half Protected, 2 = Nature Could Reach Half, 3 = Nature Could Recover, 4 = Nature Imperiled.

Figure 4a-c: Ecoregion conservation planning in three developing countries: (a) Namibia uses communal conservation areas to extend protection beyond protected areas and cover a diverse set of ecoregions, (b) Nepal uses a mixture of protected areas and conservation landscapes to protect along north-south and east-west gradients, and (c) Bhutan uses protected areas combined with biological corridors to provide connectivity between protected areas and across ecoregions.

Strengths and weaknesses of the Nature Needs Half approach to conserving half the terrestrial realm

NNH, like any paradigm, has strengths and weaknesses. NNH offers a simple, inspirational, and science-based message that can be easily understood by the general public. It also provides the conservation movement with a unifying goal. Incremental gains in global protection targets have proved insufficient in response to the magnitude of the biodiversity crisis. Conservation efforts have often been mired in process or in targets that do not track onto an ultimate conservation goal or vision statement (Wilson 2016). NNH provides an endgame: Achieving Half Protected will help realize the outcomes and objectives of maintaining a living biosphere, avoiding mass extinction, and preserving ecological processes that benefit all human societies. NNH also provides a goal and a planning framework under which all conservation efforts can fit. Importantly, 50% avoids setting targets too low and being surpassed by the synergistic effect of threats to nature from climate change and mass extinction. The recent Paris Agreement under the United Nations Framework Convention on Climate Change provides targets for stabilizing atmospheric greenhouse gas concentrations at a level that prevents "dangerous anthropogenic interference with the climate system." We contend that for the climate deal to succeed, we need a Global Deal for Nature (box 1). NNH provides a baseline from which we can monitor progress, as the environmental data sets are increasingly dynamic, annually updated, and freely available, and it serves as a scorecard to underpin a Global Deal for Nature and assist the CBD in measuring progress. Finally, NNH could help provide governments, lenders, citizens, and industry guidance about where to site extractive industries and develop large infrastructure projects. Providing clear implementation guidelines can help address weaknesses associated with NNH. For example, insisting that NNH be empirically derived for each of the world's ecoregions is important. However, in trying to erect a simple, science-based target that nonscientists can understand (50% protected by 2050), the approach runs the risk of giving the misimpression that 50% is the "right" target for each ecoregion. In fact, the amount of habitat that needs to be conserved in each region will vary.
This guideline will help avoid pitfalls, such as a case in which governments could assign large areas to be protected just to reach the 50% target (e.g., high-elevation rock and ice, barren desert, contaminated areas, unproductive soils, or lands of low economic value) without consideration of the design, through ecoregion strategies, of representative networks to capture unique patterns of biodiversity. One clear guideline is that site selection is as important as total area protected in achieving conservation objectives (Margules and Pressey 2000). Tools such as ecoregion conservation planning, CEPF hotspot profiles, Key Biodiversity Areas, and systematic conservation planning that focus on the quality or irreplaceability of areas considered for protection will be most useful to avoid this danger (Margules and Pressey 2000, Myers et al. 2000, Eken et al. 2004, IUCN 2016). A potential pitfall is that policymakers not well versed in ecosystem function might view NNH as license to clear the other 50%. This would be a disaster in some ecoregions, such as those in the Amazon and Congo Basins, that perform vital ecological roles only if contiguous forest cover is maintained.

Box 1. Protecting half in a policy context.

Nature Needs Half finds support in the United Nations' Sustainable Development Goals (SDGs). Among other items, the SDGs call on humanity to "take urgent and significant action to reduce degradation of natural habitats [and] ... protect and prevent the extinction of threatened species" and to "halt deforestation" and "halt loss of biodiversity" by 2020. These internationally agreed-on conservation goals will be challenging to achieve without protecting in the realm of half. As such, we call on advocates and leaders around the world to set new global protected area targets accordingly: 50% of the terrestrial realm by 2050. Calls to increase the global area under protection should be considered in the context of other political mechanisms, such as international development funding (e.g., G20) and the Bonn Challenge. The Bonn Challenge, a global effort to restore millions of hectares of deforested and degraded land by 2020 or 2030, can be a critical mechanism in ecoregions falling under Nature Could Recover and Nature Imperiled. There are other opportunities to weave the 50% goal into the global economic and development fabric. For example, the "G20," the world's 20 largest economies, have called for as much as $60 trillion-$70 trillion in investment for large infrastructure projects (Foundation Earth 2015). Holistic ecoregional planning must be included to ensure that future infrastructure and cities are built in harmony with a world where nature receives half. A Paris-like deal that addresses biodiversity conservation at the highest political level (a Global Deal for Nature under the auspices of the CBD) is needed for nature conservation (for further details, see www.resolv.org/blog/2017/global-deal-for-nature). An initiative of this scale would mobilize unprecedented financial resources to support countries in implementing the goal of Half Protected. The estimated cost to add terrestrial protected areas, better protect existing reserves, and restore habitat varies by country, region, and ecoregion, ranging between $8 billion and $80 billion per year for the terrestrial realm (Balmford et al. 2003, McCarthy et al. 2012) and between $5 billion and $19 billion per year for the marine realm (Balmford et al. 2004). Implementing a Global Deal for Nature would employ a large number of currently unemployed or underemployed workers in rural communities.
At the current rate, the amount of land under formal protection increases by about 4% per decade. If the rate of increase doubled to 8% or achieved 10% per decade, the global goal, supported by a Global Deal for Nature, could be within reach. such as those in the Amazon and Congo Basins, that perform vital ecological roles only if contiguous forest cover is maintained. Conservation planning will need to underpin the implementation of NNH to avoid these abuses. Another concern is that the NNH approach risks overlooking, however unintentionally, those 207 ecoregions determined by our analysis to average only 4% of remaining natural habitat outside protected areas that fall into the Nature Imperiled category. Where ecoregions contain global centers of endemism but with only fragments of natural habitat remaining replete with irreplaceable sites, a concern is that the global importance of these sites of rarity could be downplayed. Donors and agencies might concentrate on those less biodiverse ecoregions but those likely to come closer to achieving the 50% target. In most of these ecoregions, Key Biodiversity Areas, if properly conserved will protect the biodiversity that remains (Eken et al. 2004). CEPF profiles should include all possible options for restoration . A possible concern expressed by critics of Wilson (2016) and of the NNH approach is that protecting half the terrestrial realm adversely affects humans in remote regions (e.g., Büscher et al. 2016). In contrast, implementing NNH is an opportunity to empower indigenous peoples and local communities. Many indigenous reserves in Latin America, Asia, Africa, and Australasia are an essential part of the formal protection network, but the decisionmaking is in the hands of those within the reserves. Several indigenous communities are also advocating for half their lands to be protected. The Dehcho Dene in northern Canada, for example, has articulated an explicit 50% protected goal for their own territory (Norwegian 2005). For many groups, such as the Dehcho Dene, protecting half is an approach derived from their traditional ecological knowledge. Conservation should be achieved through careful planning while respecting rights, improving livelihoods, and sharing decisionmaking. Achieving Half Protected hinges on a reduction of human disturbance, sparing nature Fortunately, two schools of thinking-how to save half for nature and how to feed and fuel advancing societies-are in growing concordance. As societies urbanize and develop, there is a well-documented trend toward "decoupling": an increasingly efficient use of land and resources that reduces environmental degradation (Ausubel 2000, Fischer-Kowalski and Swilling 2011, Tilman et al. 2011, Ausubel et al. 2012). These trends have already produced major recoveries of woodland and other vegetation in many regions (Ellis et al. 2013, Blomqvist et al. 2015. The prospects for feeding growing human populations while recovering natural habitat are not only aspirational but also achievable as long as these aspirations are put to work guiding land-use policy and commodity-chain interventions (box 1; Lambin et al. 2014). The global phenomenon of growing urbanization, accentuated in some ecoregions, sets the stage for reaching Half Protected. In remote areas in many parts of the world, depopulation due to socioeconomic changes such as increasing wages and career opportunities have resulted in rural populations moving to population centers; by 2050, 70% of people will live in cities. 
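As a rough check of this arithmetic (our sketch, assuming the quoted rates mean percentage points of global land area added per decade, starting from the roughly 15% protected today):

current, target = 15.0, 50.0            # percent of terrestrial realm
decades = (2050 - 2017) / 10.0          # roughly 3.3 decades remain
for rate in (4.0, 8.0, 10.0):           # points added per decade
    reached = current + rate * decades
    print(f"{rate:4.1f} pts/decade -> about {reached:.0f}% by 2050")
# 4 pts/decade gives ~28%; 8 gives ~41%; 10 gives ~48%, near the goal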
This phenomenon, driven by economics, could lead to expansion of the protected area network and restoration of disturbed or abandoned lands (Ellis et al. 2013). Nature Needs Half is an ambitious goal that will allow humanity to maintain a world with space for all life and the continuance of critical ecosystem services. Our findings show that a large number of ecoregions are Half Protected and that NNH is achievable in the vast majority of remaining ecoregions. However, achieving NNH requires further research into the desirability, feasibility, and progress toward the goal at ecoregional and national scales. Here, we provide tools and information to chart progress toward NNH and call on advocates and leaders around the world to set new global protected area targets: 50% of the terrestrial realm by 2050. Doing so through carefully balanced ecoregion plans that promote economic development while sustaining nature will also make the planet more livable for humanity (Mulligan 2014(Mulligan , 2015.
A Review on Adhesively Bonded Aluminium Joints in the Automotive Industry

The introduction of adhesive bonding in the automotive industry is one of the key enabling technologies for the production of aluminium closures and all-aluminium car body structures. One of the main concerns limiting the use of adhesive joints is the durability of these systems when exposed to service conditions. The present article primarily focuses on the different research works carried out to study the effect of water, corrosive ions, and external stresses on the performance of adhesively bonded joint structures. Water or moisture can affect the system both by modifying the adhesive properties and, more importantly, by causing failure at the substrate/adhesive interface. Ionic species can lead to the initiation and propagation of filiform corrosion, and applied stresses can accelerate the detrimental effect of water or corrosion. Moreover, this review describes the steps which the metal undergoes before being joined. It is shown how the metal preparation plays an important role in the durability of the system, as it modifies the chemistry of the substrate's top layer. In fact, from the adhesion theories discussed, it is seen how physical and chemical bonding, and in particular acid-base interactions, are fundamental in assuring good substrate/adhesive adhesion.

Introduction

In the automotive industry, one of the key elements to reduce fuel consumption, and therefore CO2 emission, is the switch to materials lighter than plain carbon steel. For this reason, great interest has been placed on advanced high strength steels, light non-ferrous alloys (such as aluminium, magnesium and titanium alloys) and a variety of composites, including carbon fiber composites, metal matrix composites and nano-composites. Thus, during the last decade, the average amount of aluminium used in passenger cars has doubled, and based on the new design concepts, this trend will be confirmed in the coming years [1][2][3]. One of the main advantages of aluminium over steel is its density, which is approximately 65% lower. Both cast and wrought aluminium alloys are used in numerous applications in automobiles. The change in the automotive industry from steel to aluminium is not straightforward and needs design and process adaptations. For example, in order to have the same stiffness, the aluminium parts need to be thicker compared to the steel ones. This is due to the fact that aluminium has an average elastic modulus of 70 GPa, while for steel this is 207 GPa. As such, aluminium components need to be around 40% thicker than steel ones, but remain about 50% lighter.
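This thickness/weight trade-off follows from simple panel mechanics. The back-of-envelope Python check below assumes equal bending stiffness of a flat panel (proportional to E·t³) and typical handbook densities; it is an illustration of the quoted figures, not a design calculation.

# Equal panel bending stiffness: E * t^3 = const, so
# t_al / t_steel = (E_steel / E_al)^(1/3)
E_steel, E_al = 207.0, 70.0        # GPa (values quoted in the text)
rho_steel, rho_al = 7.85, 2.70     # g/cm^3 (typical handbook densities)

t_ratio = (E_steel / E_al) ** (1.0 / 3.0)
mass_ratio = (rho_al * t_ratio) / rho_steel

print(f"thickness ratio ~ {t_ratio:.2f} (about {(t_ratio - 1) * 100:.0f}% thicker)")
print(f"mass ratio      ~ {mass_ratio:.2f} (about {(1 - mass_ratio) * 100:.0f}% lighter)")
# -> about 44% thicker and about 51% lighter, matching the ~40%/50% quoted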
The aluminium alloys used cannot be easily spot-welded because of the low electrical resistance of aluminium, the very stable and non-conductive oxide layer, and the tendency of aluminium to interact with the spot-weld electrodes. In addition, the strength of the welded parts is reduced by the effect of heat during the welding process. Especially with regard to fatigue, the negative effect of the welding heat is very relevant. To avoid the drawbacks of heat input, cold processes such as mechanical fastening and adhesive bonding became established [4]. In recent years, adhesive bonding has been used more and more in automotive joining for a variety of components, including closures and structural modules. Adhesive bonding, alone or in combination with other joining techniques, shows significant advantages over more traditional joining techniques, such as lower weight, lower cost and improved crash performance/safety [5]. Adhesives (and sealants) can be characterized according to the way in which they cure. This can be by loss of solvent, loss of water, cooling or chemical reaction. Once hardened, the polymer in an adhesive can be linear or crosslinked. Crosslinking makes polymers insoluble and poorly fusible, greatly reducing creep [6]. A structural adhesive is one used when the load required to cause separation is substantial, such that the adhesive provides the major strength and stiffness of the structure. The structural members of the joint, which are joined together by the adhesive, are the adherends [7].

All structural adhesives are cross-linked. The best known and most widely used structural adhesives are epoxies. A schematic representation of this review's outline is shown in Figure 1. In the first part, the use of adhesive joints in automotive applications is introduced. Then, the chemical characteristics of epoxy adhesives are described. However, it must be noted that epoxies are only one, even if widely applied, of the broad classes of adhesives that can be used in a vehicle. A factor of pivotal importance for the long-term durability of the joint is the chemistry of the substrate's surface prior to bonding. In order to be used for bonding, in most cases the aluminium substrate is pre-treated. Different pre-treatments can be applied to the surface, and the different methods will be mentioned in the subsequent sections. Section 5 will then briefly discuss the classical theories of adhesion. In fact, there have been different attempts in the past to develop a single theory which could explain how adhesion occurs. Finally, the focus will be on the durability of adhesive joints, and some of the extensive literature on the effect of moisture, corrosion and stress on adhesive joints will be analyzed.
Adhesive Joints in Automotive

Structural adhesive bonding was initially used in the aerospace industry, starting from the 1950s, and only from the late 1960s did the application spread to automotive companies as well. Adhesive bonding is practiced more commonly with aluminium than with steel, since aluminium is more difficult to weld. Structural bonding makes it possible to increase the rigidity of the components and the crash resistance [8]. Several aluminium-intensive concept cars, as well as low-volume niche cars, have been produced using adhesive bonding as the primary joining method. Also, the S-Class Coupé has more than 100 m of structural bonds in body-in-white (BiW) applications, and the BMW 7 series has more than 10 kg of structural adhesives applied [9]. Besides the use of structural adhesives, within the automobile body-in-white adhesive bonding is used as well for:

• anti-flutter bonding to reduce or eliminate any fluttering or vibration of the outer and inner panels relative to each other. Anti-fluttering adhesives are commonly used on horizontal closure panels (bonnets, trunk lids or roofs) and less on vertical panels.
• hem flange bonding for joining the inside sheets in the hem flange areas of doors, hoods and tailgates.
• sealants to seal joints and crevices before painting in order to prevent damage of the joined substrates by protecting against possibly aggressive environments.

Advantages

Adhesive bonding, compared to other joining techniques, does not affect the bulk of the adherends and, therefore, does not interfere with the aluminium metallurgy or result in thermally or mechanically weakened zones. Thus, there is a uniform stress distribution over the whole bonded area, which results in an increased static and dynamic stiffness of the vehicle structure. As the body structure is more rigid, the resonance frequency modes will be higher and the structural damping faster, and, therefore, the vehicle will have better noise, vibration, and harshness (NVH) characteristics. In combination with other joining techniques, crash performance and fatigue strength are also improved by adhesive bonding, allowing an additional weight reduction of the body structure (as lower material gauges can be applied) [10]. Furthermore, the use of adhesive bonding can enable the joining of dissimilar metals. Due to the presence of the adhesive, the metals are also isolated against potential galvanic corrosion [11].

The aesthetics of the final assembly represent a further advantage of the adhesive bonding technology compared to other joining techniques. In fact, there are no visible weld seams or rivet heads. Thus, adhesive bonding may minimize or eliminate secondary operations like grinding and polishing [12]. Very beneficial is also its gap-filling potential. Adhesives can bridge large gaps between panels and improve the overall appearance compared to other joining methods. Therefore, in many cases, joining and sealing operations can be combined [5].

Disadvantages

Adhesive bonding entails some disadvantages as well. Among the most significant is the long-term durability of adhesive joints when exposed to harsh conditions, such as the presence of water or corrosive environments. Moreover, especially for aluminium alloys, in order to achieve good durability, in some cases the substrate needs to be pre-treated. In the next sections, the focus will be on some of the pre-treatments used in the automotive industry and on the main known failure mechanisms due to the presence of water and/or corrosive environments.
Besides the above-mentioned disadvantages, other limitations are present too. In fact, adhesively bonded structures (similar to welded structures) cannot be easily dismantled for in-service repairs. Moreover, during assembly, the joint needs to be supported until the adhesive is cured, which slows down the whole production process. This is one of the reasons why most structural adhesives are not used alone but in combination with other joining techniques (such as self-piercing riveting) [5].

Chemical Properties

Epoxies are probably the most versatile family of adhesives because they bond well to many substrates and can be easily modified to achieve widely varying properties [13]. The term epoxy, epoxy resin, or epoxide refers to a broad group of reactive compounds that are characterized by the presence of an oxirane or epoxy ring (Figure 2). An epoxy resin can be any molecule containing more than one of these epoxy groups. The number of epoxy groups per molecule is the functionality of the resin. The group can be situated internally, terminally or on cyclic structures. Epoxy groups are capable of reacting with curing agents or catalytically (homopolymerization) to form higher-molecular-weight polymers. Once cured, the epoxy polymers have a densely crosslinked, thermosetting structure with high cohesive strength and adhesion properties. However, the term epoxy can also be used to indicate an epoxy resin in the thermoplastic or uncured state [14]. A general formula for an epoxy resin can be represented by a linear polyether with terminal epoxy groups and secondary hydroxyl groups occurring at regular intervals along the length of the chain. The epoxy structure and properties are influenced by the various chemical groups. An example of an epoxy resin is illustrated in Figure 3, where the role of each chemical group is also illustrated [15]:

• the epoxy groups at both terminals of the molecule and the hydroxyl groups at the midpoint of the molecule are highly reactive
• the outstanding adhesion of epoxy resin is largely due to the secondary hydroxyl groups located along the molecular chain; the epoxy groups are generally consumed during cure
• the large part of the epoxy resin backbone contains aromatic rings, which provide high heat and chemical resistance
• the aliphatic sequence between epoxy linkages confers chemical resistance and flexibility
• the epoxy molecule can be of different molecular weight and chemistry; resins can be low-viscosity liquids or hard solids
• a large variety of polymeric structures can be obtained depending on the polymerization reaction and the curing agents involved; this can lead to versatile resins that can cure slowly or very quickly at room or at elevated temperatures.

Commercially produced epoxy resins are not necessarily completely linear or terminated with epoxy groups. Some degree of branching occurs, with the end groups being either epoxy or hydroxyl [15].
Curing Mechanisms

Epoxy resins can react with different curing agents or with themselves (via a catalyst) to form solid, crosslinked materials with high strength and adhesion in a step usually called curing or hardening. This capability to be moulded from a viscous liquid into a tough, hard thermoset is one of the most important characteristics of epoxy resins. In order for curing to occur, a chemically active compound (such as a catalyst or a curing agent) is usually added to the epoxy resin. Depending on the particular details of the epoxy formulation, curing may be performed at room temperature, by applying external heat, or with the application of an external source of energy other than heat, such as ultraviolet (UV) or electron beam (EB) energy [16].

The main types of epoxy curing reactions are polyaddition reactions and homopolymerization. Both reactions can result in increased molecular weight and crosslinking. Both types of reaction occur without the formation of by-products. The curing reactions are exothermic, and the rates of reaction increase with temperature. Compared to other thermosetting resins, epoxy resins present a lower degree of cure shrinkage. This is due to the fact that they cure primarily by a ring-opening mechanism. In these reaction processes, the epoxy group may react in one of two different ways: anionically or cationically. In the anionic mechanism, the epoxy group may open in various ways to produce an anion. The anion is an activated species capable of further reaction. In the cationic mechanism, the epoxy group may be opened by active hydrogen to produce a new chemical bond and a hydroxyl group. Depending on the curing agent and the epoxy resin, curing can take place at ambient or elevated temperatures. Room temperature curing generally cannot achieve the same performance as is obtained by curing the epoxy adhesive at elevated temperatures [13,16].
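The strong temperature dependence of the cure rate mentioned above is often described with simple phenomenological kinetics. The sketch below is a minimal illustration of a standard nth-order Arrhenius cure model; it is not taken from any of the cited references, and all parameter values are purely illustrative.

```python
import numpy as np

# Minimal nth-order cure kinetics: dalpha/dt = A * exp(-Ea/(R*T)) * (1 - alpha)**n
# alpha = degree of cure (0 = uncured, 1 = fully cured).
# All parameter values below are illustrative, not from a real adhesive datasheet.
A = 1.0e7      # pre-exponential factor, 1/s (hypothetical)
Ea = 60.0e3    # activation energy, J/mol (hypothetical)
R = 8.314      # gas constant, J/(mol K)
n = 1.5        # reaction order (hypothetical)

def degree_of_cure(T_celsius, t_end=3600.0, dt=1.0):
    """Integrate alpha(t) at a constant cure temperature (explicit Euler)."""
    T = T_celsius + 273.15
    k = A * np.exp(-Ea / (R * T))
    alpha = 0.0
    for _ in np.arange(0.0, t_end, dt):
        alpha += k * (1.0 - alpha) ** n * dt
    return alpha

for T in (25, 80, 150):
    print(f"{T:>3d} C: degree of cure after 1 h = {degree_of_cure(T):.2f}")
```

In such a model, raising the cure temperature by a few tens of degrees increases the rate constant by orders of magnitude, which is consistent with the qualitative statements above about room-temperature versus elevated-temperature curing.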
Amine Curing Agents

Amines are important curing agents for epoxy resins. They provide fast cures with a relatively high crosslink density. Usually primary amines are used for curing, and these can be aromatic, cycloaliphatic or aliphatic. A primary amine has two active hydrogens which are capable of reacting with an epoxy group. Most of the primary amine curing agents that are used have more than one primary amine per molecule, so that crosslinking and, therefore, network development can occur. A secondary amine will react with only one epoxy group. Secondary amines are derived from the reaction product of primary amines and epoxies. They have rates of reactivity and crosslinking characteristics that are different from those of primary amines. Secondary amines are generally more reactive towards the epoxy group than primary amines, because they are stronger bases. They do not always react first, however, due to steric hindrance. If they do react, they form tertiary amines. In contrast, tertiary amines, which have no active hydrogens, will not react with epoxy resins but can cure epoxy resins catalytically, resulting in homopolymerization. Figure 4 shows the reaction between an epoxy and a primary or secondary aliphatic amine [14,17]. The amine curing agent selection depends on the desired mechanical and physical properties, environmental resistance, pot-life, viscosity, processing, etc. In many commercial epoxies more than one amine type will be used to balance processing and properties. Table 1 shows advantages and disadvantages of different amine-based curing agents [16].

Polyamide Curing Agents

This class of curing agents (Figure 5) is made from dimerized unsaturated fatty acids, being derived from the reaction of the dimer acid with a polyamine. Their reaction with the epoxy resins is through the amine groups and not the amide hydrogens. The presence of a polyamide backbone gives overall good mechanical properties to polyamide-amine cured adhesives. Cure times range from several hours at room temperature to a few minutes at 150 °C [18].

Imidazole Curing Agents

Imidazoles (Figure 6) are a type of anionic polymerizing curing agent for epoxy resins. They are characterized by a relatively long pot life and the ability to form resins with a high heat deformation temperature when treated thermally at medium temperature (between 80 and 120 °C) for a short time. The curing temperature is between 100 and 180 °C. Imidazoles can be used as curing accelerators or co-curing agents for organic-acid anhydrides, dicyandiamide, polyhydric phenols, and aromatic amines [14].

Anhydride Curing Agents

Anhydride curing agents are derived from the elimination of water from diacids. As the reaction is reversible, these curing agents need to be protected from moisture. Usually the anhydrides used for curing epoxy are liquid, while solid dianhydrides have limited use as curing agents in fiber-reinforced composite applications. The reactivity of most liquid anhydride curing agents with epoxy resins is very slow without the addition of a catalyst, even at high temperature. The reaction mechanism of anhydrides with epoxies is complex and includes different competing reactions. The time, temperature, type of accelerator and concentration may have a major effect on the reaction mechanism.
Among the advantages of anhydride curing agents are the long pot-life, the low cost, the low viscosity (which is beneficial in composite processes), the excellent thermal stability and the low cure shrinkage. The main disadvantages include their reaction with water when uncured and their susceptibility to hydrolysis when exposed to specific temperatures and humidities, which can dissolve the matrix and reconvert it back to liquid in a few days [19].

Latent Curing Agents

Latent curing agents are mixtures of curing agents with epoxy resins which can be stored at room temperature and which can cure rapidly. Lewis acids (for example BF3, AlCl3, ZnCl2) react with resins at room temperature. Their pot-life is 30 s or less. Therefore, they are usually used in the form of complexes with amines. Lewis acids give an exceptional heat deformation temperature and excellent electrical properties, and therefore they have been used in electrical insulating laminates and carbon fiber reinforced plastics (CFRP). Together with Lewis acids, dicyandiamide (DICY) is used as a latent curing agent too. DICY, when dispersed in the resin as a powder, can have a pot life of up to 12 months. DICY cures at high temperature (160 to 180 °C) in 20 to 60 min. As it generates a large quantity of heat during curing, it is mostly used in thin films such as paints. Finally, organic-acid hydrazides belong to the category of latent curing agents. They are powders with a high melting point and are used mostly in powder paints and one-part adhesives. They cure at around 150 °C in 1 to 6 h [20].

Epoxy Additives

In addition to the two main ingredients of an epoxy formulation, which are resin and curing agent, numerous other formulatory materials are available and have been regularly employed to modify the properties and characteristics of epoxies, both in their uncured and cured forms. Among them there are:

• Diluents. They are used to reduce viscosity, both for ease of processability and to allow a greater incorporation of formulatory ingredients. Diluents are used as well to improve wettability [14]. Examples of diluents for epoxy resins include: phenylglycidyl ether, butylglycidyl ether, allylglycidyl ether, butanediol diglycidyl ether and glycerol-based epoxy resins [21].

• Fillers. They are the most common ingredient used in the majority of epoxy formulations. Hundreds of different fillers can be used to modify specific properties of the epoxy. Even if fillers are considered beneficial for most applications, their disadvantage is the increase of density (and therefore weight) and viscosity, which can influence the way in which the formulation behaves. Table 2 shows a non-exhaustive list of fillers which have been used in epoxy formulations [14].

• Resinous modifiers. Resinous materials are sometimes used together with epoxy to reduce the cost or to impart property modifications. Adding resinous materials such as nylon to epoxy has been shown to increase the toughness enough for use in structural adhesives. However, due to the presence of hygroscopic constituents, the use of these systems is limited, as they lead to durability problems in the presence of moisture [14].

• Flexibilising/plasticising additives. Another way to overcome brittle behaviour of adhesives, besides using elastomers or fillers, is the incorporation of plasticising or flexibilising additives.
The difference between them is that while plasticisers are long-chain non-reactive molecules, which are not incorporated into the epoxy network, flexibilisers react with the epoxy system during cure [22]. Examples of plasticizers are phthalates or bisphenol A diglycidyl ether, while among flexibilizers there are thermoplastic polymers such as polyvinyl ethers or polyurethanes [23].

• Miscellaneous additives. In addition to the additives described above, there are other additives which can be added to epoxy systems; two examples are given here. It is a common practice for some epoxy manufacturers to add coupling agents (such as organosilanes) to the epoxy. By adding the coupling agent to the epoxy and not on the substrate, a step in the preparation of the substrate can be skipped. Another example is the use of "expanding monomers", which help reduce the shrinkage occurring during cure [24].

Aluminium Surface Preparation Prior to Bonding

As the adhesive bonds on the surface, the chemistry of the surface prior to bonding is highly important. Therefore, before joining, aluminium substrates undergo a surface preparation which has the aims of [25]:

• removing the weak boundary layers, including the weak oxide layers formed by heat treatment or exposure to a humid atmosphere, air-borne contamination and protective oils and greases
• enhancing the molecular contact between the adhesive and the substrate to promote the formation of intrinsic adhesion
• creating a continuous film on the oxide layer which has a high stability over a wide pH range, protects against hydration, and creates a barrier against corrosion.

Adhesive bonding is a technology applicable to various product forms such as sheets, extrusions and castings. For different aluminium products the preparation procedure or application products may differ somewhat, but the essential steps are still the same [5]. In the following sections the case of aluminium sheets is considered.

The Need for Surface Preparation and the Weak Boundary Layer Theory

Bikerman and his colleagues developed the early theory of the weak boundary layer in the 1950s-1960s. This theory is often discussed alongside the classical adhesion theories (Section 5) because, when it was formulated, 60 years ago, there was not yet a basic understanding of the adhesion mechanisms. According to this theory, a weak layer (sometimes referred to as an interphase) is formed at the interfacial region of an adhesive joint, causing the joint itself to fail at lower stresses than expected (Figure 7) [12]. In fact, the surface of the metal is covered by a complex layer which includes contaminants such as residual lubricant and residues coming from manufacturing processes. This layer will most probably form a weak boundary layer which lowers the cohesive strength of the joint. A successful surface preparation will most likely not remove the contamination completely, but it will produce a surface which is less affected by cohesive weakness. However, not all contaminants will form a weak boundary layer as, in some circumstances, they will be dissolved by the adhesive [26]. Even if it is not fully accepted today as a theory of adhesion, Bikerman's theory did stimulate different developments in understanding adhesion. Moreover, thanks to this theory, careful attention was paid to the preparation of the adhesive bond to avoid the presence of contaminants [27] such as the residual lubricants coming from the rolling processes (described in Section 4.5).
The Aluminium Substrate after the Rolling Process

In order to produce aluminium sheet plates, Al needs to undergo a rolling process, either hot or cold. The main characteristics of the top surface of the aluminium after rolling are shown in Figure 8. In metal rolling processes, lubricants are applied to keep the surfaces of the roll and work-piece separated by a film of solid or fluid. Lubricants are used both to reduce friction forces between the work-piece and the roll and to reduce the possible damage inflicted on the work-piece surface. The lubricants used during the rolling stages are generally based on paraffin and are volatilised either during annealing or by natural evaporation. However, the rolled surface can still carry a certain degree of contamination, which is then removed in a subsequent degreasing step [28].

Aluminium is a very reactive metal with a high affinity for oxygen. Therefore, when exposed to air it will instantaneously form a thin oxide. If the oxide is formed at low temperature (below about 375 °C), it is composed of a thin amorphous Al2O3 layer, about 1-2 nm thick, adjacent to the metal, with hydrated surface oxides and hydroxides on top. The hydrated overlayer is usually composed of hydrated gel-like pseudoboehmite (AlOOH), crystalline boehmite and/or the trihydroxides Al(OH)3, bayerite, or gibbsite, depending on how much and for how long the surface was exposed to humidity. The total thickness can be between 2 and 60 nm [29]. Aluminium hydroxides are considered basic and therefore good for interaction with the acid polar sites of polymers (as explained later in the acid-base theory). However, hydration (as opposed to hydroxylation) may reduce the overall adhesion performance, as it creates weaker basic sites on the top surface. Oxidation and hydration can also be accelerated by the presence of alkaline and alkaline earth elements (such as Li, Na, and Mg) segregated at the surface or at the metal-metal oxide interface [30]. In particular, in the case of heat treatment, Mg migrates at elevated temperatures depending on the initial Mg bulk concentration and the bulk Al grain size. Even though Mg oxide is basic, unlike Al2O3 it dissolves easily in neutral humid environments [31]. In the case of heat treatment at higher temperatures (>400 °C), the amorphous oxide may crack due to thermal expansion and nuclei of crystalline γ-Al2O3 can be formed. If the nucleation of crystalline alumina leads to a loss of cohesion of the oxide with the metal substrate, it may have a detrimental effect on the total adhesion [32,33]. The presence of Mg also promotes crystalline oxide growth, as it acts as a nucleant for γ-Al2O3. Different heat treatment conditions and Mg contents can form different oxides on the top surface, such as MgO, Al2MgO4 or Mg-doped Al2O3 [32]. Therefore, in Mg-rich alloys more complex oxides can be formed. Another characteristic of the aluminium substrate after rolling is the so-called near-surface deformed layer (NSDL). When the roll bites into the surface, it can initiate cracks in the oxide of the metal. During the exit of the roll, mill metal or intermetallics can stick to the roll surface and be picked up by the roll. In successive rolling cycles, these particles are re-deposited on the surface, where they can create more imperfections or initiate cracks in the oxide layer. Oxides are then lifted up by the roll and can be redeposited on other areas of the surface [34]. These processes create a layer on the surface of the
aluminium which is different from the underlying bulk [35][36][37][38][39]. This layer is known as the near-surface deformed layer. In their work, Fishkis and Lin [35] studied a non-heat-treatable aluminium alloy containing magnesium which had undergone different stages of hot rolling. By using a combination of wavelength-dispersive X-ray spectroscopy (WDS) and transmission electron microscopy (TEM), they could observe the different characteristics of the NSDL, which are summarized in Figure 9. The NSDL is formed by two different oxide zones: a thin continuous oxide layer and a subsurface layer containing oxide particle inclusions at the grain boundaries. Using TEM they saw that the aluminium grains in the subsurface were in the range of 0.04-0.3 µm and the small oxide particles around 25-500 Å. Moreover, cracks, voids and inclusions were observed in the layer too. The thickness of the NSDL varies between 1.5 and 8 µm depending on the rolled gauge thickness. Regarding the rolling process, the formation of the NSDL was seen for both hot [37,39,40] and cold [37,41] rolling. However, it was noticed [42] that during cold rolling the incorporation of oxides is lower. From an electrochemical point of view, it was seen by Afseth et al. [43][44][45] that there is a strong correlation between the presence of the NSDL and the susceptibility to filiform corrosion (FFC) for 3xxx and 5xxx aluminium alloys. The influence of the NSDL on FFC will be discussed in Section 6.2.

Cleaning Step

Before pre-treatment the substrate undergoes a cleaning step. The purpose of the surface cleaning is to remove residual oil, smut and surface oxides. For 5xxx alloys the main requirement for cleaning, besides removing the organic residues, is to ensure the removal of Mg-rich oxides, which were reported to have a detrimental effect on the wet adhesion of coatings [46,47]. Therefore the cleaning of 5xxx aluminium alloys is less straightforward than that of 6xxx [48]. The most used cleaning step for auto-sheets is a mixed-acid process. The process bath is a combination of sulphuric and/or phosphoric acid, sometimes with small additions of hydrofluoric acid, operated at 50-70 °C. This process is carried out either by spraying or by immersion [5]. For materials which are susceptible to a surface-active mode of corrosion, any cleaning process which removes the NSDL leads to an inherently more corrosion-resistant substrate. The thickness of the NSDL determines the amount of metal that needs to be removed [48].

Chemical Surface Pre-Treatment Options in Automotive

Pre-treatments for the automotive industry need to be fast and low cost, and aim to modify the chemistry of the top layer to increase adhesion and/or anti-corrosion performance.
The pre-treatments used in industry can be divided into three different groups according to the way they act:

• metal ions and inorganic molecules which react with or precipitate on the oxidized Al to form a mixed oxide
• coupling agents which promote adhesion
• anodised films, which modify the aluminium oxide

The first category includes difficult-to-reduce transition metal oxides, such as those of Hf, Ti, Zr and Ta. These form a very stable oxide in their highest oxidation state. Soluble and mobile precursors of these oxides are difficult to stabilize in aqueous solution, while peroxo complexes and acid fluorides of these elements exist at low pH [49]. Widely used in the automotive industry are conversion coatings based on titanium fluoride or a mixture of titanium and zirconium fluoride. The processing baths are based on fluorotitanate (H2TiF6) and fluorozirconate (H2ZrF6) solutions. The treatments can be carried out by immersion, by spray or by no-rinse processes [25]. The advantage of using Ti-, Ti/Zr- or Zr-oxides is mostly due to their fast and simple application, the possibility to "dry-in-place" and the low temperature of operation [50]. Moreover, it was seen from pull-off tests that the presence of a Ti/Zr-based conversion coating enhances both the corrosion performance and the adhesion of the substrate [51]. In order to obtain even higher adhesion to the underlying substrate and to attain a homogeneous coating, organic additives can be added to the conversion baths [50]. The first additive used in early studies was poly(acrylic acid) (PAA) [52][53][54]. Other additives such as phenol phosphate [55], silanes [56], polypyrrole [57], chelating agents such as amino trimethylene phosphonic acid (ATMP) [58] and polyphenols like tannic acid (TA) [59] have been added to conversion baths as well. In all the mentioned cases the performance of the Ti/Zr-based conversion coating was increased by the presence of the organic additives.
Among coupling agents, silanes are used the most. The role of coupling agents is to improve the degree of cross-linking in the interface region to obtain improved chemical bonding. The functionality of the silane can interact to form chemical bonds with both the substrate and the adhesive. Silane coupling agents have the form R-SiX3, where R is an organic functional group and X is the hydrolysable group [26]. An example of how the silane binds to the metal, with the formation of a covalent bond, is shown in Figure 10 (a simplified reaction scheme is also sketched at the end of this section). The advantage of silanes is that they are simple and stable, mostly due to their covalent cross-linked structure [60]. Moreover, in addition to covalent bonding, other factors which have been proposed for the effectiveness of silanes include the improvement in surface wettability and the capacity of silane layers to deform and relieve internal stresses [61]. However, among the drawbacks of silanes is their difficult storage, which leads to a relatively short shelf-life [62]. Organophosphonic-acid-based coatings have been used as coupling agents as well, owing to their excellent properties. It has been shown that organophosphonic acids form very stable monolayers on aluminium alloys covered with a thin oxide film [63][64][65][66][67]. It was seen that the presence of phosphonate monolayers enhances the adhesion to the aluminium oxide [68]. Moreover, their bi-functionality enables the binding of the phosphonic acid or its anion to the oxide surface, while the other group can react with the organic phase of the adhesive or coating [69]. Organophosphonic acids can be applied on the surface via dip coating or spraying [70]. Another form of pre-treatment consists of modifying the aluminium oxide by forming anodised films. Thin anodised films are created by AC- or DC-powered electrolytic processes. The oxide film is formed by an amorphous barrier layer, which gives resistance to corrosion, and a porous top which provides adhesion to adhesives or primers [5]. Anodising has been performed with both sulphuric and phosphoric acid baths or boric-sulfuric acid baths [71]. The performance of the treatment used is also a function of the adhesive used. Correira et al. [72] performed an experimental study on single lap joints pre-treated with sulfuric acid anodizing and boric-sulfuric acid anodizing on aluminium-to-aluminium joints, using two similar structural adhesives for aerospace from different manufacturers. The results have shown that the optimal surface treatment is different for each type of adhesive, leading to differences in mechanical behaviour.

In automotive, the use of thinner anodised films (thin film anodising, TFA) prior to bonding was developed for the second generation of the Mercedes CLS. For the low-volume Lotus Elise and Opel Speedster sports cars, sulphuric acid anodising (SAA) of AA6060 aluminium was employed. The oxide formed in the SAA has a thickness of about 5 µm, which provides both adhesion and good corrosion resistance of the uncoated structure [25]. The advantage of using thin anodised films over other pre-treatments is that, as they consist purely of aluminium oxide, they represent an environmentally friendly alternative to transition-metal-based pre-treatments. Moreover, the thickness and the morphology of the finished oxide can be easily controlled through the range of potentials applied. However, the drawbacks include higher operation costs and low-volume requirements.
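As a simplified sketch of the silane binding mentioned above (Figure 10), the hydrolysis and condensation steps commonly assumed for a generic silane R-SiX3 on a hydroxylated aluminium surface can be written as:

R-SiX3 + 3 H2O → R-Si(OH)3 + 3 HX (hydrolysis of the X groups)

R-Si(OH)3 + HO-Al(surface) → R-Si(OH)2-O-Al(surface) + H2O (condensation with a surface hydroxyl)

The remaining silanol groups can condense with neighbouring silane molecules, which is the origin of the covalent cross-linked siloxane structure referred to above. The exact stoichiometry depends on the particular silane and surface; the scheme is given here only as a generic illustration.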
The Effect of the Stamping Lubricant

After the pre-treatment process, a stamping lubricant is usually applied on the metal substrate to improve the material formability and to protect the substrate prior to bonding. The amount of oil applied is approximately 0.9 g/m2. However, if the oil used is not a dry-film lubricant, there will be an inhomogeneous distribution on the surface due to run-off. Therefore, a higher amount of oil may be present in some areas [73]. Lubricants are not necessarily removed in the stamping plants and, therefore, it is important that the adhesives are compatible with the processing lubricants: the adhesive must be able to either displace or absorb any applied lubricant [5]. Several studies have been performed to analyse how the epoxy adhesive may accommodate the oil in order to produce strong interfacial bonds with the substrate [74][75][76]. Debski et al. [74,75] proposed that, in the case of an apolar oil, there is thermodynamic displacement of the oil by the epoxy followed by absorption by diffusion. Ogawa et al. [76] found that the oil layer disappears in the first steps of curing, as the diffusion of the oil into the adhesive increases abruptly with temperature. Therefore, according to their study, the presence of oil does not affect the performance of the joint. According to Greiveldinger et al. [77], the oil diffuses over several hundred microns during cure (i.e., over the typical thickness of the joint), while still leaving some residues at the interface. Moreover, it was found that the crosslinking of the adhesive only starts when the oil diffusion is complete and, consequently, the oil does not affect the kinetics or mechanism of adhesive cure. Different studies have been performed on the effect of the lubricant on the mechanical performance. Zhang et al. [78] studied the strength of adhesive joints of AA6111/high-strength steel (HSS) with various lubricant masses and observed that increasing the amount of lubricant slightly decreases the joint strength. Zheng et al. [79] studied the effect of a hydrophobic lubricant on adhesive joints of different aluminium alloys. From the strength point of view, they found that below a certain value (2.21 g/m2) the lubricant itself had no effect on the strength, while above this value the oil decreased adhesion. Moreover, the lubricant itself increased the corrosion performance of the joint, protecting the substrate from possible electrochemical reactions.

Adhesion Theory

The mechanism behind adhesion is not completely clear. Several theories have attempted to describe this phenomenon. However, no single theory gives a comprehensive mechanism of adhesion; rather, each appears more applicable to specific substrates and applications than to others. The most common theories are described in the following sections. The most relevant theories to explain the bonding between the metal substrate and the epoxy are the mechanical and the adsorption theory. A schematic representation of the different adhesion theories is illustrated in Figure 11.
Mechanical Theory

According to this theory, the degree of adhesion of a bonded joint is directly linked to the porosity and roughness of the substrate. The adhesive must fill the cavities of the adherends' surface in order to achieve an intimate contact referred to as mechanical interlocking. Experimental results show an increase in joint strength after mechanical roughening of the surface, supporting this theory [80]. It was seen that in the case of anodized aluminium substrates, when a porous open structure is formed, the measured adhesion strength was even higher [81][82][83] than for a non-anodized substrate. Nonetheless, when the roughness is excessive there can be incomplete wetting of the surface, and voids may be created which can act as stress concentration points initiating failure [26]. Criticisms of this theory are based on the fact that good adhesion can nevertheless be obtained on a smooth surface [84]. Boutar et al. [85] studied the effect of surface roughness on the strength of single lap joints. It was seen that changes in the roughness are not enough to explain a change in strength, and other factors such as the physical and chemical properties of the interface need to be taken into account as well.

Adsorption Theory

According to the adsorption theory, macro-molecules of the mobile phase are adsorbed onto the substrate by forces which range from weak dispersion forces to chemical bonds. As a consequence, an "interface" is formed. The typical strengths of the different kinds of interactions are shown in Table 3 [6].

Van der Waals Interactions

Van der Waals forces are the ones responsible for physical adsorption. These involve attractions between permanent dipoles and induced dipoles and are of three types: London, Keesom and Debye interactions. Even if these interactions are weak (in comparison with chemical bonds), as they occur between any two molecules in contact, they contribute to all adhesive bonds. By measuring the contact angle θ of a probe liquid of surface tension γL, it is possible to assess the van der Waals forces and to predict the stability of the joint by calculating the thermodynamic work of adhesion, for example via the Young-Dupré relation W_A = γL(1 + cos θ) [12].

Chemical Bonds

Chemical bonds are considered primary bonds, in contrast to the physical interactions, which are called secondary force interactions. This nomenclature derives from the strength of the interactions, as can be seen from Table 3. The formation of a chemical bond depends on the reactivity of both the adhesive and the substrate. Different types of chemical bonds can be formed, differing in strength. The covalent bond is usually the strongest and the most durable. The presence of covalent bonds was found in some studies on the action of the organosilane promoters used to bond epoxies to metal and glass surfaces [86]. It has long been suggested that the effectiveness of silanes is partially associated with their capability to form a covalent bond between the silane and the metal surface [87][88][89]. Adhesives can also bond to substrates in a different manner. It was shown by different studies [90][91][92][93][94][95][96] that when a carboxylic acid is in contact with a metal oxide substrate, a carboxylate bond (COO-) is formed, the acid bonding ionically with the substrate. Brand et al. [97] studied the interaction of different carboxylic-acid-based model compounds with differently prepared aluminium substrates via infrared reflection absorption spectroscopy (FTIR-RAS).
Their study shows that all the carboxylic acid groups are deprotonated to form a carboxylate bonded to the alumina substrate.

Acid-Base Theory

Fowkes [98] was the first to consider that the interaction between two materials has two main contributions: a contribution from dispersion interactions, in the form of a geometric-mean relationship, and a contribution from acid-base interactions. The acid-base interactions are considered the most important interactions which exist across the metal-polymer interface. This theory is based on the interaction between an acceptor and a donor of an electron pair. In acid-base interactions both covalent and non-covalent factors are involved [99]. An acceptor is a molecular system which has unoccupied levels and an affinity for electrons. A donor, on the other hand, is a system which has a lone electron pair. According to Brønsted, an acid is a proton donor and a base is a proton acceptor. These reagents may be either molecules or ions. The transfer of a proton is referred to as a protolytic reaction, and it is reversible: since the proton transfer can also occur in the reverse direction, the reaction products can act as an acid and a base relative to each other. The acid-base properties of substances depend on the thermodynamics of the protolytic reactions. An example of this is HCl, with H+ the acid and Cl- the conjugate base [84]. The Lewis definition of acid-base interactions (1923) states that a basic substance is one which has a lone pair of electrons which may be used to complete the valence shell of another atom (the acid), and that an acid substance is one which can employ a lone pair from another molecule (the base) in completing the valence shell of its own atoms [100]. The hydrogen bond is a sub-class of the Lewis acid-base interactions. A hydrogen bond can exist between two electronegative atoms or groups, of which one has a hydrogen atom covalently attached to it. A larger electronegativity of the attached atom results in more electronic charge being withdrawn from the proton. The H is therefore charged more positively and can form a stronger hydrogen bond. A molecule which contains a hydroxyl (OH) group can act as a base through the O or as an acid through the H [101].

If the oxide surface of a metal is exposed to the environment or immersed in aqueous solutions, it forms hydroxyl groups due to the interaction with water molecules. When exposed to an aqueous solution, the surface hydroxyl groups remain undissociated if the pH of the aqueous solution is the same as the isoelectric point (IEP) of the oxide. If the pH is less than the IEP, the surface will acquire a positive charge:

-MOH + H+ → -MOH2+

If the pH is greater than the IEP, the surface will acquire a negative charge:

-MOH → -MO- + H+

In the first case, the surface species -MOH2+ is a Brønsted acid because it is a proton donor; -MO- is a Brønsted base because it is a proton acceptor. Therefore, -MOH is amphoteric, as it is both an acid and a base [102]. Using the Lewis definition of acids and bases, the acid sites on metal oxide surfaces are the exposed metal cations MδS+ (with δS being the partial charge), while the basic sites are the surface oxygen anions OδS-. Therefore, a metal oxide surface can be seen as a sequence of Lewis acid-base and Brønsted acid-base sites [101]. What is present on the surface depends on the metal oxide and on the preparation of the oxide itself [101,[103][104][105]. By using photoelectrochemistry, XANES and XPS, Lopez et al.
[106,107] studied the basicity of differently pre-treated aluminium surfaces. Formic acid and pyridine were used to probe, respectively, the Brønsted basic and acid surface sites, and it was observed that a Mg-containing oxide has the highest basicity while the alkaline-detergent degreased surface (sodium silicates and phosphate) has the lowest.

The acid-base theory has been used to identify the type of interactions which occur between an epoxy and a substrate. Nakemae et al. [108] studied an epoxy resin/4,4'-diaminodiphenylmethane (DDM) curing agent system on oxidized aluminium via XPS, FTIR and contact angle measurements. It was seen that the curing agent (DDM) is preferentially adsorbed onto the Al2O3 substrate, while the epoxy resin was adsorbed on Al2O3 particles whose surface was covered with the epoxy. From their study they concluded that the interaction at the cured epoxy resin/oxidized aluminium interface is an acid-base interaction between the amino groups of the curing agent and the acid sites of the oxidized aluminium. Similar results were found by Hong et al. [109], who studied the behaviour of an epoxy/amidoamine system on iron, aluminium and zinc oxides and saw that the amidoamine curing agent is preferentially adsorbed on the three metal oxide surfaces due to the acid-base interaction between the metal surface and the curing agent. Curing agents are therefore often used as model epoxy systems to study the interactions between the epoxy and the metal oxide. Abel et al. [110] studied the interaction between diethanolamine (DEA) and two substrates, grit-blasted aluminium and aluminium coated with an organosilane adhesion promoter, gamma-glycidoxy propyl trimethoxy silane (GPS), by X-ray photoelectron spectroscopy (XPS) and time-of-flight secondary ion mass spectrometry (ToF-SIMS). It was found that on oxidized aluminium, DEA interacts with the surface via the formation of a hydrogen bond or via a donor-acceptor interaction with an aluminium atom. When deposited onto GPS-coated aluminium, DEA undergoes two types of interactions: formation of a covalent bond by nucleophilic addition, or a Brønsted-type interaction between the nitrogen of DEA and the silanol functionality of hydrolyzed GPS.

Diffusion Theory

The diffusion theory suggests that adhesion is due to an interdiffusion of molecules in and between the adhesive and the adherend. This theory is applicable when both adherend and adhesive are polymers with relatively long chain molecules capable of moving. An interphase layer will be formed, typically in the thickness range of 10-1000 Å. As no physical discontinuity exists in the interphase area, there will be no stress concentration. To interpret diffusion bonding, the cohesive energy density (CED) can be used, through the solubility parameter defined in Equation (1):

δ = (E_coh / V)^(1/2)    (1)

where E_coh is the amount of energy required to separate the molecules to an infinite distance, V is the molar volume, and δ is the solubility parameter. Bond strength is maximized when the solubility parameters of adhesive and adherend are similar.
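A minimal numerical sketch of the solubility-parameter matching criterion in Equation (1) is given below; the cohesive energies and molar volumes are illustrative order-of-magnitude numbers, not measured data for specific polymers.

```python
import math

# Hildebrand solubility parameter, delta = sqrt(E_coh / V), as in Equation (1).
# E_coh in J/mol, V in m^3/mol; the result is converted to the usual MPa^0.5.
def hildebrand_delta(e_coh: float, v: float) -> float:
    return math.sqrt(e_coh / v) / 1.0e3  # Pa^0.5 -> MPa^0.5

# Hypothetical adhesive and adherend polymers (illustrative values only):
delta_adhesive = hildebrand_delta(3.4e4, 1.0e-4)  # ~18.4 MPa^0.5
delta_adherend = hildebrand_delta(3.1e4, 0.9e-4)  # ~18.6 MPa^0.5

mismatch = abs(delta_adhesive - delta_adherend)
print(f"delta mismatch = {mismatch:.2f} MPa^0.5 "
      "(a small mismatch favours interdiffusion and diffusion bonding)")
```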
There is only a small number of polymer pairs for which this interaction is possible. Schreiber and Ouhlal [111] annealed a number of polymer pairs in contact for up to 72 h at 60-160 °C and found an increase in adhesive strength for polypropylene/linear low-density polyethylene and polystyrene/PVC, but not for polystyrene/PMMA and PVC/polyvinylidene chloride. Their data show that there are significant contributions to bond strength coming from diffusion when dispersion forces and acid-base interactions are favourable at the interface [112].

Electrostatic Theory

This theory proposes that there is an electrostatic effect which joins adherend and adhesive. An electron transfer takes place between the adhesive and the adherend as a result of their unlike electronic band structures, and an electric double layer is formed between adhesive and adherend. According to this theory, adhesives and adherends which contain polar molecules or permanent dipoles are most likely to form electrostatic bonding. As polymers are insulators, it seems difficult to apply this theory to adhesives [113]. However, this theory was supported by the study of Randow et al. [114], who investigated the adhesion of some commercial films used in food packaging to glass, steel and polyolefin substrates. The films were made of PVC. It was seen that after separation there was a residual electrical charge on both film and substrate, and that all the films showed sparking when repeatedly applied to glass [112].

Environmental Degradation of Adhesive Joints

One of the main drawbacks of adhesive joints is their long-term durability when exposed to environmental conditions. In the following sections, the effects of water, corrosive environments and external stress on the durability of adhesive joints will be discussed.

The Effect of Water

Water can enter the bonded system by bulk diffusion through the adhesive, by interfacial diffusion along the interface between the adhesive and substrate, and by capillary action through cracks or defects in the adhesive or conversion layer. Moreover, it can affect the system by either modifying the adhesion properties or by displacing the adhesive at the interface, causing failure [12].

The Effect of Water on the Adhesive

All adhesives absorb water, albeit to different extents. The diffusion of water in polymers follows Fick's law,

F = -D ∂c/∂x

where F is the flux or rate of transfer per unit area, c is the concentration of the diffusing substance, x is the space coordinate measured normal to the section, and D is the diffusion coefficient. Furthermore, the diffusion coefficient of water in structural adhesives follows the Arrhenius equation, and therefore the rate of diffusion increases strongly with temperature [12].
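The combination of Fickian diffusion and an Arrhenius temperature dependence of D can be made concrete with the standard series solution for water uptake in a plane adhesive layer exposed on both faces (Crank's solution for a plane sheet). The sketch below is a minimal illustration; D0, Ea and the bondline thickness are purely illustrative values, not data for a real adhesive.

```python
import math

# 1D Fickian water uptake in an adhesive layer of thickness L exposed on
# both faces: fractional mass uptake M(t)/M_inf from the standard series
# solution for a plane sheet. D follows an Arrhenius law in temperature.
# All numerical parameters are illustrative, not measured values.

def diffusivity(T_celsius: float, D0: float = 1.0e-6, Ea: float = 40.0e3) -> float:
    """Arrhenius diffusion coefficient in m^2/s (hypothetical D0 and Ea)."""
    R = 8.314
    return D0 * math.exp(-Ea / (R * (T_celsius + 273.15)))

def fractional_uptake(t: float, L: float, D: float, terms: int = 50) -> float:
    """M(t)/M_inf for a plane sheet of thickness L (both faces exposed)."""
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += (8.0 / (k * math.pi) ** 2) * math.exp(-(k * math.pi / L) ** 2 * D * t)
    return 1.0 - s

L = 0.2e-3  # 0.2 mm bondline (illustrative)
for T in (20, 50):
    u = fractional_uptake(86400.0, L, diffusivity(T))
    print(f"{T} C: M/M_inf after 1 day = {u:.2f}")
```

Even a modest temperature increase raises D by orders of magnitude, which is why hot-wet exposure is the critical condition in the durability studies discussed below.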
Once it reaches the joint, there are several ways in which the water can affect the adhesive. These, reviewed by Comyn (1983), include the following:

• causing plasticization, altering the properties of the adhesive in a reversible way
• causing the adhesive to crack, craze or hydrolyse; in this case the properties of the adhesive are altered in an irreversible manner
• attacking the adhesive-adherend interface
• causing stress due to swelling

Weitsman [115] studied the stresses introduced into an adhesive joint by water swelling. In the case of an epoxide adhesive, the joints were estimated to swell by 3% in water, and the stresses were localized at the edges of the joint. However, due to drying, the stress concentration decreased with time, suggesting that swelling does not contribute in the long term. Morsch et al. [116] studied the effect of water uptake on epoxy-phenolic coatings. It was seen that the water uptake is not a completely reversible process. In fact, in the case of pre-soaked and dried coatings, exposure to a humid environment led to a greater amount of water being absorbed. This was interpreted as the result of a water-plasticized macromolecular relaxation mechanism which resulted in the formation of nanoscale hydrophilic regions in the epoxy during immersion. Xu and Dillard [117] exposed electrically conductive adhesives to saturated air at 85 °C for up to 50 days and then dried the samples at 150 °C. They measured water absorption and mechanical properties. Part of the joint strength was recovered, indicating that some reversible processes take place and that irreversible interfacial phenomena have a greater impact on the failure of the joint than the bulk properties of the adhesive.

The Effect of Water at the Adhesive-Metal Oxide Interface

While water uptake in the bulk follows Fick's law, the speed of water diffusion at the interface is higher; this is attributed to capillary diffusion occurring between the adhesive and the adherends [118].

The water penetration process at the adhesive/metal oxide interface was analyzed by Gledhill and Kinloch [119] on the basis of thermodynamics. They defined the work of adhesion as the energy required to separate a unit area of two phases which form an interface. In an inert medium the work of adhesion is expressed as

W_A = γ_A + γ_S - γ_AS

where γ_A and γ_S are the surface free energies of the adhesive and substrate, respectively, and γ_AS is the interfacial free energy. In the presence of a liquid, the work of adhesion is

W_AL = γ_AL + γ_SL - γ_AS

where γ_AL and γ_SL are the interfacial free energies of the adhesive-liquid and substrate-liquid interfaces, respectively. For an adhesive-substrate interface in an inert environment, the work of adhesion W_A always has a positive value, indicating thermodynamic stability of the interface. However, in the presence of a liquid, W_AL may have a negative value, indicating that the interface is now unstable. Some values of the work of adhesion for different adhesive/substrate combinations are shown in Table 4 [26]. The thermodynamic approach shows that high-energy substrates, such as metal oxides, are very susceptible to displacement of the adhesive by water. However, it is not straightforward to give a complete description of the metal/polymer adhesive interface. This is due to the tendency of the metal oxides to hydrate when exposed to the atmosphere.
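The sign check behind this stability argument can be written out explicitly. In the following minimal sketch, all surface and interfacial free energies are hypothetical numbers chosen only to reproduce the qualitative behaviour in Table 4 (positive work of adhesion in dry air, negative in water); they are not literature values.

```python
# Thermodynamic displacement check from the two relations above:
#   W_A  = gamma_A  + gamma_S  - gamma_AS   (inert medium)
#   W_AL = gamma_AL + gamma_SL - gamma_AS   (in a liquid)
# A negative W_AL predicts that the liquid can displace the adhesive.
# All values below are hypothetical, in mJ/m^2.

gamma_A, gamma_S, gamma_AS = 45.0, 500.0, 315.0  # adhesive, oxide, interface
W_A = gamma_A + gamma_S - gamma_AS               # dry: +230 -> stable

gamma_AL, gamma_SL = 40.0, 135.0                 # same interface under water
W_AL = gamma_AL + gamma_SL - gamma_AS            # wet: -140 -> unstable

print(f"W_A  = {W_A:+.0f} mJ/m^2")
print(f"W_AL = {W_AL:+.0f} mJ/m^2 -> "
      + ("interface stable in water" if W_AL > 0
         else "water is predicted to displace the adhesive"))
```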
When water interacts with an aluminium substrate, it will form hydrated oxides according to reactions of the type

Al2O3 + H2O → 2 AlOOH

AlOOH + H2O → Al(OH)3

If the hydration occurs under an adhesive, the resulting increase in volume may induce high stresses and crack growth [25]. Furthermore, to bond tightly with the substrate, the adhesive must penetrate through the layers of water molecules to reach the hydroxyl groups connected to the metal surface. Water can act both as an acid and as a base. Thus, when a weak acid or basic group is present at the interface, water can easily penetrate and lead to the eventual destruction of the adhesive/substrate interface. The approach mentioned above is only valid when the interfacial forces are van der Waals forces and not stronger primary forces. Therefore, pre-treating the surface and creating primary bonds with the structural adhesive leads to an improvement in the adhesive performance [120]. The stability in water of organic coatings of PAA and PMMA on aluminium oxide was studied by Pletincx et al. [96,121] by means of in-situ techniques such as ambient pressure X-ray photoelectron spectroscopy (APXPS) and attenuated total reflectance Fourier transform infrared spectroscopy in the Kretschmann geometry (ATR-FTIR Kretschmann). In both cases it was seen that when water reaches the interface, the water molecules deprotonate the carboxylic acid, forming carboxylate ions. The carboxylate ions then react with the surface hydroxyl groups of the aluminium surface to form ionic bonds and hydroxide ions. However, after a certain exposure time it was seen that the interfacial interactions are destroyed by water, which eventually leads to macroscopic delamination [122].

Abrahami et al. [123] studied the adhesion of an epoxy resin as a function of the surface chemistry of barrier-type anodic oxides prepared in sulfuric acid (SAA), phosphoric acid (PAA), and mixtures of phosphoric-sulfuric acids (PSA). It was seen that bonding stability under wet conditions is highly influenced by the surface chemistry. The wet adhesion strength increases with the hydroxyl concentration at the aluminium (oxide) surface, indicating that interfacial bonding is established through these surface hydroxyls. In this study the presence of phosphate and sulfate anions was not found to contribute to bonding with this type of adhesive.

Adhesive Joint Strength in Humid Environments

Brewis, Comyn et al. [124][125][126] exposed aluminium joints with different structural adhesives to 50 °C and 100% RH. It was observed that the strength decreases significantly. However, when the same samples were subsequently exposed for the same amount of time to 50% RH and 50 °C, the strength would partially recover. Therefore, it seems that the mechanism responsible for the decrease in joint strength in humidity is at least partially reversible. Similar results were observed by Mubashar et al. [127] on AC-anodized single lap joints of AA7075. The weakening of the joints is also a function of the humidity to which they are exposed. Although joints decrease in strength at high humidity (80-100% RH), it has frequently been observed that joints can withstand exposure to lower humidity (below 50%) for long periods without weakening. Brewis et al. [124] found that exposing aluminium adhesive joints for 10,000 h at 45% RH and 20 °C did not significantly decrease the joint strength. Gledhill et al. [128] proposed that there is a critical concentration of water in the adhesive (1.35 g/100 g) above which the joints will be negatively affected. Wang et al.
[129] studied the lap-shear strength of adhesive-bonded 5052 aluminium alloy exposed to neutral salt spray for up to 1200 h. It was seen that the lap-shear strength of the joints decreased sharply in the first 240 h. The main reason for this was associated with the strongly polar water molecules, which dissolve the hydrogen bonds responsible for the adhesion. Few studies have been performed on the effect of the Ti/Zr conversion coating on the strength of adhesive joints exposed to humid environments. Critchlow and Brewis [130] studied the effect of zirconium-based pre-treatments on AA5251 single lap shear joints immersed in water at 60 °C, and they observed an increase in strength compared to degreased-only or grit-blasted substrates. Lunder et al. [131] studied the effect of a Ti/Zr pre-treatment on the durability of epoxy-bonded AA6060 aluminium joints. According to their findings, the Ti/Zr-based pre-treatment improved the adhesion of the epoxy-bonded aluminium, but it still performed worse than a chromate pre-treatment.

The Effect of Corrosion on the Durability of the Adhesive Joints

H2O can diffuse directly through the organic coating, but most adhesives are a good barrier to ion transport due to their low dielectric constant and the small free volume of their highly cross-linked network. However, both defects and cut-edges can allow the ions to reach the polymer/substrate interface.

The corrosion protection performance also depends on the adhesion of the coating layer to the substrate. When the bonding between the coating layer and the substrate is strong, so that the penetration of water into the interface is difficult, the corrosion does not develop fast. However, when the bond is weak, the corrosion due to the presence of water/ions can easily propagate along the interface [120,132,133].

Wu et al. [134] studied the effect of long-term salt spray (50 g/L salt solution) on the strength of Zr-Ti coated and bare hem-flange-bonded lap-shear aluminium joints. It was seen that for the first 130 h the Zr-Ti coating protected the aluminium substrate from electrochemical reactions, thereby reducing the joint degradation. However, after 1400 h, the Zr-Ti coating provided worse protection than the bare aluminium.

For adhesive joints, the most relevant form of corrosion, which will be discussed in the following section, is filiform corrosion. However, even if not explicitly reported in the literature, it is not excluded that other forms of corrosion with slower kinetics may affect the durability of the system.

Filiform Corrosion

Filiform corrosion (FFC) is a form of atmospheric corrosion which occurs under organic coatings in the form of narrow interconnected thread-like filaments [135]. FFC was observed for the first time in 1944 on steel, while on aluminium it was first observed in the late 1960s, where it occurred around rivet heads and at the edges of the aluminium skins of certain aircraft exposed to aggressive tropical environments [136,137].
In the case of polymer layers on non-conductive oxide surfaces, as on Al alloys, filiform corrosion is due to anodic delamination. Anodic undermining corresponds to a situation in which the loss of adhesion is caused by the anodic dissolution of the substrate; the metal at the edge of the filament is therefore anodic. While on steel anodic delamination usually occurs under an applied potential, aluminium substrates are particularly prone to anodic undermining. The main environmental factors that are crucial for the initiation and proliferation of this form of corrosion were found to be relative humidity (above 80%), the presence of aggressive ions such as Cl- and defect sites in the coatings. The filaments consist of an "active head", which is filled with liquid, and a "tail" of dry corrosion products, and they may reach lengths of several centimetres [138].

The primary driving force for filament advancement is thought to be an oxygen concentration cell which forces the anodic metal dissolution reaction (Equation (2)), in the case of aluminium

Al → Al3+ + 3 e-    (2)

(with the cation present in solution as the hydrated complex [Al(H2O)6]3+), to occur at the leading edge, and the cathodic oxygen reduction (Equation (3)),

O2 + 2 H2O + 4 e- → 4 OH-    (3)

to occur at the trailing edge of the active head. Reactions (2) and (3) occur simultaneously. The metal therefore adopts a mixed potential relative to the electrolyte, known as the free corrosion potential, which lies between the equilibrium potentials of the couples present and is determined by their relative reversibility.

Williams et al. [138] described in their work the kinetics and the mechanism of FFC using a scanning Kelvin probe (SKP). A defect was formed on epoxy-coated aluminium alloys, which were subsequently exposed for some time to HCl vapour. The exposed samples were then placed in the SKP, which was maintained at 50 °C and 95% RH. The proposed initiation and propagation mechanism is explained here.

Initiation. When the adhesive joint is exposed to high humidity, several layers of water molecules form on the bare aluminium exposed by the scratch. As the transport of O2 is much easier through the thin electrolyte than through the polymer film, the reaction in Equation (3) will occur on the bare metal. On the other hand, the reaction in Equation (2) will occur where there is a deficiency of O2, thus at the metal/electrolyte/coating interface. Therefore, a local anode in which Al is dissolved will be formed. As the Al dissolution proceeds, the Cl- ions will migrate beneath the delaminated coating to preserve electroneutrality, and water will be drawn in by osmosis to produce an electrolyte droplet (Figure 12a).

Propagation. Once the corrosion process starts, it will try to propagate. The reaction described in Equation (2) occurs mostly at the leading edge. The cations created at the leading edge will then migrate towards the trailing edge and combine with the OH- anions generated by the reaction in Equation (3). Water-insoluble products will eventually precipitate. Cl- anions will keep moving towards the anodic leading edge, and all water-soluble ions and all liquid water molecules will be in the FFC active head. The Al(OH)3 corrosion products left behind will slowly lose water and convert into porous hydrated aluminium oxide. However, O2 will easily pass through Al2O3 and hence filament propagation will be maintained over considerable distances (Figure 12b).
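Combining the anodic and cathodic half-reactions in Equations (2) and (3), multiplied by 4 and 3 respectively to balance the electrons, gives the overall cell reaction driving the filament, and makes clear why the Al(OH)3 product mentioned above accumulates in the tail:

4 Al + 3 O2 + 6 H2O → 4 Al(OH)3

This is simply the balanced sum of the two half-reactions and involves no assumptions beyond those equations.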
Alloying elements, and especially the presence of electrochemically noble second-phase particles, play an important role in the occurrence of FFC. The intermetallic particles in aluminium alloys may act either as local cathodes or as anodes. Afseth et al. [44] studied the corrosion mechanisms of 2xxx series Al alloys, showing that the enrichment in Cu creates areas for cathodic reactions to occur. Therefore, when a filament propagates, it will use the intermetallic particles which are present at or near the surface to jump from particle to particle. The detrimental effect of Cu enrichment at the surface was also seen by Coleman et al. [139] on an AA6111 with high copper levels. Regarding the role of surface pre-treatments in protecting against filiform corrosion, Lunder et al. [131] studied the effect of a Ti/Zr pre-treatment on AA6060. In their study it was found that this did not provide significant protection against FFC, as the Ti-Zr oxide deposit did not inhibit the cathodic activity on the electrochemically active particles. Moreover, Coleman et al. [140] dissolved phenylphosphonic acid (H2PP) in a polyvinyl butyral coating deposited on AA6111-T4 and followed the FFC, after application of HCl in a penetrative coating defect, with the use of SKP. They saw that the presence of the H2PP inhibits the propagation of the filament by reducing the difference in free corrosion potential between the intact and scratched parts of the sample by up to 0.35 V (compared to a coating without H2PP).

The Effect of the Near-Surface Deformed Layer on Filiform Corrosion

It was demonstrated that there is a strong correlation between the presence of a NSDL and the susceptibility to corrosion, in particular FFC [139,141]. This is due to the fact that the NSDL is more reactive than the bulk, therefore promoting rapid delamination. In their work, McMurray et al. [142] investigated, by use of SKP, the effect of the NSDL on AA6111. Moreover, they proposed a delamination mechanism, which is presented here. According to their work, the delamination process in the presence of a NSDL proceeds in two different phases. The first phase consists of intergranular anodic attack. However, as the NSDL presents a high density of grain boundaries and a small grain size, this results in the consumption of the NSDL. The propagation of the filament can then be divided into four different areas, as shown in Figure 13a:

• Area 1, not corroded, with the NSDL still present
• Area 2, the anodic site, with dissolution of the NSDL
• Area 3, in which the NSDL is completely dissolved and the bulk aluminium acts as a cathodic O2 reduction site
• Area 4, comprising a dry porous tail as described in the previous section.

It was shown that, in the presence of a NSDL, together with differential aeration (which is still the most important driving force for FFC to occur), an additional driving force is represented by the difference in potential between the NSDL and the bulk material, which leads to anodic activation.

In the second phase, the FFC proceeds further. However, as the NSDL has already been consumed, the bulk metal is no longer cathodically protected by an anodically active NSDL. Therefore, a successive-pitting FFC will be present (Figure 13b). Again, in this case, the driving force is the differential aeration arising from the ease with which atmospheric O2 diffuses through the porous corrosion products.
The Effect of Static and Dynamic Stresses on the Durability of the Adhesive Joints

One of the main advantages of using adhesive joints over mechanical fastening is the more uniform stress distribution over a large area. However, this does not imply that loads in adhesive joints are uniform or well understood. Applying a stress will cause an adhesive bond to degrade at a faster rate than an unstressed bond, especially if the bond is subject to high loads for prolonged periods. Adhesively bonded joints can be exposed to both static and dynamic stresses. These stresses are not only due to external loads; they can in fact also originate from adhesive shrinkage (after curing), adhesive swelling (water absorption) or from a thermal mismatch between adhesive and adherend. Moreover, stress can also accelerate other processes, such as the rate of diffusion of moisture into the joint [143]. The effect of stress on durability can be determined from the time the joint takes to fail under a given load, or from its condition after a load has been imposed for a certain time. Stress-durability testing of adhesively bonded joints is a common method to classify the performance of the joint. Marceau et al. [144] compared the performance of AA2024-T3 bonded joints under static and cyclic loads in different controlled environments. It was seen that an increase in temperature led to a shortened life of the lap-shear joints subjected to fatigue. Ashcroft et al. tested different CFRP (carbon fibre reinforced polymer)-epoxy lap-straps under fatigue in different environments. They showed that samples in hot-wet conditions experience a significant reduction in the fatigue threshold [145]. Some samples were also conditioned at high humidity until saturation. For samples tested wet at 22 °C there was no change in the fatigue threshold, while samples tested at 90 °C experienced a large reduction in the fatigue threshold in both wet and dry conditions. The effect of environmental exposure on the fatigue of mild steel joints bonded with different adhesives has been investigated as well [143]. Double lap-shear joints were kept under different loading and environmental conditions for 8 years. It was seen that the choice of adhesive had a large impact, as some adhesives showed excellent durability while others were adversely affected by the environment. The adhesives which showed better performance were those cured with polyamide hardeners and those with a high initial strength and Young's modulus. So et al. [146] studied the fatigue response of epoxy-bonded poly(methyl methacrylate) (PMMA)-to-PMMA and PMMA-to-aluminium joints. It was seen that the PMMA-to-aluminium joints were more sensitive to temperature than the PMMA-to-PMMA joints, and that for both systems the applied stress had a greater influence than the loading frequency. Small and Fay [147] studied the creep of steel lap joints in high humidity. A single-part, heat-curing epoxy was used as the adhesive, and the joints were subjected to a creep test for three and a half years at 70 °C and 45 °C at 95% RH. At high temperature, the time to failure was ten times shorter in humid conditions than in dry conditions. Under dry conditions the failure was associated with a combination of creep of the adhesive and peeling of the adhesive from the adherends. Moreover, it was seen that the way the joints are assembled has an influence on the joint durability [148,149].
A number of techniques have been used to model fatigue and creep in bonded joints. The best approach is to combine a load-life approach with monitoring techniques, such as embedded sensors, and with modelling analysis such as finite element analysis (FEA) [150].

Conclusions

Adhesive joints are an important joining technique in the automotive industry, as their introduction into car manufacturing can be viewed as one of the main enabling technologies for the production of all-aluminium car body structures. One of the main issues related to a wider use of adhesive joints is the durability of these systems when exposed to service conditions where water, corrosive environments and external stresses are present. This work presents the main parameters influencing the durability of adhesive joints. The different preparation phases which lead to the final adhesive system were analysed. It was seen how the chemistry of the top surface, which has an important role in the final adhesion, changes in each step. In particular, a wide variety of surface pre-treatments can be applied to the substrate to obtain a beneficial effect on adhesion/corrosion protection. The way in which the substrate interacts with the epoxy is explained by the different classical adhesion theories. In particular, the adsorption theory is the most relevant for polymer/metal oxide bonding, and it was used to describe a range of interactions between the adhesive and the substrate. It was seen how those interactions can deteriorate in the presence of water/moisture or corrosive ions. Water can in fact reach the epoxy/substrate interface, displacing the adhesive and causing interfacial failure. Moreover, water can be absorbed by the adhesive and weaken it. The presence of ionic species, together with a high moisture content, can lead to the corrosion of the substrate. In particular, adhesive joints are prone to filiform corrosion. The detrimental effect of water or corrosive ions appears to be accelerated by the presence of dynamic or static stresses.

Figure 1. Graphical outline of this chapter.
Figure 3. The structure and properties of an epoxy resin (DGEBA).
Figure 4. Mechanism of cure of primary or secondary amines.
Figure 5. Chemical structure of a polyamide.
Figure 8. Main features of the aluminium top surface after rolling. NSDL is the near-surface deformed layer which is created by the rolling process.
Figure 9. Schematic representation of the near-surface deformed layer. This schematic representation was created based on the theory of Fishkis et al. Data from [35].
Figure 10. Interaction between a silane coupling agent and the metal substrate.
Figure 12. Schematic diagram showing the process of initiation (a) and propagation (b) of FFC on aluminium. This schematic was based on the theory illustrated by Williams et al. Data from [138].
Figure 13. Schematic diagram showing the mechanism of FFC in the presence of a NSDL: filament propagation (a) and successive pitting (b). This schematic was based on the theory illustrated by McMurray et al. Data from [142].
Table 4. Values of work of adhesion for various interfaces in dry air and water. Data from [26].
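As a closing note on the thermodynamics underlying Table 4 and the displacement of adhesive by water discussed in the conclusions, the work of adhesion is conventionally written in the Dupré form; the expressions below are a standard reconstruction (the symbols are not defined in the excerpt and are introduced here):

    % Work of adhesion of an adhesive (a)/substrate (s) interface in dry air:
    W_{A} \;=\; \gamma_{a} + \gamma_{s} - \gamma_{as}
    % The same interface immersed in a liquid L (e.g., water):
    W_{AL} \;=\; \gamma_{aL} + \gamma_{sL} - \gamma_{as}

where the γ terms denote the relevant surface and interfacial free energies. For many polymer/metal-oxide pairs W_A is positive in air but W_AL becomes negative in water, so displacement of the adhesive by water is thermodynamically favourable; this is the behaviour that Table 4 tabulates.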
Grand Unification in High-scale Supersymmetry A constraint on masses of superheavy gauge and Higgs multiplets at the grand unification (GUT) scale is obtained from the gauge coupling unification in the case of high-scale supersymmetry. We found that all of the particles may lie around a scale of 10^16 GeV so that the threshold corrections to the gauge coupling constants at the GUT scale are smaller than those in the case of the low-energy supersymmetry. In addition, the GUT scale tends to be slightly lower when the gauginos are heavier and, thus, the proton decay rate via the X-boson exchange process is expected to be enhanced. Introduction Supersymmetric grand unified theories (SUSY GUTs) [1,2] are promising candidates of physics beyond the Standard Model (SM). Indeed, they are strongly motivated by an experimental observation which implies that the gauge coupling constants of the SM gauge groups are to be unified at a certain high-energy scale with good accuracy [3][4][5][6][7]. From the observation, the unified scale is estimated as M GUT ≃ 2 × 10 16 GeV. In the GUTs, new superheavy particles are supposed to appear around the scale, which make the couplings run together above the scale. These particles are naturally expected to have masses of the order of M GUT . They are, of course, beyond the reach of collider experiments, and there is little hope to search for them or to measure their masses directly. In Refs. [8,9], a way of constraining the masses of superheavy particles indirectly by requiring the gauge coupling unification is discussed and limits on the masses are presented. Later, by applying the same method with more accurate gauge coupling constants, the authors in Ref. [10] derive a more stringent constraint on the masses of the color-triplet Higgs boson, the adjoint Higgs bosons, and the X bosons in the context of the minimal SUSY SU(5) GUT [1,2]. They have found that while the masses of the adjoint Higgs and the X bosons are around 10 16 GeV, the mass of the color-triplet Higgs boson lies in the region of 3.5 × 10 14 GeV M H C 3.6 × 10 15 GeV, which is significantly below the GUT scale. This implies that the threshold corrections to the gauge coupling constants are still not small, even in the SUSY GUTs. In fact, the analysis is quite sensitive to the mass spectrum in the intermediate scale, such as those of the SUSY particles. In Ref. [10], all of the particles except for gauginos are assumed to be at 1 TeV. The gaugino masses are set to be around the electroweak scale with the GUT relation for the gaugino masses being assumed. Currently, on the other hand, the SUSY models with heavy sfermions have been widely discussed with a lot of attention [11][12][13][14][15][16][17][18]. Although such models originally have been discussed as a possible candidate of SUSY models from a theoretical point of view, now they are also supported by the latest results of the LHC experiments; no significant excess in the SUSY searches [19][20][21] and the discovery of the 126 GeV Higgs boson [22,23] both suggest that the SUSY breaking scale is somewhat higher than the electroweak scale. While this high-scale SUSY scenario is difficult to be probed at the collider experiments, it still has a lot of phenomenological consequences which might be checked in other experiments. Recent studies on the subject are given in Refs. [24][25][26][27][28][29][30][31][32][33][34][35][36]. As mentioned above, the GUT scale mass spectrum inferred from the indirect analysis presented in Refs. 
[8,9] is highly dependent on the SUSY mass spectrum. In particular, if the SUSY particles have masses much larger than O(1) TeV, previous results are expected to be changed significantly. In this Letter, therefore, we revisit the analysis in the highscale SUSY scenario. We carry out the calculation in the minimal SUSY SU(5) GUT with sfermions having masses well above the electroweak scale. We will see that while the constraint on the adjoint Higgs and X boson masses only differs from previous ones slightly, that on the color-triplet Higgs mass is found to be changed by more than an order of magnitude and actually improved in the sense that it may also be around the GUT scale. Interestingly enough, this result again indicates that the high-scale SUSY scenario is rather supported than the traditional low-energy SUSY models. This Letter is organized as follows: in Sec. 2, the high-scale SUSY model which we discuss below is presented, and its phenomenological aspects are briefly described. The mass spectrum of the superheavy particles in the minimal SUSY SU(5) GUT is also displayed there. In the subsequent section, we discuss the method of constraining the masses of the GUT-scale multiplets by means of the renormalization group equations (RGEs) as well as the threshold corrections of the gauge couplings. Then, we show some results in Sec. 4. Section. 5 is devoted to conclusions and discussion. Model and Spectrum Let us begin by presenting a high-scale SUSY model discussed in this Letter. We consider the particle content of the minimal supersymmetric Standard Model (MSSM). Then, the only assumption which we adopt here is that there exists a dynamical SUSY-breaking sector with the Kähler potential having a generic structure. Then, all of the scalar bosons except the lightest Higgs boson acquire masses of the order of the gravitino mass m 3/2 , while the lightest Higgs boson mass is fine-tuned to be m h ∼ 126 GeV. The higgsinos may in general have similar masses to the gravitino mass, though they might be suppressed if there are some extra chiral symmetries. The gaugino masses are, on the other hand, generated by the quantum effects [11,37] and, thus, suppressed by a loop factor compared with m 3/2 . There are two kinds of contributions to the gaugino masses; one is the anomaly mediation effect [11,37] and the other is from the higgsino-Higgs boson loop diagram. These two effects give rise to the gaugino masses as with representing the higgsino-Higgs boson loop contribution. Here, M a and g a (a = 1, 2, and 3) are the U(1) Y , SU(2) L , and SU(3) C gaugino masses and gauge coupling constants, respectively. We use the SU(5) normalization for the U(1) Y coupling, i.e., g 1 ≡ 5/3g ′ . Further, µ H and m A denote the higgsino and the heavy Higgs boson masses, respectively, which are assumed to be the same order of magnitude as sfermion masses. tan β is the ratio of the vacuum expectation values (VEVs) of the Higgs fields; tan β ≡ H u / H d . Since the higgsino mass is presumed to be around the gravitino mass, the higgsino-loop contribution L is also expected to be as large as m 3/2 . The values of the gaugino masses are, however, dependent on the relative phase between µ H and m 3/2 . In the following discussion we just assume the gaugino masses are lighter than the scalar masses by a loop factor, and regard them as free parameters. As mentioned in Introduction, this model has a lot of fascinating features from a phenomenological point of view. They are originally discussed in Refs. 
[11][12][13][14][15][16][17][18], and recent development is given in Refs. [24][25][26][27][28][29][30][31][32][33][34][35][36]. In these works, a typical scale for sfermion masses is taken to be O(10 2 -10 3 ) TeV, which explains the 126 GeV Higgs boson mass. With such heavy particles the SUSY flavor and CP problems [38] as well as the gravitino problem are considerably relaxed. In this case, the gaugino masses are O(1) TeV because of an one-loop factor. Note that from Eq. (1) it is found that with a moderate value of L, wino becomes the lightest among gauginos, and thus the lightest SUSY particle in this model. It is quite interesting since the wino dark matter with a mass of 2.7-3 TeV is consistent with cosmological observations [39]. The dark matter might be searched directly [28,40,41] or indirectly [26,42,43] in future dark matter experiments. Moreover, the gauge coupling unification in this high-scale SUSY model is found to be achieved as precisely as that in the MSSM. Thus the SUSY GUT is still promising in the case of high-scale supersymmetry. In the next section, we in turn require the gauge coupling unification and constrain the GUT scale mass spectrum by using the requirement. Now, we briefly summarize the superheavy particles in the minimal SUSY SU(5) GUT, which we adopt as the working hypothesis in this paper. The SUSY SU(5) gauge theory includes twenty-four gauge superfields V A with A = 1, . . . , 24. By using the SU(5) generators T A we define a 5 × 5 matrix representation of the vector superfields such that V ≡ V A T A , with the components written as ( We collectively call X α and Y α the X-bosons hereafter. The unified gauge group SU (5) is spontaneously broken by the VEV of the adjoint Higgs boson The MSSM Higgs superfields are, on the other hand, incorporated into fundamental and anti-fundamental representations as follows: where are the MSSM Higgs superfields. H α C andH Cα are called the color-triplet Higgs multiplets. The superpotential of the Higgs sector in the minimal SU(5) SUSY GUT is given by After the adjoint Higgs field gets the VEV, (5) the unified gauge coupling constant. As regards the adjoint Higgs multiplets, the components Σ 3 and Σ 8 The components Σ (3 * ,2) and Σ (3,2) become the longitudinal component of the X-bosons, and thus do not show up as physical states. Renormalization Group Analysis In this section, we present the RGEs of the gauge and Yukawa coupling constants, as well as the boundary conditions at each threshold. We use the DR scheme [44] in this work. First, we write down the two-loop beta functions [45]. In the MSSM, the two-loop RGEs for the gauge coupling constants are given as with y i (i = t, b, τ ) the top, bottom and tau Yukawa coupling constants, respectively. Since the Yukawa couplings enter into the two-loop level contributions to the gauge coupling RGEs, it is sufficient to consider the RGEs for the Yukawa couplings at one-loop level. They are given as Below the SUSY breaking scale (M S ), the squarks and sleptons, the higgsinos, and the heavy Higgs boson masses are decoupled so that the theory is regarded as the SM with gauginos. The contribution of gauginos and the SM particles to the coefficients of the beta functions is given as where the subscripts "SM" and "gaugino" indicate that the contributions are of the SM particles and gauginos, respectively. The running of the Yukawa couplings in this case is given as follows: Next, we consider the matching conditions at each threshold scale. 
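Before turning to those matching conditions, we note that several display equations in the two preceding sections did not survive extraction. The following is a hedged reconstruction of their content: the coefficients below are the forms commonly quoted for anomaly mediation with a higgsino-Higgs boson loop and for one-loop gauge running, and are an assumption on our part rather than a quotation of the original Letter:

    % Gaugino masses (cf. Eq. (1) of the text):
    M_1 = \frac{g_1^2}{16\pi^2}\left(\frac{33}{5}\,m_{3/2} + L\right), \quad
    M_2 = \frac{g_2^2}{16\pi^2}\left(m_{3/2} + L\right), \quad
    M_3 = \frac{g_3^2}{16\pi^2}\left(-3\,m_{3/2}\right),
    % with the higgsino-Higgs boson loop contribution
    L \equiv \mu_H\,\frac{m_A^2 \sin 2\beta}{|\mu_H|^2 - m_A^2}\,\ln\frac{|\mu_H|^2}{m_A^2}.
    % One-loop skeleton of the gauge RGEs (two-loop terms omitted):
    \mu\,\frac{d g_a}{d\mu} = \frac{b_a}{16\pi^2}\,g_a^3, \qquad
    (b_1, b_2, b_3) =
    \begin{cases}
    (33/5,\; 1,\; -3) & \text{MSSM}, \\
    (41/10,\; -11/6,\; -5) & \text{SM + gauginos}, \\
    (41/10,\; -19/6,\; -7) & \text{SM}.
    \end{cases}

The sign pattern makes the qualitative statements above transparent: the gluino mass is pure anomaly mediation, while M_1 and M_2 are shifted by the higgsino threshold contribution L.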
At the GUT scale, the gauge coupling constants in the SU(3) C ×SU(2) L ×U(1) Y gauge theories are equated to the unified coupling constant g 5 with the following threshold corrections at one-loop level [46,47]: Here, the conditions do not include constant (scale independent) terms since we use the DR scheme for the renormalization [5,48]. From the equations it immediately follows that The relations allow us to evaluate the masses of the heavy particles, M H C and M 2 X M Σ , from the gauge coupling constants determined in the low-energy experiments through the RGEs [8,9]. While the couplings are well measured with high precision, the estimation is quite dependent on the spectrum in the intermediate region, especially on the masses of gauginos and higgsinos. For the SUSY breaking threshold, we just equate the gauge couplings above and below the threshold, and change the beta functions appropriately for each region. This approximation is valid since the particles appearing at the scale are assumed to be degenerate in mass. In the case of gauginos, on the other hand, we need to consider the threshold corrections since the mass difference among gauginos might be sizable. The condition is where g a (µ) SM are the couplings in the SM while g a (µ) gaugino are those above the gaugino threshold. The Yukawa couplings are matched as usual, i.e., at the SUSY breaking scale, the Yukawa couplings y i (µ) below the SUSY breaking scale are matched with the supersym-metric ones, y i (µ) MSSM , as follows: Before concluding this section, we solve the RGEs at one-loop level and, taking the threshold corrections into account, derive relations between the superheavy masses and the low-energy gauge coupling constants. Such relations reflect the dependence of M H C and M 2 X M Σ on the mass spectrum of the SUSY particles. By inserting to Eq. (17) the one-loop solutions of the RGEs for the gauge couplings, we have From Eq. (20) we find that the mass of the color-triplet Higgs M H C gets larger as the SUSY breaking scale M S is taken to be higher. This originates from the mass difference among the components of the fundamental Higgs multiplets, i.e., the triplet-Higgs, higgsinos, heavy Higgs bosons, and the lightest Higgs boson. Therefore, the behavior of M H C with respect to the SUSY breaking scale is universal in a sense. Further, M H C depends only on the ratio of M 2 and M 3 . M 2 X M Σ is, on the other hand, independent of the SUSY braking scale M S while dependent on the scale of the gauginos, not their ratio. This is because the right-hand side of Eq. (21) results from the mass difference in the gauge vector multiplets and the adjoint Higgs multiplet, a part of which is included as the longitudinal mode of the gauge multiplets. It is also found that M 2 X M Σ decreases when the gaugino masses are taken to be large values. This is owing to the opposite sign of the contribution of gauge fields to the gauge beta functions to those of matter fields. This feature is, therefore, again model-independent. In the subsequent section, we carry out a similar analysis using the two-loop RGEs. Results Now we present some results for the RGE analysis which we discuss in the previous section. As noted above, the running of the gauge couplings is computed at two-loop level and the threshold corrections are taken into account at one-loop level. The masses of sfermions, heavy Higgs bosons, and higgsinos are taken to be M S for brevity. Gaugino masses are assumed to be lighter than M S by an one-loop factor. 
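For orientation in the results that follow: the threshold relations referred to as Eqs. (17), (20) and (21) were likewise lost in extraction. In the DR-bar scheme, the standard minimal SU(5) one-loop matching conditions (again a reconstruction consistent with the particle content listed in Sec. 2, not a quotation) read:

    g_1^{-2}(Q) = g_5^{-2}(Q) + \frac{1}{8\pi^2}\left[\frac{2}{5}\ln\frac{Q}{M_{H_C}} - 10\ln\frac{Q}{M_X}\right],
    g_2^{-2}(Q) = g_5^{-2}(Q) + \frac{1}{8\pi^2}\left[2\ln\frac{Q}{M_\Sigma} - 6\ln\frac{Q}{M_X}\right],
    g_3^{-2}(Q) = g_5^{-2}(Q) + \frac{1}{8\pi^2}\left[\ln\frac{Q}{M_{H_C}} + 3\ln\frac{Q}{M_\Sigma} - 4\ln\frac{Q}{M_X}\right],

from which the two combinations in which g_5 drops out follow:

    \left(3\,g_2^{-2} - 2\,g_3^{-2} - g_1^{-2}\right)(Q) = -\frac{3}{10\pi^2}\ln\frac{Q}{M_{H_C}}, \qquad
    \left(5\,g_1^{-2} - 3\,g_2^{-2} - 2\,g_3^{-2}\right)(Q) = -\frac{3}{2\pi^2}\ln\frac{Q^3}{M_X^2 M_\Sigma}.

The first combination fixes M_HC and the second fixes the product M_X^2 M_Σ, which is exactly the structure exploited in the results below.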
First, we consider the color-triplet Higgs mass M_HC. In Fig. 1, we plot the dependence of M_HC on the SUSY breaking scale M_S as pink lines. Here, the wino mass M_2 is fixed to 3 TeV, which is favored by the thermal relic abundance, and tan β = 3. The ratio of the gluino and wino masses, M_3/M_2, is set to M_3/M_2 = 3, 9, and 30 from top to bottom, respectively. Further, we show the error of the calculation coming from that of the strong coupling constant, α_s(m_Z) = 0.1184(7) [49]. The behavior is consistent with Eq. (20); M_HC increases as the SUSY breaking scale grows, while it decreases when the ratio M_3/M_2 becomes large. To see the latter feature more clearly, we show its dependence on the gluino mass M_3. Again, we set tan β = 3, and the SUSY breaking mass is fixed to M_S = 10^3 TeV. The upper and lower lines correspond to M_2 = 3 TeV and 300 GeV, respectively. These two figures show that M_HC is strongly dependent on M_S and M_3/M_2. Therefore, to predict the mass with high accuracy, a precise determination of the gaugino masses as well as the SUSY breaking scale is indispensable. In any case, in the high-scale SUSY scenario the mass of the color-triplet Higgs M_HC is found to be possibly around ~2 × 10^16 GeV, as expected from gauge coupling unification.

Next, we discuss constraints on M_X^2 M_Σ derived from the relation (21). From now on we define M_GUT ≡ (M_X^2 M_Σ)^(1/3) and refer to it as the GUT scale. Equation (21) tells us that the GUT scale depends only on the gaugino masses at leading order, so we express M_GUT as a function of the gaugino masses. In Fig. 3 we plot it as a function of the gluino mass. Here again we fix tan β = 3 and M_S = 10^3 TeV. The upper and lower lines correspond to M_2 = 300 GeV and 3 TeV, respectively. Again, the horizontal blue line shows the result in the case of low-energy SUSY with M_S = 1 TeV, M_2 = 200 GeV, and M_3/M_2 = 3.5, which gives M_GUT ≃ 1.9 × 10^16 GeV. The error bars indicate the input error of the strong coupling constant, α_s(m_Z) = 0.1184(7) [49], though the effect is negligible. We see that the GUT scale also has little dependence on the gaugino masses. In that sense, the prediction is robust compared with that for M_HC. However, as discussed in the previous section, the GUT scale M_GUT gets lower when the gaugino masses become larger (M_GUT ∝ (M_3 M_2)^(−1/9)). This feature is quite interesting when one considers proton decay via X-boson exchange processes. Although the change in M_GUT is small, it might be significant since the proton decay lifetime scales as ∝ M_X^4. For instance, when M_X ≃ 0.8 × 10^16 GeV, the proton lifetime via X-boson exchange is reduced to around 5 × 10^34 years [50], which is slightly above the current experimental bound, τ(p → e^+ π^0) > 1.29 × 10^34 yrs [51]. Finally, we briefly comment on the tan β dependence of the results. Although the one-loop computation does not involve tan β, the two-loop results might be affected through the running of the Yukawa couplings. We have found, however, that the effects on the results are negligible.

Conclusions and Discussion

In this Letter, we have presented constraints on the masses of the GUT scale particles from the gauge coupling unification in the case of the high-scale SUSY scenario. To that end, we have used the two-loop RGEs for the gauge couplings, with one-loop threshold corrections taken into account.
As a result, the mass of the color-triplet Higgs multiplets M H C turns out to be considerably large compared with previous results in the traditional lowenergy SUSY scenario, while the GUT scale is found to be slightly lower. These are generic features resulting from the mass difference among the components of the same supermultiplet of the SU(5) gauge group. Interestingly, all of the superheavy particles might be around 10 16 GeV in the high-scale SUSY models. The mass spectrum of the GUT scale particles predicted here stimulates us to reconsider the proton decay in the case of high-scale supersymmetry. As mentioned to above, the relatively low GUT scale enhances the proton decay rate via the X-boson exchanging process. If the enhancement is strong enough, the proton decay in the p → e + π 0 channel might be searched in future experiments. Furthermore, the experiments may also reach the proton decay through the color-triplet Higgs exchange. Since the decay process predicts too short lifetime [10], some suppression mechanism for the process has been assumed. Nevertheless, with M H C larger than those considered in the previous literature and sfermions much heavier than the electroweak scale, this process might evade the current experimental bound without any mechanism of limiting the color-triplet Higgs exchanging process. In such a case, it is possible for the p → K +ν mode, which is the main decay mode in the case of the color-triplet Higgs exchange, to be searched in near future. A detailed analysis of this decay process will be given elsewhere [52]. Note that, in proposed models where the color-triplet Higgs exchange is suppressed by some mechanism, large threshold corrections to the gauge coupling constants at the GUT scale tend to appear. One of the examples is introduction of the Peccei-Quinn symmetry [53]. It was found that when the threshold corrections at the GUT scale are small, the suppression mechanism does not work [54,55]. In the SUSY SU(5) GUTs in higher dimensional space the U(1) R symmetry forbids the dimension-five proton decay, though the Kaluza-Klein particles generate large threshold corrections to the gauge coupling constants [56]. The high-scale supersymmetry would be another solution for the proton decay problem, in which such large threshold corrections are not required.
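To make the M_X^4 sensitivity quoted in the Results section concrete, here is a minimal Python sketch that rescales the reference point given in the text (τ ≈ 5 × 10^34 yr at M_X ≈ 0.8 × 10^16 GeV); the pure quartic scaling, in place of the full amplitude calculation of Ref. [50], is the only assumption:

    # Rescale the p -> e+ pi0 lifetime using tau ∝ M_X^4, anchored to the
    # reference values quoted in the text.
    TAU_REF_YR = 5.0e34   # lifetime at the reference X-boson mass [years]
    MX_REF_GEV = 0.8e16   # reference X-boson mass [GeV]

    def proton_lifetime_yr(mx_gev: float) -> float:
        """Proton lifetime estimate from the quartic M_X scaling."""
        return TAU_REF_YR * (mx_gev / MX_REF_GEV) ** 4

    for mx in (0.8e16, 1.0e16, 2.0e16):
        print(f"M_X = {mx:.1e} GeV  ->  tau ~ {proton_lifetime_yr(mx):.1e} yr")

A factor of ~2 in M_GUT thus moves the predicted lifetime by a factor of ~16, which is why even the mild gaugino-mass dependence M_GUT ∝ (M_3 M_2)^(−1/9) matters for the observability of p → e^+ π^0.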
The Changing Pattern of Upper Gastrointestinal Disorders by Endoscopy: Data of the Last 40 Years

Objectives. We have investigated the changes in the incidence of the various diagnoses that have been made in the endoscopy unit throughout the last 40 years. Methods. In this study, changes in the incidence of endoscopic diagnoses in the upper gastrointestinal system between 1970 and 2010 were evaluated. Diagnosis, age, and gender data were entered into Excel software. Results. Of the 52,816 cases who underwent esophagogastroduodenoscopy in the 40-year time period, the mean age was 48.17 ± 16.27 years (mean ± SD). Although overall more than half of the patients were male (54.3%), from 1995 onward a marked increase was seen in the proportion of female patients (51-55%). The presence of hiatal hernia, the presence of reflux esophagitis, and the number of Barrett's esophagus cases significantly increased. Erosive gastritis showed a gradual increase, while the number of gastric ulcers decreased significantly. The presence of gastric and esophageal cancer significantly decreased. The number of duodenal ulcers significantly decreased. Conclusion. We detected that the incidences of esophagitis, Barrett's esophagus, and erosive gastritis significantly increased, while the incidences of gastric/duodenal ulcer and gastric/esophageal cancer decreased throughout the last 40 years.

Introduction

During the last few decades, a change has been observed in the incidence of many gastrointestinal diseases, such as gastric cancer and acid-peptic disease, including peptic ulcer and gastroesophageal reflux disease [1]. Gastroesophageal reflux disease (GERD) was previously thought to be a rare disease in the East, but several recent reviews have also raised the possibility of an increase in the prevalence of GERD. Esophagitis prevalence is reported to be 14.5% to 16.1% in patients for whom upper gastrointestinal endoscopy is performed because of dyspepsia and reflux [2][3][4]. Over the past three to four decades a decline in the prevalence of peptic ulcer disease in the West has been reported [5]. Similar observations have been made in the Asian-Pacific region as well [6]. The epidemiology of esophageal cancer has changed substantially over the last 50 years, a development that certainly gives rise to great concern. While the burden of gastric cancer remains high in the Asian-Pacific region, age-standardized incidence rates have started to decline. This is in keeping with observed trends in Western countries, where gastric cancer has been observed to decline since the 1940s [7]. Therefore, we retrospectively investigated the results of upper gastrointestinal system (GIS) endoscopies performed throughout the last 40 years.

Material and Methods

Istanbul is the most populated city in Turkey, and its population has risen significantly during the last 40 years. Our faculty is a tertiary care institution. To determine the change in the frequency of diagnoses in the upper gastrointestinal system, we retrospectively evaluated esophagogastroduodenoscopy (EGD) data recorded between the years 1970 and 2010 in the endoscopy laboratory of the gastroenterology department. We reviewed 106 registries containing endoscopy reports performed between the 1970s and 2000s. We obtained the data from the year 2000 and thereafter from the computerized registries. From 2000 onward, a customized version of the MedGate system from Aura was used.
In total, 56,652 records were reviewed and, of these, 52,856 were included in this study. Inadequate endoscopic reports were excluded from the study. The patients were grouped into 5-year periods. Their diagnosis, age, and gender information were recorded. After the registration of all diagnoses, they were simplified into general diagnoses and rarely seen endoscopic diagnoses. Many patients with a gastric ulcer underwent a follow-up gastroscopy a few months later. These follow-up endoscopies were excluded from the analysis. All included cases were newly diagnosed ulcers in a previously uninvestigated dyspeptic population. For the endoscopies performed in the 1970s, Olympus JF-B2, GIF-D, K, and P2 devices were used. Between the years 1980 and 2000, the endoscopies were performed using Olympus fiberoptic GIF T10, Q10, and K10 devices. In the 2000s, with the introduction of video camera systems, Pentax upper GIS endoscopy devices were used. Endoscopies were done at the request of a general practitioner or a specialist, mostly an internist or a gastroenterologist, sometimes a surgeon or a cardiologist. Biopsy samples were taken to confirm the macroscopic diagnosis if required. Data are described as the mean ± standard deviation (SD). The frequency of upper gastrointestinal disorders was expressed as a percentage. Statistical analysis was performed using Excel software.

Discussion

Upper gastrointestinal endoscopy is an accurate and safe method to evaluate the mucosa of the esophagus, stomach, and duodenum. It is performed for a variety of indications, especially for diagnostic purposes. Among the gastrointestinal diseases, major changes have been observed in gastric and esophageal cancer, as well as in the acid-peptic diseases, including peptic ulcer and GERD. In our retrospective evaluation, we observed a marked increase in the incidence of esophagitis, Barrett's esophagus, gastritis, and bulbitis, and a decrease in the incidence of duodenal ulcer, gastric ulcer, gastric cancer, and esophageal cancer. In recent years, the number of upper gastrointestinal endoscopies performed at the request of general practitioners has significantly increased in Turkey. The explanation is not only the presence of an open-access facility, but also the more prominent place of gastroscopy in the workup of dyspepsia and reflux disease [8]. Besides this, the number of women undergoing upper gastrointestinal endoscopy steadily increased over the consecutive years. Socioeconomic improvements have allowed women to benefit from health services. Every Turkish citizen has mandatory health insurance and hence accessible health care. Gastroesophageal reflux disease is a common problem in the West: among patients undergoing esophagogastroduodenoscopy for a variety of upper gastrointestinal symptoms, 9-23% had endoscopic esophagitis [9,10]. Also, recent studies from some parts of Asia have documented a prevalence of endoscopic esophagitis of up to 14.5% in patients evaluated for upper gastrointestinal tract symptoms [4]. Our study clearly shows that the incidence of endoscopic esophagitis increased over time. The increase may be due to altered nutritional habits, increased body mass index, and a declining rate of Helicobacter pylori (H. pylori) infection. Although a decreased incidence of H. pylori infection during childhood has been reported in Turkey [11], there are no data showing a decreased incidence of infection in the adult population.
The increased attention paid to the lower esophagus during endoscopy may also contribute to the increase in the prevalence of reflux esophagitis. There is no doubt that the presence of hiatal hernia contributes to the occurrence of gastroesophageal reflux, which can lead to erosive esophagitis and Barrett's esophagus. We found that the presence of hiatal hernia was significantly increased over the periods in our series. Barrett's esophagus (BE), a metaplastic condition caused by chronic gastroesophageal reflux, predisposes to adenocarcinoma of the esophagus. Data about the change in the incidence of Barrett's esophagus are conflicting. Todd et al. showed a decrease in reflux esophagitis and an increase in Barrett's esophagus in patients undergoing endoscopy in the period from 1980 to 1995 in Scotland [12]. On the contrary, Loffeld and Van Der Putten reported that esophagitis gradually increased but Barrett's esophagus remained stable during the last 10 years [13]. In our series, Barrett's esophagus significantly increased over the time, consistent with an increased background of reflux disease. Improvement in the endoscopic diagnosis of Barrett's esophagus may be due to the increased attention paid to the lower esophagus. There is no doubt that the epidemiology of esophageal cancer has changed substantially over the last 50 years, especially in the Western world. In the United States and Europe, overall rates of esophageal cancer as well as squamous cell carcinoma have been decreasing, while rates of adenocarcinoma have been on the rise [14,15]. In the East, esophageal cancer is predominantly squamous cell in type and there has not been a noticeable rise in the incidence of adenocarcinoma of the esophagus [16]. Fernandes et al. reported that the overall incidence of esophageal cancer has declined significantly in the multiethnic Singapore over the last 35 years [17]. The decrease is mainly a result of a steep decline in the incidence of squamous cell carcinoma (SCC), which is not offset by the marginal increase in the incidence of adenocarcinoma [12]. Similarly, Gholipour and colleagues reported that the incidence of overall esophageal cancer and squamous cell carcinoma has been declining during the years of their study [18]. Although the lack of knowledge about the histological subtypes limits our ability to make further comments, our results suggest that the frequency of overall esophageal cancer has been declining during the years of study. The decreased incidence of esophageal carcinoma may be attributed to the decreased consumption of traditional dried foods and improved sanitation. Although the decreased risk of squamous cell carcinoma is attributed to the decreased frequency of smoking in Western countries, this reason cannot explain the decrease seen in our study because the frequency of smoking has gradually increased during the last 50 years in Turkey. On the other hand, a conflict arises between decreased esophageal cancer and increased Barrett's esophagus in the last 15 years. The widespread using of proton-pump inhibitors in reflux disease may be a reason to some extent. Peptic ulcer (PU) disease is believed to be less common and less severe as a result of modern medical treatment [19]. By El-Serag and Sonnenberg, a study that covers a 25year period was performed in the United States. This study reported that the incidence of peptic ulcer had a marked decrease [5]. In the United Kingdom, Bardhan et al. 
reported that, during a 28-year period, the incidence of PU decreased, but a very slight decrease of the presentation to emergency services accompanied this [19]. Compared to the 1970s, we have identified a significant decrease of the incidence of peptic ulceration in the 2000s. In detailed retrospective evaluation, we found an increase in the incidence of PU during the period of 1970-1980. This increase may be due to the introduction and the common prescription of nonsteroidal anti-inflammatory drugs (NSAIDs) during these years. After the marketing of the first H2-receptor antagonists (H2RA) in the eighties a temporal consistency with the decrease was observed in PU incidence in Turkey. We observed a second decrease in the incidence of peptic ulcer disease during the period after 1995. The obvious explanation for this observation is the installment of the anti-Helicobacter pylori therapy, which is generally used in the cases of ulcer disease since 1993. Another explanation for the decreasing numbers of ulcers is the decreasing acquisition of H. pylori. In our study, the incidence of gastric ulcer showed an important decrease in the early 90s compared to the 70s. However, the decrease observed in the incidence of GU remained stable during the 1990s. This stability observed in the incidence of GU may be attributed to the increased population of elderly people and the increased usage of NSAIDs and aspirin. In their study performed in the 1990s in Australia, Xia et al. reported a decrease in the incidence of peptic ulcer, especially gastric ulcer (GU), but no marked decrease was reported for duodenal ulcer (DU) [20]. Gastritis is a heterogeneous pathological condition. According to published data the prevalence of gastritis among adults in the Western world changes between 37% and 62% [21,22]. In the Zaanstreek (in The Netherlands) population, Loffeld et al. reported that erosive gastritis showed a gradual decrease in a period of 20 years after 1991. The reason for this was proposed to be due to the decrease of H. pylori incidence by some authors [23]. However, when we analyzed our series, we found that there was an increase of the diagnosis of erosive gastritis. The increase in NSAID use in recent years could be the reason for the increase in our series. In addition, this increase may be due to the introduction of new endoscopes with a higher resolution and the awareness of endoscopists for endoscopic gastritis increased after the introduction of Sydney classification [24]. Epidemiologically, the mortality rate for gastric cancer has decreased worldwide in the past several decades. It was reported that the incidence and the mortality of gastric cancer have gradually decreased in the Baltic Republic during the last four decades [25]. Miyahara et al. reported that the incidence of gastric cancer has gradually decreased in Japan during the last 30 years [26]. Consistent with other series, our series showed a gradual decrease in the incidence of gastric cancer. This decrease may be attributed to the alteration of dietary habits (the consumption of Western diets with low amount of nitrate), socioeconomic improvement, and the decrease of incidence of H. pylori infection. Along with the decreasing of the infection with H. pylori, changing incidence of chronic gastritis and changing of the diet may reflect the change observed in the incidence of gastric cancer [27,28]. Although Turkey reported a decreased incidence of infection with H. pylori during the childhood period, high incidence of H. 
pylori infection and the rate of eradication failure do not support a decrease of gastric cancer in our country [11,29]. The limitations of this study are that the subjects were studied in a single hospital only. In addition, the fact that our hospital is a tertiary care institution may contribute to an underestimation of the real incidences of upper gastrointestinal pathologies in the general population. The results may also be influenced by the fact that, in the past, many different endoscopists, with different levels of experience, worked at our center. The improvements in endoscopy technology and the changes in the definitions of endoscopic diagnoses (e.g., Barrett's esophagus) that occurred during that period naturally influenced the results.
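As a purely illustrative footnote to the Material and Methods section, the period tabulation described there might be reproduced along the following lines; the column names and sample rows are hypothetical (the original analysis was carried out in Excel):

    import pandas as pd

    # Toy records standing in for the ~52,856 included endoscopy reports.
    records = pd.DataFrame({
        "year": [1972, 1983, 1996, 2004, 2004],
        "diagnosis": ["duodenal ulcer", "gastric ulcer",
                      "reflux esophagitis", "reflux esophagitis", "normal"],
    })

    # Five-year intervals spanning the 1970-2010 study window.
    bins = list(range(1970, 2016, 5))
    records["period"] = pd.cut(records["year"], bins=bins, right=False)

    # Frequency (%) of each endoscopic diagnosis within each period.
    freq = (records.groupby("period", observed=True)["diagnosis"]
                   .value_counts(normalize=True)
                   .mul(100))
    print(freq)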
Evidence of a third ADPKD locus is not supported by reanalysis of designated PKD3 families Mutations to PKD1 and PKD2 are associated with autosomal dominant polycystic kidney disease (ADPKD). The absence of apparent PKD1/PKD2 linkage in five published European or North American families with ADPKD suggested a third locus, designated PKD3. Here we re-evaluated these families by updating clinical information, re-sampling where possible, and mutation screening for PKD1/PKD2. In the French-Canadian family we identified PKD1: p.D3782_V3783insD, with misdiagnoses in two individuals and sample contamination explaining the lack of linkage. In the Portuguese family, PKD1: p.G3818A segregated with the disease in 10 individuals in three generations with likely misdiagnosis in one individual, sample contamination, and use of distant microsatellite markers explaining the linkage discrepancy. The mutation, PKD2: c.213delC, was found in the Bulgarian family, with linkage failure attributed to false positive diagnoses in two individuals. An affected son but not the mother, in the Italian family had the nonsense mutation, PKD1: p.R4228X, which appeared de novo in the son; with simple cysts probably explaining the mother’s phenotype. No likely mutation was found in the Spanish family, but the phenotype was atypical with kidney atrophy in one case. Thus, re-analysis does not support the existence of a PKD3 in ADPKD. False positive diagnoses by ultrasound in all resolved families shows the value of mutation screening, but not linkage, to understand families with discrepant data. Introduction Autosomal dominant polycystic kidney disease (ADPKD) is one of the most common hereditary disorders with a frequency of 1:500 -1:1000. It is characterized by the development of fluid-filled cysts in the kidneys and several extrarenal manifestations (1,2). ADPKD is genetically heterogeneous with mutations in two genes causing the disease. PKD1, localized to 16p13.3 (3), is a large gene with 46 exons and a 12909 bp coding sequence (CDS) (4)(5)(6). The locus is complex since exons 1 -33 lie in a region genomically reiterated as six pseudogenes, with ~98% similarity to PKD1, ~20 Mb more proximal on 16p (4,7). The second causative gene, PKD2, is localized to 4q21 (8,9), and has 15 exons and a CDS of 2904bp (10). PKD1 and PKD2 encode polycystin-1 (PC-1) and -2 (PC-2), respectively. PC-2 is a TRP-like calcium channel and PC-1, a cleaved, large receptor-like protein; the polycystins are thought to complex via their C-terminal tails (11,12). The site of localization of this complex related to its role in maintaining normal renal tubular differentiation appears to be on the sensory, primary cilium, and, hence, PKD is considered a ciliopathy (13). In screens of large clinically defined ADPKD populations, mutations have been detected in ~90% of families, and of these ~85% are PKD1 and ~15% PKD2 (14,15). PKD1 patients have more severe disease with end-stage renal disease (ESRD) occurring on average 20yrs earlier than in PKD2 (~54yrs vs. ~74yrs) (16). Soon after PKD1 and PKD2 were mapped, a number of ADPKD families unlinked to either of these loci were described, suggesting a third locus, PKD3. These reports involved families of French-Canadian (17), Portuguese (18), Bulgarian (19), Italian (20) and Spanish (21) origin. The Italian, Spanish and Bulgarian families were described to have milder ADPKD, whereas the French-Canadian and Portuguese families had more aggressive, PKD1-like disease (22). 
More recently, further complexity of ADPKD genetics has been noted. An apparently unlinked multi-generational family was revealed to have bilineal inheritance of a PKD1 and a PKD2 mutation (23). Interestingly, the two gene groups were phenotypically similar due to the PKD1 mutation being hypomorphic (24). Hypomorphic PKD1 alleles have also been found in homozygosity, associated with typical to severe ADPKD, with heterozygotes having very mild disease; an inheritance pattern that confounds linkage studies (25). Mosaicism associated with de novo mutation can also confuse linkage analysis in ADPKD (26), while mutations at other loci, notably HNF1B, can mimic an ADPKD phenotype (27). Despite the possible mechanisms described above, evidence of unlinked families and the consistent level of ~10% of ADPKD families where no mutation is identified (14,15), suggest further locus heterogeneity. To explore that option, we rigorously re-evaluated the previously described "PKD3" families to determine if they still supported the presence of an additional ADPKD locus. Results The described "PKD3" families were reanalyzed by linkage studies, reassessment of clinical data, collection of new samples where possible, and mutation screening of PKD1 and PKD2. The locations of genetic markers within and flanking the PKD1 gene used for linkage analysis in the original papers and additional markers analyzed here are shown ( Figure 1A). The French-Canadian family The described three-generation "PKD3" family of French-Canadian origin had six affected members and linkage analysis initially excluded PKD1 and PKD2 (17). However, our direct sequencing of the ADPKD genes in II-3 showed a novel PKD1 variant in exon 40: c. 11347_11348insACG, resulting in p.D3782_V3783insD ( Figure 1B). This is only a moderately conserved region in the large extracellular loop of PC-1, but one not displaying indel variants between orthologs ( Figure 1C). Description of a similar insertion of glutamic acid at this position was scored as Indeterminate previously (28), but has been classified as Likely Pathogenic by the ADPKD Mutation Database (http://pkdb.mayo.edu) (29). Segregation analysis with the original DNA samples showed that III-1, III-2, III-3 and III-6 also had this variant. Previously, III-6 was defined as unaffected, whereas III-4 and III-5, who did not inherit this variant, were defined as affected. While this was consistent with the unlinked status of this family, and the questionable significance of the PKD1 insertion, repeat ultrasounds of III-4 (at 28yrs) and III-5 (at 24yrs) showed that they were misdiagnosed as having PKD when previously analyzed at 16 and 12yrs, respectively ( Figure 1D, E). Repeated ultrasound analysis of III-6 at 32yrs confirmed the negative diagnosis, but analysis of a freshly collected DNA sample did not show the p.D3782_V3783insD variant. Microsatellite marker analysis of the original III-6 DNA sample showed it was likely contaminated by the III-1 (affected) sample, explaining the aberrant result. Patient II-4, who was not originally studied, was recently diagnosed with PKD and reached ESRD at 45yrs; sequencing showed the presence of p.D3782_V3783insD ( Figure 1D). Consequently, analysis of new DNA samples and the most recent clinical information showed that PKD1: p.D3782_V3783insD, along with a haplotype of variants, completely segregated with the disease and so was likely pathogenic ( Figure 1D). 
p.D3782_V3783insD first appearing in Generation II, coincident with the onset of PKD, emphasized this point. Haplotype analysis indicates that the mutation originated from the grandfather (I-2), who has the affected haplotype but not the mutation or cysts at 70yrs. It is likely that the grandfather is a germline mosaic for p.D3782_V3783insD (hence two affected offspring), but no sign of the variant in blood DNA (or the phenotype) indicates it is not in somatic tissue, although low-level somatic mosaicism cannot be completely ruled out. The Portuguese family-7001PKD A four-generation Portuguese family with more than 20 affected members was described as apparently unlinked to PKD1 or PKD2 (18,22). Subsequently, genotyping errors and/or sample mix-ups were suggested as a large number of apparent recombinants were found in a short genetic distance (30). The authors of the original paper reconciled their findings to the misinterpretation of one marker. However, rescoring this marker and generating a new haplotype still did not show positive linkage to PKD1 or PKD2 (31). Limitations of the original study were linkage analysis employing markers distant from the genes, plus the question of sample integrity. To circumvent these limitations, we repeated this analysis on freshly collected samples with markers intragenic or closely flanking PKD1/ PKD2 ( Figure 1A). This analysis showed segregation of a PKD1 haplotype in available affected individuals (Figure 2A), except III-14, who had one cyst in each kidney and a serum creatinine of 1.0 mg/dl at 31yrs. Direct sequencing of both genes in II-9 revealed, PKD1: c.11453G>C in exon 41, resulting in a novel missense change p.G3818A ( Figure 2B). Although this is a rather conservative change located between the described PC-A and PC-B domains (32) (in the same extracellular loop mutated in the French-Canadian family), glycine at this position is invariant in a wide range of PC-1 orthologs ( Figure 2D) and homologs ( Figure 2E). In silico analysis of this substitution employing SIFT and AlignGVGD supported its likely pathogenicity ( Figure 2C) and a restriction site assay confirmed segregation in all available affected family members, except III-14 ( Figure 2F). The latest available ultrasound of this patient at 36yrs still showed just two renal cysts, which probably represent simple cysts. Hence, this is likely a PKD1 family with the mutation p.G3818A, with a misdiagnosis in III-14. Many of the affected members had a mild to moderate decline in renal function by the late 30s, similar to typical PKD1 disease progression ( Figure 2G), consistent with this being a fully inactivating allele. The Bulgarian family-7865 Linkage analysis in 22 Bulgarian families showed absence of linkage to PKD1 (Lod = −7.75) and PKD2 (Lod = −2.69) in Family 7865 (19). However, direct sequencing of II-1 showed a frame-shifting deletion, PKD2, exon 1: c.213delC resulting in p.A71fs45X ( Figure 3A). This mutation segregated in II-3 and III-5 but not in III-1 and III-3 ( Figure 3B), who were originally diagnosed with ADPKD based on the apparent appearance of a small number of cysts at 33 and 30yrs, respectively. Ultrasound data shows clear cysts in III-5, but no definite evidence of cysts in III-3 ( Figure 3C). Both of these subjects had normal renal function when last analyzed ( Figure 3D), but they have been lost to follow-up and so no new imaging or renal function reanalyzes have been possible. 
In this case, ADPKD is due to a PKD2 truncating mutation, consistent with the milder disease within the family ( Figure 3D), with the unlinked designation due to misdiagnoses by ultrasound in two sisters. The Italian family I-2 (mother) and II-2 (son) in the Italian "PKD3" family had multiple bilateral renal cysts on ultrasound examination at 46 and 21yrs, respectively (20). Subsequently, II-2 had nephrolithiasis and hypertension at 34yrs, whereas the mother did not show any of these symptoms by 58yrs. Linkage analysis showed both PKD1 and PKD2 haplotypes shared by the affected subjects but also an unaffected individual, II-3, excluding linkage to either of these genes (20). Recent clinical analysis of the mother at 75yrs showed a normal creatinine level and no additional renal cysts. In contrast, her affected son, II-2, progressed to ESRD at 51yrs. Direct sequencing revealed a pathogenic mutation, PKD1, exon 46: c.12682C>T, resulting in p.R4228X in II-2 ( Figure 4A); however, this mutation was not found in the mother or other family members. Relationship testing confirmed the pedigree as shown (see Methods for details) and analysis of PKD1 intragenic SNPs was consistent with the original linkage data ( Figure 4B). Suspecting mosaicism in I-2, we analyzed urine and buccal cell DNA samples by sequencing; however, we did not find the mutation. In addition, allele-specific PCR developed to specifically amplify the c.12682C>T (p.R4228X) allele amplified a 250bp PCR product only from II-2 and not I-2 ( Figure 4C). This evidence suggests a de novo mutation in II-2 (with the origin of the haplotype that is mutated not determined) and simple cysts in I-2. The significance of the rare PKD1 variant p.T2250M found in I-2 and II-1 was also explored. This variant ( Figure 4D, E) was originally described as pathogenic (33) but more recent evidence found it with a pathogenic mutation in several ADPKD patients, suggesting it is not a fully penetrant mutation (http://pkdb.mayo.edu). However, Irazabal et al (34) described this change in an ADPKD patient in association with a second, weak PKD1 variant, p.S1619F, suggesting that it might be a weak hypomorphic allele. The p.T2250M variant is present in the REJ domain of PC-1 and cleavage of PC-1 at the GPS site can be affected by variants in this domain (12,35), however, our analysis did not show that p.T2250M influences this cleavage ( Figure 4E). The Spanish family This Spanish family was first diagnosed with possible ADPKD when II-5 presented with congenital posthydronephrotic atrophy in the left kidney (length 91mm), multiple small cysts in the right kidney ( Figure 5A) and a few liver cysts at 36yrs; hypertension was diagnosed at 27yrs ( Figure 5C) (21). The sister (II-1) had five and three small cysts in the right and left kidney, respectively, without hypertension at 42yrs, and the father (I-1) was diagnosed with hypertension at 67yrs with multiple small cysts in both kidneys. Original linkage analysis excluded linkage to PKD1 and PKD2 (21). Direct sequencing of PKD1 and PKD2 in I-1 and II-5 identified no likely pathogenic mutations and analysis of intragenic SNPs confirmed lack of linkage to PKD1/PKD2 ( Figure 5B). The atrophic kidney observed in II-5 prompted us to also screen HNF1B for mutations, which causes the Renal Cysts and Diabetes Syndrome (RCAD) (36). However, no likely pathogenic HNF1B variants were detected by direct sequencing or Multiple Ligation-Dependent Probe Amplification (MLPA) to detect larger rearrangements (37,38). 
This mutation negative and apparently unlinked family remains unresolved, but is notable because of the mild disease and atypical presentation in II-5, questioning the ADPKD diagnosis. Discussion Here we have screened the five previously described unlinked ADPKD ("PKD3") families (17)(18)(19)(20)(21) and showed by mutation analysis that three have PKD1, one PKD2, while one remains unresolved. As Paterson et al (30) highlighted, there are several confounders that can prevent the detection of linkage, including a single incorrect diagnosis or sample mixup. We could add to that list the need to employ markers close to and flanking the gene, and that ~10% of ADPKD families can be traced to a de novo mutation (39,40), and that some of these cases are mosaics (26,41). These confounders can fully explain the mistaken classification of the families in this study. Complex inheritance of hypomorphic alleles (see Introduction and Pei et al (42)), can also manifest as unlinked ADPKD. Of note, all four misclassified families had at least one false positive diagnosis (two had two such cases), all via ultrasound analysis. These included cysts detected during childhood that were not confirmed in adulthood (Figure 1) (17), or cases with a small number of cysts that were confirmed later but did not progress (Figures 2 and 4) (20,22). The inability to rescreen the Bulgarian family means that the reasons for the two false positives diagnoses are unknown (Figure 3) (19). There is no doubt that ultrasound technology has developed greatly since the 1990s and that CT and MRI are now used more widely, especially to help diagnose equivocal imaging cases. Nevertheless, even with the new ultrasound diagnostic criteria (43) there are cases with one or two cysts where a definitive diagnosis cannot be made. MRI and CT can help in these cases, but the greater resolution means that even unaffected individuals display more small "simple" cysts (44)(45)(46); still leaving some cases in diagnostic limbo. Mutation screening can help, as illustrated here, and can be the gold standard if a definite mutation is detected. Two families were traced to a new mutation within the studied pedigree. The natural inclination in a dominant disorder is to consider one of the parents as affected, even if the disease is much milder in the parent; a confounder of linkage in one family (Figure 4). It is intriguing in the French-Canadian pedigree that two individuals in the second generation had the disease and the mutation while the parents were clinically and molecularly negative ( Figure 1). Germline mosaicism seems the only likely explanation for this inheritance pattern, and germline/somatic mosaicism also cannot be completely ruled out in the Italian family ( Figure 4). Greater awareness about the level of de novo mutation in ADPKD, and that this probably much more often involves mosaicism than presently recognized, would be valuable when making diagnoses and determining the risk of recurrence in families with an apparent negative family history. Mosaicism is probably under-recognized because lowlevel alleles are difficult to identify by sequence analysis. Also, due to variability in levels between tissues/organs, the mutant allele may often be underrepresented or completely absent in leukocyte/buccal cell derived DNA (26,41). Sample mix-ups are difficult to completely eliminate, although good diagnostic laboratory practices should reduce them to a minimum. 
The advantage of mutation screening over linkage for obtaining a definite diagnosis in families with contradictory data is illustrated here: analysis of just one individual with a definite diagnosis (and no sample confusion) can provide the diagnosis. A detected mutation can then be segregated through all family members, and further imaging and resampling completed to resolve any inconsistencies. The Spanish family is the only "PKD3" family in which we did not identify a PKD1 or PKD2 mutation, and we confirmed that it was unlinked to these loci; screening of HNF1B also did not reveal a mutation. This leaves open the possibility of further genetic heterogeneity in ADPKD, although the disease is rather atypical in II-5 (with kidney atrophy) and follows a milder course, with small cysts, in the other affected family members (Figure 5). Further genetic heterogeneity is also suggested by studies of large ADPKD cohorts, which have consistently shown that PKD1/PKD2 mutations are not detected in ~10% of patients (14,15). It is possible that these patients have atypical mutations in the existing genes that have not been detected or recognized as pathogenic. These could include low-level mosaicism, unrecognized pathogenic missense changes at poorly conserved sites, or intronic variants beyond the regions screened by conventional methods. Additionally, allele drop-out may cloud mutation identification because of the genomic duplication of PKD1 and the consequent locus-specific long-range PCR mutation screening methods, plus nested PCR. Also, gene conversion events, shown to cause mutations at this locus (47,48), may be more common than presently described but missed because they extend over one or more PKD1-specific primers. Next generation sequencing (NGS), employing fewer PKD1-specific primers, longer products covering all introns, and a single round of amplification, may help to identify some of these missing mutations (48). Further characterization of the clinical, imaging, and family history details of mutation-negative families would be valuable to see if they represent an atypical subgroup. Such analysis would also identify multiplex families suitable for whole exome sequencing to identify further ADPKD genes. The possibility of a PKD3 gene cannot be dismissed until these cases are resolved.

Sample and data collection

The relevant institutional review boards or ethics committees approved all studies and participants gave informed consent. All of these families were published previously, with the geographic designations used here identifying their origin. New DNA samples and updated imaging and clinical data were collected as available from all the families after the original publication, except from family 7865, which was lost to follow-up. Urine and buccal smear samples were collected from I-2 in the Italian family. New members were added to the previously published pedigree of 7001PKD and employed for the analysis of the phenotype.

Mutation screening and variant classification

Exonic and intragenic flanking regions of PKD1 and PKD2 were screened in each family by direct sequencing (14). Exons 1-9 of HNF1B were amplified as 9 exonic fragments (PCR primers and conditions available upon request). An MLPA protocol to look for large gene rearrangements in the HNF1B gene was completed using the SALSA MLPA kit P241 (MRC-Holland, Amsterdam, The Netherlands) (38). Segregation analysis in each family was completed by sequencing the exonic fragment that contained the mutation.
Two web-based programs, SIFT and Align GVGD, which predict the pathogenicity of amino acid variations based on the degree of conservation of the amino acid in multiple sequence alignments, were used to further characterize the novel mutation found in the Portuguese family (51).

Restriction Fragment Length Polymorphism (RFLP) analysis in the Portuguese family, 7001PKD

An RFLP method exploiting the generation of a new restriction site for EcoO109I in the mutant sequence was developed to test the segregation of the c.11453G>C change in 7001PKD. Briefly, exon 41 of the PKD1 gene was amplified from the family members, restriction digested with EcoO109I at 37°C for ~2 hours, and visualized on a 2.5-3.0% agarose gel (the expected fragment sizes are illustrated in the sketch following the figure legends).

Confirmation of biological relationship in the Italian family

A custom-made panel of short tandem repeat (STR) markers matching the 13 core loci of the Combined DNA Index System (CODIS) used in forensic investigations was used to test the biological relationships within the Italian family at the Mayo Genotyping Core Facility. Briefly, a set of 14 microsatellite markers was amplified using ABI Taq Gold with 2μl multiplexes per sample. These were then run on the ABI 3730 DNA Analyzer and analyzed using ABI GeneMapper version 4 (Applied Biosystems, USA).

Allele-specific PCR to test R4228X mosaicism and cleavage assay to assess the pathogenicity of the variant T2250M in the Italian family

We developed an allele-specific PCR method to specifically amplify the c.12682C>T mutation in the Italian family as previously described (52). Briefly, a reverse primer (5′-GTAGACGTCCTCTGTGGCCTGGTTGAGTGA-3′) was designed with its 3′-end nucleotide complementary to the mutation (A). A mismatched nucleotide (G) was also added at the penultimate position of this primer in order to increase the specificity of the PCR. A forward primer (5′-TCCGCTTTGAAGGGATGGAGCCGCTGCCCT-3′) was designed to match the wildtype sequence. The specificity of the reverse primer in this primer set permitted amplification of the mutant allele only. Western blotting and quantification for the cleavage assay were performed as described previously (35).

Figure 1. The French-Canadian family

II-4, who was not studied in the original report, was diagnosed as affected and also found to carry the mutation. III-4 and III-5, who were originally described as affected but shown to be unaffected on a repeat ultrasound, are shown in grey. Haplotype data with the segregation of microsatellite markers (KG8 and SM6), polymorphic PC-1 amino acid changes (p.A4059V, p.I4045V and p.A3512V), and the mutation p.D3782_V3783insD are shown. E: Initial and latest clinical and molecular diagnoses of the French-Canadian family. Negative ultrasound (-ve US), hypertensive (HTN), mutation not detected (ND).

Figure 2. The Portuguese family

A: Partial pedigree of the four-generation Portuguese family, 7001PKD, redrawn from two previously published pedigrees (18,22). II-15, II-20, and II-21 are drawn as described in de Almeida S. et al. (18) New members were added and some original members of the fourth generation were removed relative to previous publications. III-14, who was originally reported as affected but only had two cysts and did not inherit the p.G3818A mutation, is shown in grey. The haplotype shaded in blue, with the microsatellite markers MC1786, KG8, SM6, 16AC2.5 and CW2, segregates with the disease. B: Direct sequencing showing the wildtype chromatogram and the PKD1: c.11453G>C nucleotide change and the corresponding p.G3818A amino acid change found in affected family members.
C: Analysis of the likely pathogenicity of p.G3818A using SIFT and Align GVGD. VS: variation score; MG: mutation group. p.G3818A has a SIFT score of 0.000 and an Align GVGD score of C55, which correspond to the Highly Likely Pathogenic designation (25). D: Multiple sequence alignment of PC-1 orthologs showing the well-conserved glycine at position 3818 (red arrow) across a wide variety of species. E: Multiple sequence alignment of PC-1-like proteins in humans, sea urchin and C. elegans compared with human PC-1, further showing the conservation of glycine at position 3818 (red arrow). F: Restriction digest analysis with EcoO109I confirms the c.11453G>C change only in affected individuals. Absence of an EcoO109I site in unaffected individuals (yellow fonts) results in a 288bp band after restriction digestion of the PKD1 exon 41 PCR product, whereas affected individuals (red fonts) have two bands of 174 and 114bp. G: Estimated glomerular filtration rate (eGFR) in affected individuals, calculated using the latest available serum creatinine measurements, showing a decline in eGFR in 7001PKD family members by the late third or early fourth decade (blue squares). A reference trend line showing typical PKD1 progression was generated by analyzing 106 ADPKD patients with definitely pathogenic PKD1 mutations.

Figure 3. The Bulgarian family

A: Top panel shows the wildtype PKD2 chromatogram and the bottom panel the frameshifting mutation c.213delC, resulting in p.A71fs45X. B: Revised pedigree and haplotype of the previously described Bulgarian family, including the PKD2 SNP p.R28P. III-1 and III-3, who were previously described as affected but do not carry the mutation, are shown in grey. C: Ultrasound images of the right kidney from III-3 (misdiagnosed with ADPKD) and the right kidney of III-5 (confirmed diagnosis of ADPKD), showing a few cysts (blue arrows). D: Table showing the age at renal function analysis by serum creatinine measurement, expressed as eGFR, plus the mutation status of family members. Mutation not detected (ND).
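As referenced in the Methods, the two genotyping read-outs used above (the EcoO109I RFLP and the allele-specific PCR) can be summarized in a short sketch. The code below is purely illustrative: the cut position and the idealized polymerase rule are assumptions chosen to reproduce the band sizes and allele discrimination reported in the text, not the real PKD1 sequence or enzyme chemistry.

```python
# Minimal, hypothetical sketch of the two genotyping read-outs described
# above; the cut position and the idealized polymerase rule are illustrative.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}


def rflp_fragments(amplicon_len, cut_positions):
    """Fragment sizes after digesting an amplicon at the given positions."""
    bounds = [0] + sorted(cut_positions) + [amplicon_len]
    return [b - a for a, b in zip(bounds, bounds[1:])]


def extends(primer_3prime_base, template_base):
    """An (idealized) polymerase extends only when the primer's 3'-terminal
    base is Watson-Crick complementary to the template base beneath it --
    the discrimination step exploited by allele-specific PCR."""
    return COMPLEMENT[primer_3prime_base] == template_base


# RFLP: c.11453G>C creates one new EcoO109I site, so the mutant allele's
# 288bp exon 41 amplicon is cut into the two bands reported in Figure 2F,
# while the wildtype product stays intact (cut position assumed at 174bp).
assert rflp_fragments(288, []) == [288]          # unaffected: single band
assert rflp_fragments(288, [174]) == [174, 114]  # affected: 174bp + 114bp

# AS-PCR: the reverse primer ends in A, pairing with the mutant base of
# c.12682C>T on the template strand; on the wildtype base it cannot extend.
assert extends("A", "T")      # mutant allele -> 250bp product
assert not extends("A", "C")  # wildtype allele -> no product
print("toy RFLP and allele-specific PCR checks passed")
```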
Paper: The Mediating Role of Cognitive Emotion Regulation Strategies on Mindfulness, Anxiety, and Academic Procrastination in High Schoolers

* Corresponding Author: Azra Zebardast, PhD. Address: Department of Psychology, Faculty of Literature and Humanities, University of Guilan, Rasht, Iran. Tel: +98 (13) 33690274 E-mail: zebardast@guilan.ac.ir

Objective: The present study aimed to investigate the mediating role of cognitive emotion regulation strategies on the relationship between mindfulness, anxiety, and procrastination in high school students.

Introduction

Despite advances in novel and effective training methods, educators and researchers are concerned about the increasing rate of academic noncompliance due to Academic Procrastination (AP) at most educational levels (Krispenz, Gort, Schültke & Dickhäuser, 2019). Postponing the start or completion of academic homework for irrational reasons, namely AP, continues as long as the individual feels uncomfortable, and it creates a significant gap between intention and action. This is among the major reasons for academic failure (Young, 2010). According to Ferrari (2000), the causes of procrastination are classified into 3 categories: first, the emotions (arousal) class, in which individuals succeed in time-constrained situations and interpret such conditions as challenging and exciting; second, avoiders, who leave tasks undone to reduce the anxiety stemming from their low self-efficacy; third, decisional procrastinators, who are unable to make decisions within a specific timeframe and procrastinate due to their limited decision-making skill. Procrastination factors are situational and personal. A high volume of homework and poor time management are among the major situational factors in students' procrastination (Hussain & Sultan, 2010); fear of failure, depression, and anxiety are among the relevant personal factors (Chang, 2014). Findings that anxiety is associated with procrastination address it both as an antecedent and as an outcome (Chang, 2014). Anxiety is directly related to procrastination, which has a detrimental effect on students' mental health and academic performance; this impact is created by negative cognitive evaluation and severe physiological reactions (Afshari & Hashemi, 2019). According to prior research, 25%-40% of learners in state-run training centers experience anxiety (Putwain & Daniels, 2010). Students with anxiety experience unpleasant moods, physiological arousal, and anxious thoughts (Pekrun, Frenzel, Goetz & Perry, 2006); therefore, they tend to leave the situation and are more prone to procrastination. However, this result comes from studies that examined the relationship between anxiety and procrastination at a single point in time (Krispenz et al., 2019). Longitudinal studies suggest more complex causal relationships between anxiety and procrastination (Pekrun, Frenzel, Goetz & Perry, 2007). The appraisal-anxiety-avoidance model, proposed by Lazarus and Folkman (1984), claims that individuals procrastinate to avoid situations that they appraise as risky and feel unable to cope with. Contrarily, the Temporal Motivation Theory (TMT) states that procrastination is not always a compulsory consequence of anxiety.
Plain Language Summary

The study aimed to investigate the mediating role of cognitive emotion regulation strategies on the relationship between mindfulness, anxiety, and procrastination in high school students. The study sample consisted of 350 high school female students in Rasht City, Iran. The subjects responded to various questionnaires. Results demonstrated that there was a direct and significant relationship between academic procrastination, anxiety, and maladaptive cognitive emotion regulation strategies. There was an inverse and significant relationship between procrastination, adaptive cognitive emotion regulation strategies, and mindfulness. Mediation analysis revealed that maladaptive cognitive emotion regulation strategies exacerbated the effects of anxiety on academic procrastination; the indirect effect of anxiety on procrastination through adaptive strategies was also significant. Procrastination in students could be reduced by minimizing anxiety, correcting maladaptive cognitive emotion regulation strategies, and strengthening adaptive cognitive emotion regulation strategies. Anxiety may aggravate academic procrastination by generating maladaptive mechanisms.

Procrastination is also prevalent in students who have enough time to take the exam at the beginning of the semester. This finding is supported by longitudinal studies. For example, Tice and Baumeister (1997; cited in Krispenz et al., 2019) found that procrastinating students report less anxiety than those without procrastination; they only experience anxiety at the beginning of the course or academic year. Yerdelen, McCaffrey & Klassen (2016) also found that procrastinating students reported less anxiety over the same period, with increasing procrastination. Procrastination seems to serve as a coping strategy against anxiety and negative emotions in these students. Furthermore, some individuals are unable to identify their emotions and fail to adjust them in order to manage anxious situations. Emotions directly influence thinking to facilitate decision-making and action; accordingly, when experiencing negative emotions, people focus on more details and remember more errors, whereas when experiencing positive emotions, they seek positive results and perform better (Emmerling, Shanwal & Mandal, 2008). Emotion Regulation (ER) is defined as the process of initiating, maintaining, modifying, or changing the occurrence, intensity, or continuity of an inner feeling. It includes the awareness and understanding of emotions, the ability to manage and accept emotions, and the ability to act according to intended goals so as to meet individual goals and situational demands. Individuals cope with emotional distress using different strategies to regulate emotion, some of which are adaptive and others maladaptive. Adaptive ER enables one to function successfully in the environment; when faced with a problematic emotional experience, such individuals can adopt behaviors in line with their goals. Maladaptive ER strategies, like trauma-related rumination, emerge shortly after exposure to the harmful situation. Thus, individual differences in using various Cognitive Emotion Regulation (CER) styles lead to different emotional, cognitive, and social consequences (Garnefski, Kraaij & Spinhoven, 2001). The 9 CER strategies proposed by Garnefski et al.
(2001) include self-blame, other-blame, rumination, catastrophizing, acceptance, putting into perspective, positive refocusing, positive reappraisal, and refocus on planning. Studies have revealed that students' anxiety and emotion can be predicted from their ER strategies, especially during exams (Capa-Aydin, Sungur & Uzuntiryaki, 2009). Failure to successfully regulate emotions is an underlying mechanism of anxiety (Campbell-Sills & Barlow, 2007). Moreover, some researchers believe that negative emotion is an essential antecedent of procrastination; therefore, ER plays an effective role in procrastination. Besides, one causal explanation of procrastination holds that considering the subject or outcome of an assignment worthless causes procrastination (Chen & Han, 2017). Awareness of the worth of, and commitment to, the task is achieved through mindfulness. Mindfulness means paying attention to the present in a specific, goal-oriented manner and without judging internal and external experiences (Dudovitz, Li & Chung, 2013). There are several definitions of mindfulness; however, all emphasize two key elements: direct attention to the present, and openness and acceptance. Through mindfulness, reality is perceived with less distortion. Thus, events, even negative and unpleasant ones, are processed as they really are, preventing over-estimation and impulsive emotional reactions (Falkenström, 2010). Previous studies explored the effects of mindfulness on academic achievement and AP; with enhanced mindfulness, AP decreased and academic achievement increased (Mrazek et al., 2017). The underlying premise of the association between mindfulness and AP is that mindfulness enables one to recognize their capacity to experience positive emotions and to cope with mental events positively. In other words, mindfulness, at the fundamental level, is a form of consciousness and awareness that prevents constant rumination; as a useful coping style, it leads to adaptability (Mettler, Carsley, Joly & Heath, 2019). This points to an essential mediating role of CER strategies in the relationship between mindfulness and AP. Research results suggest that with increasing positive behavior and decreasing negative emotion, behavior patterns change, negative dysfunctional thoughts stop, and adaptation increases (Schroevers & Brandsma, 2010). CER has been considered by numerous researchers in recent years as a mediating variable (Taube-Schiff et al., 2015; Patron, Messerotti Benvenuti, Favretto, Gasparotto & Palomba, 2014). The reason for this attention is the literature supporting the relationship between different types of ER strategies and their consequences: adaptive CER strategies have protective effects, whereas maladaptive strategies have harmful ones (Grezellschak, Lincoln & Westermann, 2015). Education professionals seek to reduce AP in students by applying various research-based and motivational methods. According to the literature, data concerning the mediating role of CER strategies in the relationship between mindfulness, anxiety, and AP are scarce. Studies suggest that AP is affected by multiple complex relationships; however, most of these studies have investigated only one dimension of the relationship between academic achievement and other variables. Accordingly, they have disregarded the mediating roles or the effects of interventions on such relationships. There exists no coherent structural model in this regard.
Clarifying the mediating relationships in the form of structural relationships can help formulate effective programs to reduce AP. Therefore, the present study aimed to examine the structural relationships between mindfulness, anxiety, and AP, with emphasis on the mediating role of CER strategies, in female students.

Materials and Methods

This study is fundamental in terms of purpose and descriptive-correlational in method. Mindfulness and anxiety were considered exogenous variables, AP was the endogenous variable, and adaptive and maladaptive CER strategies were the mediating variables. The statistical population included all (n=800) high school female students of Rasht City, Iran, in the academic year 2018-2019. The students were selected from 8 schools in Rasht. The required sample size was 280 according to the Krejcie and Morgan (1970) table. Students were selected using a two-stage cluster sampling approach (first selecting schools, then random clusters of classes within schools) and from different fields of study. However, to counteract sample dropout and incomplete data, 20% attrition was allowed for, and the final sample size was set at 350 subjects. The inclusion criterion was an age of 14-18 years. The exclusion criteria were as follows: providing incomplete questionnaires; expressing dissatisfaction during the tests; and being unable to cooperate physically, mentally, or cognitively in the study. After obtaining informed consent under conditions of anonymity and confidentiality, and after a brief explanation of the tests and how to complete them, the participants completed 3 questionnaires along with a demographic questionnaire in their classes. The research tools included the following.

Academic Procrastination Scale (APS)

This questionnaire was developed by Solomon and Rothblum (1984) and has 27 items that examine 3 components: the first concerns preparation for exams and includes 8 items; the second concerns preparation for homework and includes 11 items; the third addresses preparation for semester-long (term) assignments and includes 8 items. The items are scored on a five-point Likert-type scale, ranging from 1 (never) to 5 (always), and some items are reverse-scored. In Iran, Namiyan & Hosen Chari (2011) obtained a Cronbach's alpha coefficient of 0.73 for the reliability of this scale. In this study, the 27-item version of the APS was used. The validity of the questionnaire was found to be satisfactory by factor analysis in the study of Jokar & Delavarpour (2007).

The Five-Facet Mindfulness Questionnaire (FFMQ)

This 39-item self-report scale was developed by Baer, Smith, Hopkins, Krietemeyer & Toney (2006) using a factor-analytic approach. The FFMQ is scored on a Likert-type scale ranging from 1 (never) to 5 (always). The developers conducted an exploratory factor analysis on a sample of students and named the obtained factors as follows: observing, describing, acting with awareness, non-judging, and non-reactivity. Based on their results, the internal consistency of each factor was appropriate. Furthermore, Cronbach's alpha coefficients ranged from 0.75 (non-reactivity factor) to 0.91 (describing factor).
In Iran, the FFMQ's test-retest correlation coefficients ranged between r=0.57 (non-judging factor) and r=0.84 (observing factor) (Ahmadvand, Heydarinasab & Shairi, 2013). Dehghan Manshadi, Taghavi & Dehghan Manshadi (2012) also reported internal consistency coefficients of 0.81-0.93 for the different dimensions of the FFMQ.

Cattell Anxiety Test (CAT) (Cattell, 1957)

The CAT has 40 questions with 3 response options (yes, no, and in-between), of which the subject chooses one. The test measures not only general anxiety but also hidden (20 items) and manifest anxiety (20 items), as well as 5 primary personality factors: lack of self-sentiment integration; lack of solidarity or general neuroticism; paranoid insecurity; tendency to a sense of guilt; and ergic tension; each has its own set of questions. The scoring is as follows: for one set of questions, the answers yes, no, and in-between receive 2, 0, and 1 points, respectively; for the other set, they receive 0, 2, and 1 points, respectively. In Iran, this test was standardized in 1989 by Dadsetan and Mansour among Iranian subjects; in total, 24,894 people participated in this standardization. The validity of this test, measured repeatedly, has always been >0.70 (Corraze, 2002).

Cognitive Emotion Regulation Questionnaire (CERQ)

This 36-item tool describes CER strategies in response to threatening life events on a five-point Likert-type scale, ranging from 1 (never) to 5 (always), across the following 9 subscales: self-blame, other-blame, rumination, and catastrophizing (these four components are the maladaptive CER strategies), and putting into perspective, positive refocusing, positive reappraisal, acceptance, and refocus on planning (the adaptive CER strategies). The minimum score on each subscale is 4 and the maximum is 20; higher scores indicate greater use of that cognitive regulation strategy (Garnefski & Kraaij, 2006). The psychometric properties of the CERQ were confirmed in previous studies (Garnefski & Kraaij, 2006; Garnefski et al., 2001). The Persian version of the 36-item CERQ was standardized by Hasani (2010) in the Iranian population; its reliability, based on Cronbach's alpha, was 0.51-0.77. The validity of the CERQ was supported by exploratory factor analysis with varimax rotation and by correlations between subscales (ranging from 0.32 to 0.67); its criterion validity was also reported to be optimal. In this study, Pearson's correlation coefficient and structural equation modeling were used to analyze the data in SPSS and AMOS.

Results

The Mean±SD age of the students was 16.23±1.54 years, and they had studied for 11.28±1.23 years. Data analysis was performed on 350 students. Univariate and multivariate outliers were identified and excluded from statistical processing using box plots and the Mahalanobis distance test, respectively. Table 1 presents the descriptive statistics, including Mean±SD scores of the demographic features of the study subjects, and the correlation coefficients of the main study variables. As per Table 1, there was a direct and significant relationship between AP, anxiety, and maladaptive CER strategies (P<0.0001).
There was a significant inverse relationship between AP, adaptive CER strategies, and mindfulness (P<0.0001). Moreover, the significance level of the Kolmogorov-Smirnov test for all variables was above 0.05; therefore, with 95% confidence, the distributions of the variables' scores were normal. The maximum likelihood method was used to estimate the structural model and fit it to the collected data. Mardia's normalized multivariate kurtosis value was employed to investigate multivariate normality; it was 82.03 in the present study, below the threshold of 624 given by the formula p(p+2), where p is the number of observed variables in the model, i.e., 24 in this study (Teo & Noyes, 2014). The indices proposed by Gefen, Straub & Boudreau (2000) were used to determine model fit. These indicators include χ2/df, with acceptable values <3, and the Goodness of Fit Index (GFI) and Comparative Fit Index (CFI), for which values >0.9 indicate a good model fit (Table 2). According to Table 2, the CFI, AGFI, GFI, and Normed Fit Index (NFI) were higher than the values recommended by Gefen et al. (2000), while the RMSEA and χ2/df values were below the respective cutoffs. Based on these findings, the tested model (after modification, deleting the procrastination vectors and incorporating procrastination in the model as an observed variable) showed a good fit. Figure 1 shows the final model with standardized path coefficients. As Figure 1 illustrates, the path from mindfulness to adaptive CER strategies (β=0.06) was not significant (P>0.05). The path from anxiety to maladaptive CER strategies (β=0.12) and the path from maladaptive CER strategies to procrastination (β=0.13) were significant at P<0.05. The remaining paths in Figure 1 were significant at P<0.01. An underlying assumption of the structural model in the present study was the presence of indirect, or mediating, pathways. Bootstrap analysis in the Preacher & Hayes (2008) macro for SPSS was used to determine the significance of each mediating relationship, i.e., the indirect effects of the independent variable on the dependent variable through the mediator variable (a minimal numerical sketch of this procedure is given at the end of the Discussion). The bootstrap results for the mediating paths of the proposed model are in Table 3, which indicates that the mediating pathways from anxiety to AP through both adaptive and maladaptive CER strategies were significant. The confidence interval was 0.95 and the number of bootstrap resamples was 5000. Given that zero lies outside the confidence intervals, these mediating relationships were significant. The remaining mediating relationships, those involving the exogenous variable mindfulness, were not significant.

Discussion

The present study aimed to investigate the structural relationships between mindfulness and anxiety, with the mediating role of CER strategies, on AP in female students. The initial results showed a direct relationship between AP, anxiety, and maladaptive CER strategies, and an inverse and significant relationship between AP, adaptive CER strategies, and mindfulness. These results are in line with those of others, such as Afshari & Hashemi (2019), Ekert et al. (2016), Habibi (2020), Yaghobi, Ghalaei, Rashid & Korde Nughabi (2015), and Ghasemi Jobaneh et al. (2016). Procrastination is a cognitive problem, and it showed a significant relationship both with a positive cognitive construct (mindfulness) and with a negative element (anxiety). According to theorists, mindfulness is significantly associated with reduced procrastination.
This is achieved through increased self-control and by breaking the loop of negative, crippling thoughts in the impaired cycle of motivation and action (Howell & Buro, 2011). Cognitive anxiety, in the form of negative self-evaluation, low self-esteem in comparison with others, and insecurity, leads to the adoption of maladaptive CER strategies, such as self- or other-blame or catastrophizing, sapping strength and perseverance; accordingly, it widens the gap between intention and action and becomes a cause of procrastination. The results of structural equation modeling and bootstrap analysis revealed that maladaptive CER strategies mediated the relationship between anxiety and AP; in other words, maladaptive strategies exacerbate the effect of anxiety on AP. Students with higher levels of maladaptive CER strategies are also more prone to experience greater anxiety. This finding is consistent with those of other studies, such as Krispenz et al. (2019), Gharibnavaz, Nouri Ghasem Abadi & Moghadasin (2018), and Haghshenas, Nouri, Moradi & Sarami (2014). The direct and mediating effects of maladaptive CER strategies on the relationship between anxiety and AP indicate that individuals who cannot manage and control daily negative events experience longer confusion and consequently develop specific anxieties. Moreover, the mediating role of CER strategies between anxiety and AP was significant; thus, students who moderated and controlled their anxiety experienced less procrastination. Campbell-Sills and Barlow (2007) identified maladaptive CER strategies as underlying mechanisms of anxiety; there is therefore an intensifying, crippling relationship between the two. The present results were consistent with those of previous investigations (Mettler et al., 2019; Porparizi, Towhidi & Khezri Moghadam, 2018) concerning the significant mediating role of adaptive strategies between mindfulness and procrastination, alongside the inverse and significant relationship between mindfulness and procrastination. Accordingly, mindfulness, with present-centered conscious judgment and increased concentration, could help individuals become less entrapped in procrastination. In other words, it can reduce procrastination by activating adaptive CER strategies, such as refocusing and re-evaluating. The present study had limitations, such as restricting the sample to female students at one educational level in Rasht City (Northern Iran); thus, caution is required in generalizing the data. The research design was descriptive and correlational, and the measurements were performed with self-report questionnaires, which prevent causal inference and comprehensive evaluation. Therefore, the researchers suggest considering male student samples of different educational levels in other cities and using non-descriptive designs together with multi-faceted interviews, observation, and self-report questionnaires in future studies. Given the emphasis of the present study on the mediating role of CER strategies in the relationship between anxiety and AP in female students, it is recommended that educators, therapists, and parents, while examining CER strategies in students, help them reduce AP through group- and individual-based training courses. Besides, identifying and correcting maladaptive CER strategies, and learning and reinforcing adaptive CER strategies, are suggested as ways of preventing the destructive effects of anxiety on AP in students.
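As flagged in the Results, the following is a minimal numerical sketch of the percentile-bootstrap test of an indirect effect. It is illustrative only: the data are simulated, the simple a*b product-of-paths setup stands in for the full structural model, and the 5000 resamples with a 95% interval merely mirror the settings reported above.

```python
# Minimal percentile-bootstrap sketch of an indirect (mediated) effect,
# mirroring the 5000-resample, 95% CI settings reported in the Results.
# Data are simulated; this is not the study's actual dataset or model.
import numpy as np

rng = np.random.default_rng(0)
n = 350  # matches the study's sample size

# Simulate anxiety -> maladaptive CER -> procrastination with small paths.
anxiety = rng.normal(size=n)
maladaptive = 0.12 * anxiety + rng.normal(size=n)                   # a-path
procrast = 0.13 * maladaptive + 0.3 * anxiety + rng.normal(size=n)  # b-path + direct


def indirect_effect(x, m, y):
    """Product-of-paths a*b from two OLS fits (x -> m, then m -> y given x)."""
    a = np.polyfit(x, m, 1)[0]                       # slope of x -> m
    design = np.column_stack([np.ones_like(x), x, m])
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    b = beta[2]                                      # effect of m on y given x
    return a * b


boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot[i] = indirect_effect(anxiety[idx], maladaptive[idx], procrast[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.4f}, {hi:.4f}]")
print("significant (zero outside CI)" if lo > 0 or hi < 0 else "not significant")
```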
Conclusion

Procrastination in students could be reduced by minimizing anxiety, correcting maladaptive CER strategies, and strengthening adaptive CER strategies. Anxiety may aggravate AP by generating maladaptive mechanisms.

Compliance with ethical guidelines

All the study procedures complied with the ethical guidelines of the Declaration of Helsinki (2013).

Funding

This research did not receive any grant from funding agencies in the public, commercial, or non-profit sectors.
Muscle dysmorphia: Could it be classified as an addiction to body image?

Background

Muscle dysmorphia (MD) describes a condition characterised by a misconstrued body image in which individuals interpret their body size as small or weak, even though they may look normal or highly muscular. MD has been conceptualized as a type of body dysmorphic disorder, as an eating disorder, and as obsessive-compulsive disorder symptomatology.

Method and aim

Through a review of the most salient literature on MD, this paper proposes an alternative classification of MD, the 'Addiction to Body Image' (ABI) model, using Griffiths' (2005) addiction components model as the framework in which to define MD as an addiction.

Results

It is argued that the addictive activity in MD is the maintaining of body image via a number of different activities, such as bodybuilding, exercise, eating certain foods, taking specific drugs (e.g., anabolic steroids), shopping for certain foods and food supplements, and the use or purchase of physical exercise accessories. In the ABI model, the perception of the positive effects on the self-body image is accounted for as a critical aspect of the MD condition (rather than addiction to exercise or certain types of eating disorder).

Conclusions

Based on empirical evidence to date, it is proposed that MD could be re-classified as an addiction, since the individual continues to engage in maintenance behaviours that may cause long-term harm.

INTRODUCTION

Muscle dysmorphia (MD) describes a condition characterised by a misconstrued body image in which individuals interpret their body size as both small and weak, even though they may look normal or even be highly muscular (Pope et al., 2005). Those experiencing the condition typically strive for maximum fat loss and maximum muscular build. MD can have potentially negative effects on thought processes, including depressive states, suicidal thoughts, and in extreme cases suicide attempts (Pope et al., 2005). These negative psychological states have also been linked with concurrent use of Appearance and Performance Enhancing Drugs (APEDs), including Anabolic Androgenic Steroids (AAS) (Mosley, 2009; Pope et al., 2005). The use of these substances may relate not just to body image, but also to social or sexual aspects, such as producing an enhanced libido or a sense of physical and psychological wellbeing (Cohen, Collins, Darkes & Gwartney, 2007). MD was originally categorised by Pope, Katz and Hudson (1993) as reverse anorexia nervosa, owing to its characteristic symptoms in relation to body size. It has been considered part of the spectrum of Body Dysmorphic Disorders (BDD): one of a range of conditions that tap into issues surrounding body image and eating behaviours (McFarland & Karninski, 2008). Parallels have also been drawn with Obsessive-Compulsive Disorder (OCD), given some similarities in symptom expression like ritualistic activity (Phillips, 1998). Consequently, there is a lack of consensus amongst researchers on whether MD is a form of BDD, OCD, or a type of eating disorder (e.g. Jones & Morgan, 2010; Maida & Armstrong, 2005; Murray, Rieger, Touyz & De la Garza Garcia, 2010; Nieuwoudt, Zhou, Coutts & Booker, 2012; Pope, Gruber, Choi, Olivardia & Phillips, 1997; Pope et al., 2005). In this paper, the limitations of these classification approaches are discussed, and an alternative model is proposed: the 'Addiction to Body Image' (ABI) model.

HOW IS MUSCLE DYSMORPHIA CURRENTLY CLASSIFIED?
BDD is characterised by a preoccupation with a perceived defect in physical appearance that leads to substantial functional impairment (American Psychiatric Association, 2013). Such a definition can include MD, and in the latest DSM-5, muscle dysmorphia was added as a specifier to the BDD diagnostic criteria. This representation of muscle dysmorphia is supported by authors such as Pope et al. (1997). In the context of a preoccupation with the belief that one's body is not sufficiently muscular and lean, and excessive attention to exercise, lifting weights and diet (possibly including supplements and AAS), the criteria outlined by Pope et al. (1997), of which two or more need to be present for a diagnosis of the condition, are:

1. Giving up important activities of a social, work or recreational nature due to a strong need to maintain activities in relation to workouts and diet control.
2. Active avoidance of situations where their body is displayed to others, and intense distress/anxiety in these situations when they are unavoidable.
3. Clinically significant distress arising from preoccupation with their body fat, size, or musculature.
4. Continuation of dietary control and exercise, despite knowledge of adverse physical or psychological consequences.

The International Classification of Diseases (ICD-10) also classifies MD with other BDD conditions, in section F45.2, entitled hypochondriacal disorder. Essential features include somatic complaints, preoccupation, and distress in relation to physical appearance. The category appears to refer to a heterogeneous range of conditions, and the somatoform description of the MD condition appears unwarranted. Somatoform disorders relate to physical symptomatology that is difficult to explain in terms of physical disease, substance use, or other mental disorder. Mosley (2009) considered the 'somatoform' description incongruent with MD; Maida and Armstrong (2005) concurred, given that MD symptoms were found to be unrelated to symptoms of somatoform disorder in men who regularly lifted weights. Other classifications consider MD to be part of obsessive-compulsive disorder symptomatology. A shift of BDDs to be classified as OCD spectrum disorders was considered but rejected due to a lack of evidence (Phillips & Hollander, 1996). There are similarities in symptom expression, including intrusive fear, ritualistic actions, and obsessions in the course of the illness (Bienvenu et al., 2000; Phillips, 1998; Phillips, Dwight & McElroy, 1998; Phillips, Gunderson, Mallya, McElroy & Carter, 1998; Rosen, Reiter & Orosan, 1995; Zimmerman & Mattia, 1998). Despite overlaps in symptoms and comorbid conditions, Phillips, Gunderson et al. (1998) note important disparities in social isolation, delusions, and insight that cast doubt on MD's suitability for classification on the OCD spectrum. Some parallels are also drawn with eating disorders such as anorexia nervosa or bulimia nervosa, given the extent of attention to diet and exercise and the dissatisfaction with body image (Mangweth et al., 2001; Olivardia, Pope, Mangweth & Hudson, 1995). Eating disorders, as presented in the Diagnostic and Statistical Manual, are characterised by severe disturbances in eating behaviour and a preoccupation with eating (American Psychiatric Association, 2013). The rigour with which an individual pursues the body ideal is similar across the different types of eating disorder and MD. However, the goals being pursued are very different (e.g.
the intrusive fears around weight concern weight gain in anorexia nervosa but weight loss in MD). Additionally, the disordered eating could be considered a secondary feature of the MD condition (Olivardia, 2001), and thus classification as a disorder of 'eating' is not sufficient. Other authors have mentioned that MD could perhaps be classed as an addiction, although with limited explanation.

AN ALTERNATIVE CLASSIFICATION: 'ADDICTION TO BODY IMAGE' MODEL

The 'Addiction to Body Image' (ABI) model attempts to provide an operational definition and to introduce a standard assessment across the research area. The ABI model uses the addiction components model of Griffiths (2005) as the framework in which to define muscle dysmorphia as an addiction. For the purposes of this paper, body image is defined as a person's "perceptions, thoughts and feelings about his or her body" (Grogan, 2008, p. 3). The addictive activity is the maintaining of body image via a number of different activities, such as bodybuilding, exercise, eating certain foods, taking specific drugs (e.g., anabolic steroids), shopping for certain foods and food supplements, and the purchase or use of physical exercise accessories. Addiction is defined as the use of a substance or activity that becomes all-encompassing to the user and comprises all six of Griffiths' (2005) addiction components. Each of these components is described below in the context of MD symptomatology and behavioural maintenance.

Salience

A person with an ABI may: (i) have cognitive disturbances that lead to a total preoccupation with activities that maintain body image, such as physical training and eating according to a strict dietary intake (Veale, 2004); (ii) be able to perform other tasks such as work and shopping (explained by reverse salience, see below), as these tasks will be designed and built around being able to engage in specific body image maintenance behaviours such as physical exercising and eating (Olivardia, Pope & Hudson, 2000); and (iii) be able to manipulate their personal situation to ensure they can perform these maintenance tasks (Mosley, 2009). The individual with ABI may even change or forego career opportunities and other daily activities if they would reduce their ability to train or control eating behaviour during the day (Murray et al., 2010).

Reverse salience

If the person with ABI cannot engage in maintenance behaviours such as training or eating regimes, their thought processes are likely to be excessively preoccupied by the need to carry out the desired behaviours to maintain body image (Olivardia et al., 2000). This may result in the manifestation of physical symptoms. More specifically, the cognitive disturbance creates a negative thought process that facilitates the manifestation of physical symptoms (e.g., shakes, sweating, nausea, etc.), as seen in other addictions. Due to some of the dietary restrictions the person with ABI places upon their body, physical symptoms such as fainting and falling unconscious may be present because of low blood sugar levels.

Mood modification

For an individual with ABI, being able to engage in the maintenance behaviours brings a sense of reward. As a consequence, training and food intake (either restrictive or over-eating) should facilitate the release of endorphins into the bloodstream, which would increase positive mood.
The physical act of engaging in physical exercise and training (whether cardio- or weight-based) may produce a physical state whereby the muscles are engorged with blood (which, at its peak, is known as a 'pump'). This pump brings a sense of euphoria and happiness to the person (Elliot, Goldberg, Watts & Orwoll, 1983). The ABI model proposes that engaging in the maintenance behaviours, for example weight training, will create a chemical high generated by the body through the release of chemicals such as endorphins (Griffiths, 1997). A person with ABI will desire these chemical changes, and this may have the same effect (both physiologically and psychologically) as other psychoactive substances. Once their maintenance behaviours have been completed, the person's mood will relax due to completion of the activity, and the person may also have a feeling of euphoria, a sense of inner peace, or an exceptional high. This feeling has been linked to the use of AAS in gym training (Mosley, 2009). The person with ABI will need to control their food intake (i.e., more or less protein and carbohydrate). The ABI model proposes that this will become a secondary dependence, the food intake being part of the process that maintains the primary dependence (i.e., the sculpting of the body). This is due to the body adapting to the number of calories it is being fed, but also to the requirement of being lighter or heavier, and for longer, which in turn allows the person to obtain the desired body shape.

Tolerance

The person with ABI may need to increase the levels and intensity of the training or the food restriction (i.e., the maintenance behaviours) to achieve the desired physiological and/or psychological effects. This can be achieved through different training strategies or by the consumption of different foods. In some circumstances, it may be achieved through the use of psychoactive substances such as AAS or other appetite-inhibiting drugs. Record keeping of training sessions and seeking out changes in activities may assist the individual in combatting the effects of tolerance (Mosley, 2009).

Withdrawal

The person with ABI is expected to have negative physical and/or psychological effects if they are unable to engage in the maintenance activities. These would be likely to include one or more psychological and/or physical components (Griffiths, 2005), such as intense moodiness and irritability, anxiety, depression, nausea, and stomach cramps. They will not be able simply to stop the maintenance behaviours without experiencing one or more of these symptoms.

Conflict

The person with ABI becomes focused on their maintenance behaviours of training and/or eating. These behaviours can become all-consuming, and the need to train, control diet, and exercise may conflict with their family, their work, the use of resources (e.g., money), and their life in general. An individual quoted in Mosley (2009) noted "bodybuilding is my life, so I make sacrifices elsewhere" (p. 194). In some cases of the addiction, the process is thought to have healthy physical consequences and to add to life in the short term; in the long term, however, the addiction will detract from the person's overall quality of life.

Relapse

If the person with ABI manages to stop the maintenance behaviours for a period of time, they may be susceptible to triggers to re-engage in the behaviours again.
CBT approaches for the treatment of MD include aspects that address triggers and reinforcing behaviour, and that reduce stress around maintaining body image, to lower the likelihood of relapse (Grieve, Truba & Bowersox, 2009). When a person with ABI re-engages with the behaviours, they may go straight back into previous destructive training and eating patterns.

The ABI model differs from other addiction models in relation to the primary and secondary dependencies. For instance, in exercise addiction, the individual has the primary goal of exercising, and the cognitive dysfunction in this condition centres on the act of exercising in and of itself (Berczik et al., 2012). If the person loses weight or increases their body size through their exercise, this is a secondary dependence, as it is a natural consequence of the primary dependence and not the primary goal. In MD, the primary dependence is the maintenance of behaviours that facilitate body size change, driven by the cognitive dysfunction of negative perceptions of their body image. Exercise and/or dietary controls are the secondary dependence, as they assist in achieving the primary goal of maintaining the desired body size and composition. In addition, exercise addiction tends to relate to compulsive aerobic exercise, with the endorphin rush deriving from the physical exertion rather than from a reward in physique change. Pope et al. (1997) also note that (to a degree) aerobic exercise may be avoided by those with MD, as it may reduce muscle size. In the ABI model, the perception of the positive effects on the self-body image is accounted for as a critical aspect of the MD condition. The maintenance behaviours of those with ABI may include healthy changes to diet or increases in exercise. However, such behaviours can hide, or mislead those with ABI away from, the negative thought processes that are driving their addiction. It is in the cognitive dysfunction of MD that we believe there is a pathological issue, and this is why the field has encountered problems with the criteria for the condition. The attempt to explain MD in the same manner as other BDDs may not be adequate, because the cognitive dysfunction occurs in the context of potentially positive physical effects via improvements in the shape, tone, and/or health of the body. The ABI model supports the findings of Pope et al. (2005): there is a difference in the cognitive dysfunction, with a misconstrued self-body image, compared with other BDDs. The cognitive dysfunction causes the individual with MD to have a misconstrued view of their own body image, and the person may believe they are small and puny. This negative mindset has the potential to cause depression and other disorders, and may facilitate the addiction. Unlike other conceptualizations of MD in the BDD literature, we would argue that the agent of the addiction is the perceived body image, which is maintained by engaging in secondary behaviours such as specific types of physical activity and eating. The most important thing in the life of someone with MD is how their body looks (i.e., their body image). The behaviours that the person with MD engages in (such as excessive exercise or disordered eating) are merely the vehicles by which their addiction (i.e., their perceived body image) is maintained. Based on empirical evidence to date, we propose that muscle dysmorphia could be re-classified as an addiction, since the individual continues to engage in maintenance behaviours that cause long-term psychological damage.
More research is needed to explore the possibilities of MD as an addiction, and how this particular addiction is linked to substance use and other comorbid health conditions. Controversy about the conceptual measurement of the condition has led to a number of different scales, adapted from different criteria, that may not fully measure the experience of MD (Cafri & Thompson, 2007). However, a set of questions that might test the applicability of the ABI approach to measuring and conceptualising MD has not yet been asked. Questionnaires such as the Exercise Addiction Inventory (Griffiths, Szabo & Terry, 2005; Terry, Szabo & Griffiths, 2004) and the Bergen Work Addiction Scale (Andreassen, Griffiths, Hetland & Pallesen, 2012) could be adapted to fit MD characteristics (a hypothetical scoring sketch for such an adaptation is given after the declarations below). Adequate conceptualisation is key to exploring this clinically relevant condition (Kuennen & Waldron, 2007). This new ABI approach may also have implications for diagnostic systems around similar conditions such as other BDDs or eating disorders. Theoretical and empirical work exploring these in an addiction context would be welcomed.

Funding source: None.

Authors' contribution: All authors contributed to the writing of the paper. The paper was based on an idea originally formulated by the first author. The second and third authors subsequently developed the idea.

Conflict of interest: The authors declare no conflict of interest.
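Picking up the adaptation idea flagged above, the following is a purely hypothetical sketch of how an EAI-style six-item screen could be rewritten around the ABI model, with one item per Griffiths (2005) component. The item themes, the 1-5 Likert scoring, and the at-risk cutoff of 24 mirror the Exercise Addiction Inventory's published format; their transfer to MD is an untested assumption, not a validated instrument.

```python
# Hypothetical adaptation of an EAI-style six-item screen to the ABI model.
# One item per Griffiths (2005) component; the 1-5 Likert scoring and the
# at-risk cutoff of 24 mirror the Exercise Addiction Inventory, but their
# transfer to muscle dysmorphia is an assumption, not a validated measure.

COMPONENTS = [
    "salience",           # body image maintenance dominates thinking
    "mood_modification",  # maintenance behaviours used to change mood
    "tolerance",          # ever more training/diet control needed
    "withdrawal",         # irritability/anxiety when maintenance is blocked
    "conflict",           # clashes with family, work, or finances
    "relapse",            # reverting to old patterns after abstinence
]

AT_RISK_CUTOFF = 24  # EAI convention: total >= 24 flags at-risk status


def score_abi_screen(responses: dict) -> tuple:
    """Sum six 1-5 Likert responses (one per component) and flag risk."""
    if set(responses) != set(COMPONENTS):
        raise ValueError("one response per addiction component is required")
    if not all(1 <= v <= 5 for v in responses.values()):
        raise ValueError("responses must be on a 1-5 Likert scale")
    total = sum(responses.values())
    return total, total >= AT_RISK_CUTOFF


# Example: a respondent endorsing most components strongly.
example = dict(zip(COMPONENTS, [5, 4, 4, 5, 4, 3]))
total, at_risk = score_abi_screen(example)
print(f"total={total}, at_risk={at_risk}")  # total=25, at_risk=True
```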
Study of the efficacy of elastic intramedullary nails in the treatment of primary aneurysmal bone cysts in extremities

Background

The main treatment method for primary aneurysmal bone cyst (ABC) in extremities is curettage and bone grafting with high-speed burring; radiotherapy, sclerotherapy, arterial embolization and hormone therapy can be used for lesions whose location cannot easily be exposed surgically. Regardless of the method, high recurrence rates are a common problem.

Purpose

To explore and compare the effect and efficacy of elastic intramedullary nails in the treatment of primary ABC in extremities.

Method

26 patients with primary ABC admitted and treated from 2010 to 2016 were studied retrospectively. The 26 patients were divided into 2 groups according to the treatment plan: the patients of the control group received curettage and bone grafts with high-speed burring; the patients of the study group received curettage and bone grafts with high-speed burring plus an elastic stable intramedullary nail (ESIN). Based on the imaging results (Neer grading) and the MSTS (Musculoskeletal Tumor Society) functional evaluation, the curative effect in the children of the 2 groups was analyzed statistically.

Results

A total of 10 patients of the control group received curettage and bone grafts with high-speed burring, and a total of 16 patients of the study group received curettage and bone grafts with high-speed burring + ESIN. All of the patients were followed up for more than 2 years. 9 of the 26 patients recurred. According to the imaging results and MSTS functional evaluation, there was a statistically significant difference in the curative effect between the 2 groups (P < 0.05). The recurrence cases in the study group had better MSTS functional recovery.

Conclusion

Curettage and bone grafts with high-speed burring + ESIN can significantly reduce the recurrence rate of primary ABC in extremities. The use of ESIN has a good curative effect and efficacy.

Background

Aneurysmal bone cyst (ABC) was originally described by Jaffe and Lichtenstein in 1942 and has a low incidence (about 1.4-3.2 per million people) [1]. A large amount of literature and clinical data indicate that ABC is a clinically rare benign bone tumor. However, some researchers have presented ABC as a malignant tumor, or as an intermediate tumor with malignant tendency. Imaging shows that ABC is a polycystic, expansile osteolytic lesion; 70% of cases are primary and 30% are secondary. Up to now, the causes and pathogenic mechanisms of ABC are still unknown, which has led to diversified treatment methods. Currently, the main treatment method for ABC is curettage and bone grafts with high-speed burring.
However, the high recurrence rate of ABC remains a recognized challenge worldwide. Reported recurrence rates range from 11.8% to 20%, with a typical interval of about 10 months between surgery and recurrence. Other reports have found that the recurrence rate is generally higher in young patients than in adults, which may be correlated with location of the focus in or near the epiphysis [2-4]. This study retrospectively analyzed the clinical data of children with primary ABC in the extremities and explored the effectiveness of ESIN in its treatment.

I. Case screening
The clinical data of patients who received surgical treatment after being diagnosed with primary ABC in our hospital from 2010 to June 2016 were recorded. Inclusion criteria: age between 2 and 14 years; foci in the long bones of the extremities; follow-up of more than 2 years; surgery performed by surgeons with at least 10 years of experience; and complete clinical and imaging follow-up data. Exclusion criteria: tumor foci located in flat or short bones; patients who received only plaster immobilization, without surgical treatment, because of pathological fracture; and patients with secondary ABC.

II. Treatment method
Surgery was performed under general anesthesia. Control group: curettage and bone grafting with high-speed burring, followed by plaster immobilization for 6-8 weeks in the upper limbs and 8-12 weeks in the lower limbs. Study group: curettage and bone grafting with high-speed burring, with ESIN placed under fluoroscopic guidance in an ascending manner (2-C configuration). The diameter of the ESIN (2.0 to 3.0 mm) was chosen on the basis of the digitally measured preoperative anterior-posterior radiograph; plaster immobilization lasted 4 weeks in the upper limbs and 6 weeks in the lower limbs.

III. Follow-up and efficacy evaluation
All patients were reviewed with clinical examination, X-rays, and functional evaluation four to eight weeks after the operation, then every three months until complete bone mineralization had occurred and removal of the nails was possible. To confirm long-term effectiveness, a further evaluation was arranged one year after the nails were removed. Treatment results were classified according to the scheme used by Neer [5]:
- Grade 1 = no response: the cyst showed no evidence of response to the treatment.
- Grade 2 = recurrence: the cyst initially consolidated with bone, but large areas of osteolysis and cortical thinning subsequently recurred.
- Grade 3 = healed with residual cyst: the cyst was consolidated with bone and the cortical margin thickened, but residual cystic areas remained.
- Grade 4 = healed: the cyst was completely filled in with bone and the cortical margin thickened.
Functional evaluation was performed by MSTS scoring [6]. MSTS and radiographic evaluation at the last follow-up were conducted for both treatment groups.

IV. Statistical analysis
The SPSS 20.0 statistical software package (SPSS, USA) was used for statistical processing. The χ2 test was used to compare the recurrence rate between the control group and the study group, the R × C contingency table χ2 test was used to compare the prognosis of the two groups, and the two-independent-samples t test was used to compare the MSTS scores. Kaplan-Meier survival curves were used to analyze the prognosis of the two groups and the relationship between prognosis and anatomical location; the two-sided test level was α = 0.05.
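The same comparisons can be illustrated outside SPSS. Below is a minimal sketch in Python with scipy (not part of the original analysis). It assumes the 2 × 2 recurrence table implied by the Neer grades reported in the Results (study group: 3 recurrences among 16 patients; control group: 6 among 10) and reuses the published MSTS summary statistics; per-patient raw data were not available, so the t test is computed from means, standard deviations, and group sizes.

```python
# Minimal sketch of the group comparisons reported in this study.
# Assumption: 2x2 recurrence table reconstructed from the reported Neer grades
# (grades I-II = recurrence): study group 3/16, control group 6/10.
import numpy as np
from scipy import stats

# Rows: study group, control group; columns: recurrence, no recurrence.
table = np.array([[3, 13],
                  [6, 4]])

# correction=False gives the plain Pearson chi-square; the scipy default
# applies Yates' continuity correction for 2x2 tables.
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p_chi2:.3f}")

# With expected cell counts this small, Fisher's exact test is a common check.
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact p = {p_fisher:.3f}")

# Two-sample t test on MSTS scores from the reported summary statistics
# (28.88 +/- 2.22, n = 16 versus 19.3 +/- 7.83, n = 10).
t_stat, p_t = stats.ttest_ind_from_stats(
    mean1=28.88, std1=2.22, nobs1=16,
    mean2=19.30, std2=7.83, nobs2=10,
    equal_var=False)  # Welch's t test, an assumption given the unequal SDs
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
```

Note that the continuity-corrected chi-square and Fisher's exact test are more conservative than the plain Pearson statistic at these sample sizes, so the exact p values from this sketch may differ somewhat from those reported in the paper.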
I. General data
A total of 26 primary ABC patients were included in the study, and all conformed to the typical pathologic histology of ABC (Fig. 1). Histological evaluation is mandatory for the accurate diagnosis of ABC. Grossly, ABC is a spongy, hemorrhagic mass covered by a thin shell of reactive bone. Microscopically, red blood cells and pale brown hemosiderin are abundant, filling cyst-like spaces bounded by septal proliferations of fibroblasts, mitotically active spindle cells, osteoid, calcifications, and scattered multinucleated giant cells [7]. Table 1 shows the distribution of the foci. There were 16 cases in the study group and 10 cases in the control group; the mean ages of the two groups were 5.94 (range 2-14) years and 7.20 (range 2-12) years, with no statistically significant difference between the groups (P > 0.05).

II. Prognostic analysis
In the study group, the Neer grading results were 3 cases of grade I, 0 of grade II, 1 of grade III, and 12 of grade IV; in the control group, they were 2 cases of grade I, 4 of grade II, 3 of grade III, and 1 of grade IV. In addition, there were 2 cases of postoperative pathological fracture in the control group and 1 case of skin irritation in the study group; no patient in the study group had severe infection, scar formation, or postoperative pathological fracture. During the follow-up period, postoperative MSTS functional scoring showed that the functional score of the patients in the ESIN treatment group (28.88 ± 2.22) was statistically significantly higher than that of the control group (19.3 ± 7.83) (P < 0.001). According to the postoperative MSTS functional evaluation, patients in the study group were still able to participate in physical exercise even when the foci recurred, no pathological fracture occurred, and the postoperative quality of life of the patients in the study group was greatly improved (Fig. 2).

III. Statistical analysis
The chi-square test revealed a statistically significant difference in postoperative Neer grading between the two groups (P = 0.003), and the Neer grading prognosis of the study group was significantly better than that of the control group. Following the literature, Neer grades I-II were defined as recurrence (treatment failure) and grades III-IV as cured (treatment success). The Kaplan-Meier curves and log-rank test revealed a statistically significant difference between the two groups (P = 0.028) (Fig. 3). Statistical analysis of the relationship between the recurrence rate and the lesion site in the study group showed no significant correlation (P = 0.092) (Fig. 4).
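For readers who want to reproduce the survival comparison shown in Fig. 3, the sketch below uses the lifelines library. The per-patient recurrence-free follow-up times were not published, so the durations and event indicators below are hypothetical placeholders; only the group sizes (16 and 10) and the event counts (3 and 6 recurrences) match the study.

```python
# Sketch of a Kaplan-Meier / log-rank comparison of recurrence-free survival.
# NOTE: follow-up times are hypothetical; the paper does not publish them.
# Only group sizes (16 study, 10 control) and event counts (3 vs 6) are real.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Durations in months; event = 1 means recurrence observed, 0 means censored.
study_t = np.array([30, 26, 35, 28, 41, 33, 29, 36, 27, 38, 31, 25, 40, 12, 18, 22])
study_e = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
ctrl_t = np.array([28, 34, 26, 30, 8, 10, 11, 9, 14, 12])
ctrl_e = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

# Fit one Kaplan-Meier estimator per group and overlay the curves.
kmf_study = KaplanMeierFitter().fit(study_t, event_observed=study_e,
                                    label="curettage + ESIN")
kmf_ctrl = KaplanMeierFitter().fit(ctrl_t, event_observed=ctrl_e,
                                   label="curettage alone")
ax = kmf_study.plot_survival_function()
kmf_ctrl.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two recurrence-free curves.
result = logrank_test(study_t, ctrl_t,
                      event_observed_A=study_e, event_observed_B=ctrl_e)
print(f"log-rank p = {result.p_value:.3f}")
```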
Discussion
The standard treatment of ABC is curettage with or without bone grafting. Despite the best efforts at curettage, highly variable postoperative recurrence rates have been reported. As a result, various auxiliary methods have evolved to reduce recurrence, including the use of cement, high-speed burring, argon beam coagulation, phenol, and cryotherapy. Most of our patients had disabilities because of the tumor and its treatment, although none died or required amputation. ABC is difficult to treat during its aggressive stage. Lesions that occur in the proximal femur should be treated more aggressively, partly because of the high rate of local recurrence and the high risk of fracture. The most appropriate techniques for some of these tumors are curettage and allograft implantation. However, the quality of life of patients with recurrence is significantly reduced.

To date, the cause of ABC remains unknown. Traditionally, the consensus has been that it is associated with increased pressure in local blood vessels. In 1950, Lichtenstein et al. [8] proposed that ABC should not be defined as a bone tumor but as a reactive disease of increased intraosseous pressure caused by an intraosseous vasogenic disorder (intraosseous phlebemphraxis or arteriovenous fistula). In 1995, Kransdorf et al. [9] described the ABC focus as a site of hemorrhage and proposed that continuous bleeding from intraosseous capillaries created a cavity; osteolytic change could result from rapid expansion of the sclerotin in the lesion area as bone cysts formed. According to Mirra et al. [10], the so-called aneurysmal bone cyst is neither a cyst nor a neoplasm; rather, it is probably a periosteal arteriovenous malformation in bone, not uncommonly seen in association with other well-known benign and even malignant lesions. In the last 10 years, however, many researchers have proposed that the formation of ABC is correlated with gene mutation and have regarded ABC as a bone tumor rather than a disease caused by local bleeding. Ye Y et al. [11] held that primary ABC has now been identified as an independent neoplasm. The oncogenic event responsible for ABC is a gain-of-function translocation t(16;17)(q22;p13) involving the ubiquitin-specific protease gene TRE17/USP6. In ABC, this mutation induces matrix metalloproteinase (MMP) activity via NF-kB. They consider that ABC has no malignant potential even though the USP6 gene is activated. Oliveira et al. [12-13] held that primary ABC is a tumor originating from mesenchymal cells; they found rearrangement of one or both of the oncogenes USP6 (ubiquitin-specific protease 6) and CDH11 (cadherin 11 gene) in patients with primary ABC, with the chromosome translocation t(16;17)(q22;p13). They also found that the oncogene USP6 was highly active under the regulation of the CDH11 promoter, whereas there was no translocation of CDH11 or USP6 in patients with secondary ABC.

In this study, all of the patients had primary ABC, with good health status on systemic evaluation and without manifestations of malignant tumor. Patients in both groups achieved satisfactory outcomes after receiving their respective treatment protocols, but the efficacy in the study group was superior to that in the control group. Curettage and bone grafting with high-speed burring is the main treatment of ABC. At present, little literature has reported a theoretical basis for the application of ESIN in primary ABC. In 2015, Erol B et al. [14]
proposed curettage and bone grafting with high-speed burring assisted by fixation with a steel plate, ESIN, Kirschner wire, or similar hardware as the treatment for primary ABC. They found that internal fixation at specific locations can improve the healing rate in most ABC cases. We classified the postoperative Neer grades of the children into two classes, recurrence (grades I and II) and cured (grades III and IV). Our data showed that the recurrence rate in the study group (18.75%) was significantly lower than that in the control group (60%), indicating that the use of ESIN can reduce the recurrence rate (P < 0.05). The molecular mechanism of this effect in patients with primary ABC requires further study. Meanwhile, we also found that the duration of postoperative plaster immobilization in the study group was significantly shorter than in the control group. Additionally, patients in the study group did not have pathological fracture after recurrence, and the MSTS functional evaluation of recurrent cases indicated a satisfactory outcome. From these data, internal fixation with ESIN can not only increase the cure rate of patients with primary ABC and reduce the recurrence rate, but also significantly shorten postoperative plaster immobilization, lower the risk of another pathological fracture, and markedly improve postoperative quality of life. We also found that recurrence may be correlated with lesion location, although statistical analysis indicated no significance.

We believe that the use of ESIN in the treatment of primary ABC has the following advantages. (1) With its good elasticity, each ESIN forms three supporting points in the medullary canal; two nails distributed in the medullary canal form double arches, acting as a central internal splint. The mechanical conduction after fixation is a stress-sharing mode, which interferes less with the normal biomechanics of the limb. The nails provide at least four kinds of biomechanical stability, namely axial, counter-bending, lateral, and counter-rotation stability, which effectively prevent displacement, angulation, and rotation after fixation; therefore, the nails can significantly reduce the duration of postoperative plaster immobilization and improve the postoperative quality of life of the children. (2) Regarding the pathogenesis of ABC, Biesecker et al. [15] supported the hypothesis that ABC is a secondary reactive lesion of bone occurring owing to hemodynamic disturbances, based on manometric studies showing increased intracystic pressure. Marcove RC et al. [16] suggested that arresting this hemodynamic disturbance could induce healing and prevent recurrence; healing may therefore occur either spontaneously or after biopsy or fracture. We therefore hypothesize that fixation with ESIN can achieve continuous intracapsular drainage, thereby reducing intracapsular pressure, promoting healing, and reducing recurrence. Furthermore, compared with Kirschner wire, ESIN is located within the marrow cavity, provokes less foreign-body reaction in surrounding tissues, and can reside in the body for a long term.
(3) After long-term follow-up of both groups, none of the recurrent cases in the study group had pathological fracture, and we believe that when the recurrent area is located in a weight-bearing bone (femur or tibia), internal fixation with ESIN provides protection and significantly reduces the risk of refracture. In this study, most of the ABCs in both groups were located at the femoral neck, which is near the metaphysis and is a weight-bearing bone. Some researchers have recommended curettage and bone grafting with high-speed burring assisted by internal fixation with a steel plate, but the local bone cortex in ABC is injured in an osteolytic manner, so whether internal fixation with a steel plate can prevent pathological fracture and provide stabilization once a lesion recurs or expands needs further discussion (Fig. 5). Because of its poor stability, the Kirschner wire can hardly provide secure internal fixation. Hutchinson PH, Wang X, et al. [17-18] held that, in treating fractures of the humeral neck, penetration of the ESIN tip through the proximal humeral epiphyseal plate and placement of the ESIN in the epiphysis does not cause premature epiphyseal closure or affect growth. For example, we used ESIN of less than 3 mm to penetrate the proximal humeral epiphyseal plate in some cases, and no premature epiphyseal closure was found at the 3-year follow-up. Through the mechanism of the four biomechanical stabilities, the ESIN provides stability; the crossed stress produced by the nails in the medullary canal supports the longitudinal axis of the long bone and avoids the risk of another pathological fracture due to weak sclerotin and lesions following focal recurrence (Fig. 6).

Furthermore, one of our patients had a lesion in the proximal humerus, which was an active ABC. The patient received curettage and bone grafting with high-speed burring plus ESIN. On 3-year imaging follow-up, we found a focal recurrence, but the recurrent focus migrated away from the epiphysis over time, and the patient has had no obstacles in physical exercise so far, with an MSTS score of 28. In 2008, Patrick et al. [19] retrospectively analyzed 53 patients with ABC and found that patients around 12 years old had a relatively high recurrence rate; 8 of 19 patients with lesions near the epiphysis had recurrence after surgery, a rate significantly higher than in patients with ABC in other locations. That study speculated that this might be correlated with insufficient curettage owing to the surgeon's intraoperative concern about postoperative growth deformity. We therefore hypothesize that, in cases of postoperative recurrence, the ESIN in the medullary canal reduces the risk of pathological fracture, and the patient's postoperative functional score and quality of life are improved to the extent that he or she can even do physical exercise; when the patient is older and the focus has migrated further from the epiphysis, another surgical treatment can be provided, which we speculate would reduce the recurrence rate and the surgical difficulty. Furthermore, the patient's quality of life during the recurrence period is greatly improved. This speculation, however, should be tested in a large number of clinical cases in future studies (Fig. 7).
Conclusion
In summary, according to this study, treatment of primary ABC in the extremities with curettage and bone grafting with high-speed burring assisted by ESIN is a safe and effective measure that can effectively reduce the recurrence rate, prevent pathological fracture in recurrent cases, shorten postoperative plaster immobilization, and improve the patient's quality of life. However, the number of cases was limited, and further multi-centered studies with more cases are needed to confirm our conclusion, even though a statistically significant difference was obtained in this study.

Ethics approval and consent to participate
The Medical Ethical Commission of the Shanghai Children's Hospital approved this study. Participants signed an informed consent form before participation.

Consent for publication
All authors agree to publish in World Journal of Surgical Oncology.

Availability of data and material
All the data used in the article can be obtained from the medical record information system of Shanghai Children's Hospital, Shanghai Jiao Tong University. Any questions or enquiries regarding the present study can be directed to the corresponding author, Li-hua Zhao, MD (18616771553@163.com).

Funding
The present study was supported by the National Natural Science Foundation of China (no. …).

Authors' contributions
… Zhao revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.

Competing interests
The authors declare that they have no competing interests.

Figure 3
The Kaplan-Meier curves and log-rank test revealed a statistically significant difference between the two groups (P = 0.028).

Figure 4
Statistical analysis of the relationship between the recurrence rate and the lesion site in the study group showed no significant correlation between them (P = 0.092).

Figure 5
A patient with recurrence of ABC. The figure shows pathological bone fracture despite internal fixation with a steel plate. In view of the focal recurrence, the osteolytic change of the local bone cortex of the femoral neck, and the cystic change of the medullary canal, the efficacy of internal fixation by the three nails should be discussed further; meanwhile, the unclear fixation efficacy of the proximal nail of the steel plate led to fracture of the weight-bearing bone.
Figure 6
Curettage and bone grafting with high-speed burring plus ESIN for a focus in the femoral neck. X-rays are presented at months 1, 6, 18, and 40 after surgery, respectively. The patients in this group had no pathological fracture even after focal recurrence, no premature epiphyseal closure was observed over 4 years even though the ESIN penetrated the epiphyseal plate (age 5-9 years), the bilateral lower limbs were equal in length, and the MSTS (Musculoskeletal Tumor Society) score was 28.

Figure 7
A, B, and C show the lesion one year before surgery, preoperatively, and 4 years postoperatively, respectively. The patient had the lesion in the proximal humerus, which was an active ABC, and received curettage and bone grafting with high-speed burring plus ESIN. On long-term imaging follow-up, we found a focal recurrence, but the recurrent focus migrated away from the epiphysis with the passage of time; the patient has had no obstacles in physical exercise so far, and the MSTS score was 28.
Identification of Late Larval Stage Developmental Checkpoints in Caenorhabditis elegans Regulated by Insulin/IGF and Steroid Hormone Signaling Pathways

Organisms in the wild develop with varying food availability. During periods of nutritional scarcity, development may slow or arrest until conditions improve. The ability to modulate developmental programs in response to poor nutritional conditions requires a means of sensing the changing nutritional environment and limiting tissue growth. The mechanisms by which organisms accomplish this adaptation are not well understood. We sought to study this question by examining the effects of nutrient deprivation on Caenorhabditis elegans development during the late larval stages, L3 and L4, a period of extensive tissue growth and morphogenesis. By removing animals from food at different times, we show here that specific checkpoints exist in the early L3 and early L4 stages that systemically arrest the development of diverse tissues and cellular processes. These checkpoints occur once in each larval stage after molting and prior to initiation of the subsequent molting cycle. DAF-2, the insulin/insulin-like growth factor receptor, regulates passage through the L3 and L4 checkpoints in response to nutrition. The FOXO transcription factor DAF-16, a major target of insulin-like signaling, functions cell-nonautonomously in the hypodermis (skin) to arrest development upon nutrient removal. The effects of DAF-16 on progression through the L3 and L4 stages are mediated by DAF-9, a cytochrome P450 ortholog involved in the production of C. elegans steroid hormones. Our results identify a novel mode of C. elegans growth in which development progresses from one checkpoint to the next. At each checkpoint, nutritional conditions determine whether animals remain arrested or continue development to the next checkpoint.

Author Summary
Organisms in the wild often face long periods in which food is scarce. This may occur due to seasonal effects, loss of territory, or changes in predator-to-prey ratio. During periods of scarcity, organisms undergo adaptations to conserve resources and prolong survival. When nutrient deprivation occurs during development, physical growth and maturation to adulthood is delayed. These effects are also observed in malnourished individuals, who are smaller and reach puberty at later ages. Developmental arrest in response to nutrient scarcity requires a means of sensing changing nutrient conditions and coordinating an organism-wide response. How this occurs is not well understood. We assessed the developmental response to nutrient withdrawal in the nematode worm Caenorhabditis elegans. By removing food in the late larval stages, a period of extensive tissue formation, we have uncovered previously unknown checkpoints that occur at precise times in development. Diverse tissues and cellular processes arrest at the checkpoints. Insulin-like signaling and steroid hormone signaling regulate tissue arrest following nutrient withdrawal. These pathways are conserved in mammals and are linked to growth processes and diseases. Given that the pathways that respond to nutrition are conserved in animals, it is possible that similar checkpoints may also be important in human development.

Introduction
The development of multicellular organisms requires the coordinated differentiation and morphogenesis of multiple cell types that interact to form functional tissues and organs. In favorable environmental conditions, development proceeds in a largely stereotyped pattern. When faced with adverse conditions, tissue growth may slow or arrest until the environment improves [1-3]. The most critical environmental factor that regulates development is nutrient availability. Organisms can modulate growth programs in response to changing nutritional conditions [4], although the mechanisms through which organisms sense changes in nutrient availability and alter diverse cellular processes in a coordinated manner are incompletely understood.

The nematode Caenorhabditis elegans is a powerful model for understanding the effects of nutrition on development due to its short life cycle (3-4 days from embryo to adult), simple cellular make-up, and highly stereotyped development. The postembryonic development of C. elegans entails passage through four larval stages (L1-L4) that are separated by molts, before reaching reproductive adulthood. Two alternative pathways of development exist in C.
elegans: continuous passage through the four larval stages, or entry into an L3 dauer stage, a growth-arrested state characterized by altered body morphology, elevated stress resistance, and prolonged survival [3]. Entry into dauer is initiated late in the L1 stage in response to unfavorable environmental conditions, in particular high population density, high temperature, and reduced nutrient availability [5]. Additional points of arrest in response to poor nutritional conditions have been identified early in the C. elegans life cycle and in adults. Animals that hatch in the absence of food undergo L1 arrest [6,7], and animals reared from hatching on a limited supply of heat-killed bacteria arrest in the L2 stage [8]. Finally, adult C. elegans arrest embryo production and shrink their germlines following removal of food [9,10].

Studies on dauer and L1 arrest have revealed critical roles for the insulin/insulin-like growth factor (IGF) signaling pathway in sensing the nutritional environment and regulating entry into arrest [7,11,12]. In C. elegans, insulin-like peptides are generated during feeding and signal through DAF-2, the insulin/IGF receptor. Activation of DAF-2 leads to the phosphorylation and cytoplasmic sequestration of DAF-16, a forkhead box O (FOXO) transcription factor. During conditions of low nutrition, the DAF-2-mediated phosphorylation of DAF-16 is reduced, allowing DAF-16 to enter the nucleus and transcriptionally regulate genes implicated in developmental arrest [7,13-16]. Mutant animals with reduced daf-2 function may arrest in the L1 stage or form dauers constitutively (Daf-c phenotype) [11], whereas daf-16 null mutants continue development past the wild-type timing of L1 arrest and are defective in dauer formation (Daf-d phenotype) [2,12].

In worms, insects, and mammals, insulin-like signaling affects the production of steroid hormones, lipophilic molecules that bind to nuclear hormone receptors and induce cellular responses [17]. In C. elegans, bypassing dauer formation requires the bile-acid-like steroid hormone dafachronic acid (DA) [18]. DAF-9, a cytochrome P450 ortholog, is required for the production of DAs, and daf-9 null animals are Daf-c [19-21]. The effects of DAF-9 on dauer formation are mediated by DAF-12, a nuclear hormone receptor that binds DAs [18]. The steroid hormone pathway functions downstream of insulin-like signaling during dauer formation, as daf-12 Daf-d alleles suppress the Daf-c phenotype of daf-2 mutants [11].

Despite extensive work on dauer and other arrests, questions remain about the response of C. elegans to nutritional scarcity. Among these are whether the arrests in L1, L2, dauer, and the adult represent unique periods of the life cycle during which animals are sensitive to their nutritional environment, or if arrest can also occur at other times. It is also not known whether bypassing the opportunity to form a dauer leads to continuous development to adulthood, or whether additional opportunities exist to arrest development when faced with nutrient deprivation. Finally, the mechanisms through which numerous tissues and cellular processes are able to coordinately arrest in response to nutrient withdrawal are not well understood.

We sought to address these questions by examining the response of C.
elegans to nutrient deprivation during the late larval stages (L3 and L4), after the opportunity to form a dauer has been passed. Several tissues that contribute to the reproductive system undergo differentiation, growth, and morphogenesis during the L3 and L4 stages, making this period an ideal time to determine how ongoing developmental processes are affected by nutrient deprivation. By removing animals from food at different times, we show that specific checkpoints exist in the early part of the L3 and L4 stages that restrict progression through the larval stage and systemically arrest the development of diverse tissues and cellular processes. Insulin-like signaling regulates the response to nutrient deprivation in the L3 and L4 stages through cell-nonautonomous DAF-16 activity in the hypodermis (skin), and functions to suppress DAF-9-mediated signaling activity. Our results identify a mode of metazoan growth in which development proceeds from checkpoint to checkpoint. At these checkpoints, nutritional conditions determine whether animals remain in an arrested state or continue development to the next checkpoint.

Overview of vulval development in C. elegans
To study the effects of nutrient deprivation on tissue development during the late larval stages, we first focused on the hermaphrodite vulva, which develops through a stereotyped pattern of cell specification, cell division, and morphogenesis during the L3 and L4 stages (Fig. 1A) [22]. The vulva derives from three epidermal cells, P5.p-P7.p, which are specified early in the L3 stage to either the 1° vulval precursor cell (VPC) fate (P6.p) or the 2° VPC fate (P5.p and P7.p) (Fig. 1A). The VPCs undergo three rounds of cell division in the L3 stage to generate 22 cells, which differentiate into seven vulval subtypes, vulA-vulF. In the L4 stage, the 22 vulval cells undergo morphogenetic processes that include invagination, migration, and cell-cell fusion (Fig. 1A) [22]. Proper development of the vulva requires the uterine anchor cell (AC), which invades in the mid L3 stage across basement membranes separating the uterine and vulval epithelia to form a connection between the tissues [23]. The AC remains at the vulval apex after invasion until fusing with surrounding uterine cells in the mid L4 stage (Fig. 1A).

Vulval development arrests early in the L3 larval stage after removal of food
We examined the effects of nutrient deprivation on vulval development by growing a synchronized population to late in the L2 stage, prior to the onset of vulval formation, and removing animals from food (Fig. 1B). Part of the population was returned to food to serve as controls, with the remainder maintained in M9, a buffer lacking a carbon source. In addition to the vulva, we also assessed the onset of molting (observable by cuticle covering the mouth; see Fig. 1A, bottom left), which serves as a marker for the transition between larval stages. Results of the experiment are depicted graphically in Fig. 1B; raw data and results of triplicate assays are in Fig. S1. The control group that was returned to food progressed through the stages of vulval development with the predicted timing. The group that remained deprived of food molted into the L3 stage and uniformly arrested prior to the first VPC divisions. No VPC divisions were observed after 10 days in the absence of food (Fig. 1B). Arrested animals were phenotypically distinct from dauer larvae, which arrest after molting into a specialized L3 dauer stage (Fig.
S2). When animals were returned to food after 8 d, 97.5% of the population (n = 200) continued development to adulthood, demonstrating that animals retain the capacity to resume development upon re-introduction of food. The median survival of animals under the experimental conditions used was 11.7 ± 1.2 d (n = 3 trials).

In C. elegans, removal of the germline either through ablation or genetic mutation extends lifespan and maintains adult somatic tissues for a longer duration in a youthful state [24-26]. This suggests the possibility of a soma-germline tradeoff in which resources are allocated away from germ cells to somatic tissues. We asked if the absence of a germline could promote the continued development of somatic tissues by growing glp-1(e2144) mutants, which do not proliferate germline progenitor cells when reared at 25°C, to late in the L2 stage and removing them from food.
1C), demonstrating that arrest occurred after 1u and 2u VPC specification.This contrasts with dauer larvae, which are not stably specified to a VPC fate [31,32].Coupled with the absence of VPC divisions, these results suggest that, when removed from food late in the L2 stage, vulval development arrests early in the L3 stage in a manner that is distinct from dauer arrest. A second arrest point in vulval development occurs early in the L4 stage The uniform arrest of vulval development early in the L3 stage suggested that a specific developmental checkpoint existed at this time.To determine if this was the case, we asked whether vulval development could arrest at later times in the L3 stage.A synchronized population was grown for 28 h to the mid L3 stage and removed from food.At the time of food removal, 84% had undergone one VPC division, indicating that they had bypassed the L3 arrest point (Fig. 2A; Fig. S1).Animals removed from food continued through the L3 stage, molted into L4, and arrested in L4 after completion of VPC divisions (Fig. 2A).After 48 h in the In the mid L4 stage, the AC fuses with the surrounding uterine cells, and egl-17.GFP expression changes from the 1u to 2u-fated cells.At the end of L4, the cells turn partially inside out (evert).Times for each developmental stage are after release from L1 arrest at 20uC.(B) Late L2 nutrient deprivation assay.Animals were removed from food after 22 h growth at 20uC and either returned to food or kept deprived of food.Starting at time 0, both groups were maintained at RT (22uC).In the chart, developmental stages on the Y-axis were determined by the extent of vulval development and the molt, and the duration of feeding or removal from food is indicated on the X-axis.The areas of the circles in the chart reflect the percentage of the population at each stage of development; n$50 for each time point.See Fig. S1 for raw data and results of replicate assays.(C) Animals after 2 d removal from food, with no divisions of P5.p-P7.p,and expressing the 1u fate marker, egl-17.GFP (top), or the 2u fate marker lip-1.NLS-GFP (bottom).In these and other figures, anterior is left.Scale bars, 10 mm.doi:10.1371/journal.pgen.1004426.g001absence of food, 94% of the population was arrested after VPC divisions in the L4 stage.The remaining 6% of animals was arrested in the L3 stage prior to VPC divisions, and likely represent the youngest members of the population that failed to bypass early L3 arrest.No animals were identified at intermediate stages between the two arrest points, demonstrating that bypass of the L3 arrest point led to invariant progression to the L4 arrest point (Fig. 2A).We examined the effect of the germline on L4 arrest by removing glp-1(e2144) mutants grown at 25uC from food in the mid L3 stage, and found that all animals arrested in early L4 similar to wild type (n = 100).The median survival of populations removed in the mid L3 stage was 11.0 d (n = 3 trials). All L4-arrested animals completed vulval cell divisions (n = 30), suggesting that arrest in vulval formation could occur at a precise developmental time rather than in a variable manner.To test this notion, we first examined the reporter gene egl-17.GFP, which is expressed in 1u VPC progeny early in the L4 stage and shifts to 2u VPC progeny in mid L4 (Fig. 1A).Expression of egl-17.GFP was exclusively in 1u VPC progeny in arrested animals (Fig. 
2B), supporting the hypothesis of a precise timing of arrest early in the L4 stage. A second marker for L4 stage timing in vulval development is cell-cell fusions. Fusions occur between homotypic cells (i.e., vulA with vulA), starting with vulA cells shortly after terminal cell divisions and continuing two hours later with vulC cells (Fig. S3) [33]. Examination of a strain expressing GFP-tagged AJM-1, an apical-membrane-localized protein that delineates the boundaries of vulval cells [33,34], showed that all L4-arrested animals had undergone vulA fusions but not vulC fusions (Fig. S3). Importantly, the same timing of arrest between vulA and vulC fusions occurred in 97% of the population when animals were grown for an additional four hours prior to removal from food (n = 30 per assay; Fig. S3), demonstrating that the timing of arrest in vulval development in the L4 stage is largely independent of feeding duration. Based on the nutrient removal experiments, we conclude that specific checkpoints exist early in the L3 and L4 larval stages that arrest vulval development at precise developmental times.

Only a single checkpoint on vulval development was identified in the L3 stage, and we wanted to determine whether this was also the case with the L4 stage. Animals were grown on food to the early-to-mid L4 stage and developmental progression examined following food removal (Fig. 2C; Fig. S1). After 48 h in the absence of food, 96% of the population had progressed into adulthood, as evidenced by eversion of the vulva (Fig. 2D), with the remaining animals arrested early in the L4 stage, and no animals at intermediate times (Fig. 2C). Arrest occurred in nearly all adult animals (99/100) prior to oogenesis. Taken together, the nutrient removal assays show that arrest in C. elegans vulval development occurs only at precise checkpoints early in the L3 and L4 stages, and that passage through one checkpoint leads invariantly to progression through the larval stage to the next checkpoint.

A cell-nonautonomous developmental mechanism regulates the timing of arrest
There are two alternative possibilities for the timing of arrests observed in vulval formation in the L3 and L4 stages. The first is of a tissue-autonomous program in which arrests occur only at specific times in the developmental program, either prior to cell divisions (early L3) or after cell divisions (early L4). The second is of a global timing mechanism that arrests vulval development at precise times early in each larval stage. To determine which of these was correct, we examined hbl-1(ve18) mutant animals, which have precocious VPC divisions that occur as early as the late L2 stage (Fig. 3A) [35]. We hypothesized that if the vulval cells were regulated by an autonomous program, then shifting the time of cell divisions relative to the L3 larval stage would not affect the all-or-none pattern of cell divisions. If instead a global timing mechanism directed the arrest of vulval development, then cell divisions would be predicted to arrest upon reaching the L3 larval stage checkpoint. Results of the experiment show that after removal from food late in the L2 stage, P6.p divisions continued but stopped prior to completion (Fig. 3B-C). Only 43% of the population was arrested either prior to or after cell divisions, with the remainder at intermediate stages of division (Fig. 3B). These results support the idea of a global timing mechanism that acts on vulval development to arrest it at specific times early in the larval stage.
Arrest occurs at a precise time in the larval stage and molting cycle
The experiments on vulval development suggested that checkpoints exist at particular points in the larval stage. We wanted to explore this question in more detail by examining progression through the larval stage in the absence of food. Each C. elegans larval stage comprises a period of foraging for food that lasts for several hours, followed by an approximately two-hour period of lethargus during which pharyngeal pumping stops and animals do not feed. At the end of lethargus, C. elegans undergo molting, the detachment (apolysis) and shedding (ecdysis) of the existing cuticle [36]. We first asked if animals removed from food during the period of foraging underwent lethargus, and found that both the onset and duration of lethargus were similar to a control population that was maintained on food. Further, animals exited lethargus and resumed pharyngeal pumping for at least 24 h after removal from food (Fig. 4A). These results show that lethargus, a key feature of the larval stage, is maintained in the absence of food.

We next examined how nutrient deprivation affected the molting cycle, the oscillatory pattern of gene expression and cuticle replacement that occurs in each larval stage. Cuticle components are synthesized starting in the mid-larval stage and deposited underneath the existing cuticle, which is shed at the end of the larval stage [37,38]. The timing of the checkpoints in the early part of the larval stage suggested that arrest could occur after molting and prior to new cuticle synthesis. We first asked whether this was the case by examining the execution of the molt following removal of food. We found that all L3- and L4-arrested animals completed ecdysis (Fig. 4B), although 17% of adult-arrested animals remained attached to the L4 cuticle after 48 h in the absence of food (n = 100 animals per assay). It is possible that larger animals may not be able to fully shed cuticle in the absence of sufficient feeding. Despite this defect, animals were viable and resumed pharyngeal pumping, with the L4 cuticle remaining attached only in the tail region. These results demonstrate that molting is successfully executed in most instances upon passage through a checkpoint.

We then explored the second part of our hypothesis, that arrest occurred prior to new cuticle synthesis. To do this, we examined the expression pattern of mlt-10, a gene required for proper execution of the molt [38,39]. mlt-10 mRNA increases in the mid-larval stage at the time of new cuticle synthesis, peaks during the molt, and declines upon completion of molting. A destabilized reporter gene, mlt-10.GFP-PEST, recapitulates this oscillatory mRNA expression pattern and serves as a marker for progression through the larval stage [38,39]. A population of mlt-10.GFP-PEST-expressing animals was removed from food late in the L2 stage, and a portion of the population was returned to food to serve as controls. The fed and nutrient-deprived groups were then examined for reporter gene expression over an 8 h period. Results show that expression levels were similar in the two groups as they molted and entered the L3 stage (Fig. 4C). Approximately 4 h after molting, the control group increased gene expression, indicating initiation of the L3 molting cycle. The nutrient-deprived group failed to increase expression, however, demonstrating that it had arrested prior to initiation of the L3 molting cycle (Fig.
4C). Similar results were observed when animals were removed from food late in the L3 stage (data not shown). The loss of mlt-10.GFP-PEST expression was unlikely to be due to general transcriptional silencing during nutrient deprivation, as past research has shown that several transcriptional reporters similarly tagged with PEST motifs maintain expression during L1 arrest [40]. Collectively, these results demonstrate that arrested animals halt development after execution of the molt and prior to initiation of the next molting cycle.

Feeding is required after molting to bypass the L3 and L4 checkpoints
Our results identified nutrient-sensitive developmental checkpoints in the early part of the L3 and L4 larval stages. We sought to determine the amount of feeding required to pass the checkpoints. To achieve the greatest degree of synchronization and most accurate measurement of timing, individual animals were isolated during ecdysis, the final 10-15 minutes of molting that precede foraging [36]. Animals undergoing ecdysis were either removed from food or allowed to feed for additional 30 min intervals (Fig. S4). Feeding for 30-60 min after ecdysis was required for most animals to pass the L3 and L4 checkpoints within 24 h after food removal, and 90 minutes of feeding resulted in all animals passing the checkpoints (Fig. S4). We conclude that a sufficient duration of feeding is required after molting to advance past the L3 and L4 stage checkpoints.

Insulin-like signaling regulates the response to nutritional conditions in the L3 and L4 stages
The insulin-like signaling pathway is a key regulator of growth in response to nutrition [41]. We wanted to determine if insulin-like signaling regulated arrest in the L3 and L4 stages following nutrient removal. We first asked if daf-16, a FOXO transcription factor that is a major target of insulin-like signaling and is required for the proper timing of L1 arrest and dauer formation [7,12], played a role in L3 and L4 arrest. Animals with the null mutation daf-16(mu86) were removed from food late in the L2 stage, and the developmental stage was assessed over time by examination of the vulva and molt. The pattern of growth by 8 h after food removal was similar to wild type: all animals had molted into L3 and 97% were in the early L3 stage (Fig. 5A; raw data and results of replicate assays are in Fig. S5). By 24 h after removal from food, however, 63% of the population had progressed past the L3 checkpoint (compared with 0% of wild type animals removed from food at a similar time; Fig. 1B), ultimately arresting early in the L4 stage (Fig. 5A). A second experiment was performed with daf-16(mu86) animals removed from food late in the L3 stage. Again, the absence of daf-16 caused animals to bypass arrest, with 72% of the population progressing to adulthood after 48 h (Fig. 5B; Fig. S5). The time in the larval stage at which daf-16(mu86) animals arrested was similar to wild type, based on the absence of VPC divisions in L3-arrested animals, the completion of divisions in L4-arrested animals, and no animals at intermediate stages of division (n = 30; Fig. 5C).
In the presence of food, DAF-16 activity is inhibited by a signaling pathway downstream of DAF-2, the insulin/IGF receptor. We hypothesized that animals with reduced DAF-2 function would require a longer duration of feeding to inhibit DAF-16 activity and progress through the L3 and L4 larval stages. To test this hypothesis, we examined the L3 and L4 development of a temperature-sensitive daf-2 mutant, daf-2(e1370), which is Daf-c at 25°C but develops to adulthood at 15°C [11]. Animals were grown at the permissive temperature of 15°C to the mid-L2 stage, bypassing the opportunity to form a dauer, and shifted to the restrictive temperature of 25°C for an additional 24 h of feeding (Fig. 5D). Following this regimen, 25% of the population was in the early L3 stage and 42% was in the early L4 stage. In contrast, a control wild type population had progressed to the L4/adult molt or beyond (Fig. 5D). The high proportion of the population in the early L3 and early L4 stages suggests that prolonged pausing at the checkpoints may be a factor in the delayed development of daf-2(e1370) animals. This delayed development required the presence of daf-16, as daf-16(mu86); daf-2(e1370) double mutant animals advanced through the L3 and L4 stages at a rate comparable to wild type (Fig. 5D). Taken together, the results of the daf-16 and daf-2 experiments demonstrate a role for the insulin-like signaling pathway in regulating progression through the L3 and L4 developmental arrest checkpoints in response to nutritional conditions.

To complement these studies, we carried out tissue-specific RNAi of daf-16. Reducing daf-16 specifically in the hypodermis reproduced the phenotype of systemic loss of daf-16. Targeted reduction of daf-16 in the intestine or muscle did not alter sensitivity to the removal from food (Fig. 6C). We were unable to reduce daf-16 specifically in neurons because of the lower sensitivity of this tissue to RNAi [45], and the inability to directly target neurons by RNAi without off-target effects in the hypodermis [46]. Taken together, the results of the daf-16 tissue-specific rescue and RNAi experiments suggest that the hypodermis is a key site of action for the insulin-like signaling pathway in responding to nutritional conditions during the L3 and L4 stages. Our results do not rule out the possibility that DAF-16 functions in other tissues to also regulate L3 and L4 development, either through modulation of hypodermal DAF-16 function or through independent pathways that synergize with hypodermal DAF-16. Previous studies have shown that DAF-16 can function in multiple tissues to regulate dauer formation and metabolism [42,43], and such a situation could also occur in regulating passage through the L3 and L4 larval stages.
DAF-9 regulates L3 and L4 developmental progressions downstream of DAF-16
The ability of DAF-16 to affect tissue development cell-nonautonomously implicated the presence of pathways that signal systemically. One such candidate is DAF-9-mediated steroid hormone signaling, which is downstream of insulin-like signaling during dauer formation [11,19,21]. A key site of action for DAF-9 during larval development is the hypodermis [19,21,47], suggesting that it could similarly function downstream of insulin-like signaling during the L3 and L4 stages. To test this possibility, we depleted daf-9 by dsRNA feeding in daf-16(mu86) animals, and assessed the response to nutrient removal in the L3 and L4 stages. We hypothesized that, if nuclear DAF-16 inhibited L3 and L4 stage progressions through inhibition of DAF-9-mediated steroid hormone signaling, then the bypass of arrest observed in daf-16 null animals would be suppressed by reduction of daf-9. Consistent with this hypothesis, daf-9 dsRNA-fed daf-16(mu86) animals showed a 2.6-fold reduction in bypass of L3 arrest and a 1.9-fold reduction in bypass of L4 arrest compared to empty vector controls (Fig. 7A). These results support the idea that the insulin-like signaling pathway regulates DAF-9-mediated steroid hormone production during the L3 and L4 stages.

Since DAF-9 appeared to be involved in generating hormonal signals that promoted progression through the L3 and L4 larval stages, we asked whether increasing the levels of DAF-9 would lead to bypass of arrest in a manner akin to daf-16 null animals. This was tested by examining the response to nutrient removal of a strain overexpressing functional daf-9::GFP (dhIs64) [19]. When removed from food late in the L2 stage, daf-9-overexpressing animals bypassed arrest at high levels, with 90% of the population progressing beyond the early L3 stage after 24 h in the absence of food (Fig. 7B; Fig. S5). In contrast to the phenotype of daf-16(mu86), which paused at the L3 checkpoint before bypassing it (Fig. 5A), daf-9-overexpressing animals continued past the checkpoint with minimal pausing (Fig. 7B). Further, whereas daf-16(mu86) bypassed only one arrest point (Fig. 5A-B), a portion of the daf-9-overexpressing population advanced through both the L3 and L4 arrest points and reached adulthood (Fig. 7B). Thus, the bypass of arrest caused by overexpression of daf-9 is more rapid and robust than that caused by loss of daf-16.

daf-9-overexpressing animals that progressed to adulthood were typically surrounded by undetached cuticle (41/50 animals); in some cases both the L3 and L4 cuticles remained attached (Fig. 7C). The inability to shed cuticle surrounding the mouth led to lethality in a portion of the population within 24 h of food removal (Fig. 7C). In contrast, neither wild type nor daf-16 null animals showed such rapid death. These findings demonstrate that overexpression of daf-9, which forces animals through larval stages in the absence of food, has deleterious effects on the execution of the molt.

Our finding that daf-9 overexpression causes continued development in the absence of food was somewhat surprising, since a previous study showed that hypodermal DAF-9::GFP expression is sharply reduced during nutrient deprivation [19]. We examined the expression of hypodermal DAF-9::GFP following removal from food late in the L2 stage, and indeed found a reduction in expression over time (Fig.
Expression persisted at low levels in most animals for at least 8 h after removal from food, however, and was visible in some animals even after 24 h (Fig. 7D). These results suggest that, when expressed at elevated levels, enough DAF-9 protein remains in the hypodermis to promote continued larval stage progressions in the absence of food. It is also possible that the second site of DAF-9 expression during larval development, the two neuronal XXX cells, contributes to the bypass of arrest, as expression is not reduced in those cells following food removal [19]. Collectively, our results offer evidence that DAF-9 promotes passage through the L3 and L4 developmental arrest checkpoints.

DAF-12 does not regulate L3 and L4 stage progressions

DAF-9 is required for the synthesis of dafachronic acids (DAs), steroid hormones that bind to the nuclear hormone receptor DAF-12 to promote bypass of dauer formation [18]. In the absence of DAs, DAF-12 regulates entry into dauer through its DNA-binding activity [48]. Because our results showed that genes that regulate dauer formation (daf-2, daf-16, and daf-9) also regulate later larval development, we asked if daf-12 similarly had a role in regulating progression through the L3 and L4 stages downstream of daf-9. We first examined the response to nutrient removal of daf-12(rh61rh411), a null mutant that has a Daf-d phenotype [48]. In contrast to daf-16(mu86) Daf-d mutants, which bypass L3 arrest over 60% of the time (Fig. 5B), no daf-12 null mutants bypassed L3 arrest after 48 h in the absence of food (n = 54). We also performed epistasis experiments by generating daf-12(rh61rh411); daf-16(mu86) double mutant animals. The bypass phenotype of daf-16(mu86) was not suppressed by the loss of daf-12 function, as 71% of double mutant animals bypassed the L3 checkpoint after 48 h in the absence of food (n = 68), similar to the percentage of daf-16(mu86) animals that bypass arrest (Fig. 5B). These results suggest that daf-16 functions independently of daf-12 in regulating L3 stage progression. We also asked whether the phenotype of daf-9 overexpression was suppressed by a null allele of daf-12. When removed from food late in the L2 stage, DAF-9::GFP; daf-12(rh61rh411) animals still bypassed arrest to a high degree: 87% of the population had bypassed L3 arrest by 24 h (n = 61), similar to the 90% observed in DAF-9::GFP animals (Fig. 7B). These results provide evidence that DAF-12 does not regulate L3 and L4 stage progressions, and that DAF-9 promotes bypass of the L3 and L4 checkpoints through a different downstream effector.

The L3 and L4 checkpoints arrest tissue development systemically

Our experiments with vulval formation and the molting cycle showed that checkpoints present early in the L3 and L4 stages limit continued development. We took advantage of the fact that several additional tissues undergo developmental processes in the L3 and L4 stages to determine whether other tissues are also arrested at the checkpoints. We first examined the uterine AC, which becomes polarized early in the L3 stage, when F-actin and actin regulators localize to the basal, invasive cell membrane [49]. The AC breaches the basement membrane in the mid L3 stage, before fusing with the surrounding uterine cells in the mid L4 stage (Fig. 1A). Examination of a marker of AC polarization, the F-actin probe cdh-3.mCherry::moesinABD, showed that animals removed from food late in the L2 stage arrested early in the L3 stage with polarized ACs, but in no instance did invasion occur (n = 100; Fig. 8A).
When removed from food in the mid L3 stage, AC invasion occurred in all animals, yet in no instance did the AC fuse with the surrounding uterine cells (n = 100; Fig. 2B). These results suggest that the developmental program of the AC, similar to that of the vulval cells and the molting cycle, arrests at the early L3 and early L4 checkpoints.

We next examined the two sex myoblast (SM) cells, which divide three times between the mid L3 and early L4 stages, followed by short-range migrations of terminally divided progeny cells in the L4 stage (Fig. 8B). When animals were removed from food late in the L2 stage, no SM cell divisions were observed after 48 h, indicative of arrest early in the L3 stage. When removed from food in the mid L3 stage, SM cell divisions initiated in all animals and the cells typically divided twice, although in some instances fewer or more cell divisions were observed (Fig. 8B). Animals that were grown on food to late in the L3 stage and arrested in the L4 stage with completed cell divisions did not undergo short-range migrations (Fig. 8B), demonstrating that the L4 morphogenetic program did not advance past the early L4 checkpoint. Although cell divisions of the SM cells are not as tightly regulated as those of the vulval cells, these results suggest that development of the SM cells is also under the control of the L3 and L4 checkpoints.

We then examined the seam cells, stem cells that divide during the L1-L3 molts, generating an anterior daughter that fuses with the surrounding hypodermal syncytium and a posterior daughter that retains stem-like properties (Fig. 8C). Animals that were removed from food in the L3 stage showed variability in the timing of seam cell arrest. Some seam cells failed to divide; others divided but anterior daughters did not fuse; and in the most advanced animals, daughter cells fused but the adherens junctions that connect seam cells did not re-form (Fig. 8C). Thus, removal of animals from food in the L3 stage causes the arrest of several aspects of the seam cell division program prior to reaching the L4 checkpoint.

We finally looked at elongation of the gonad, which occurs in a continuous manner from the L2 to L4 stages. When animals were removed from food in mid L3 to cause arrest in the early L4 stage, gonad arm elongation was 35% shorter than in a fed control population of early L4 animals (Fig. 8D), indicating that gonadal elongation arrested prior to the L4 checkpoint. Taken together, these results show that diverse cellular processes arrest following removal of food. Although some tissues had a variable pattern of arrest, in no instance did development continue past the early L3 and early L4 stages, demonstrating that the checkpoints limit tissue development in a systemic manner.

Discussion

Organisms have the ability to sense their nutritional environment and alter growth and metabolism in response. When nutritional conditions are limiting during development, the effects on tissue maturation must be systemic in nature and temporally coordinated in order to maintain the capacity to form functional organs [1]. We examined the C. elegans response to nutrient deprivation during the L3 and L4 larval stages and have uncovered a means by which different tissues are able to arrest in a coordinated manner.
We show that distinct checkpoints are present in the early part of the larval stages that regulate development throughout the organism and arrest a range of tissues and cellular processes. At the L3 and L4 checkpoints, nutritional conditions inform a systemic decision to either remain arrested or continue development. This decision is regulated by insulin-like and steroid hormone signaling pathways (see model, Fig. 9).

Characterization of two new developmental checkpoints in the early L3 and early L4 larval stages

Previous work on L1 arrest, dauer, and adult reproductive diapause has shown that, in response to unfavorable nutritional conditions, cellular processes can arrest for extended durations and resume upon re-feeding [3,7,9,10,16]. The nature of the response to nutrient deprivation at other times in development had not been characterized. By focusing initially on the vulva, which has a stereotyped pattern of development during the L3 and L4 larval stages, we show here that checkpoints are present in the early part of the L3 and L4 stages that arrest tissues throughout the organism. The timing of arrest reflects a specific point in the larval stage after molting and prior to initiation of the subsequent molting cycle. A connection between nutrition and the molting cycle has been described in other ecdysozoans [50,51]. In insects, for instance, molting to a new larval instar occurs only after a sufficient duration of feeding and attainment of a critical weight [50]. It has been speculated that similar nutritional factors impinge on the endocrine signals that trigger the onset of the molting cycle in C. elegans [38]. Our results provide support for this model of nutritional control of molting cycle commitment.

The response to nutrient deprivation in the L3 and L4 stages is systemic in nature and causes the arrest of multiple tissues and cellular processes. Although all tissues arrested prior to or at the checkpoints, vulval formation and the molting cycle were unique in arresting within very narrow developmental windows and in a uniform manner throughout a population. These tissues may have been under selection to arrest in such a precise way. A properly formed vulva is necessary for mating and egg-laying, and a robust developmental program with minimal variation is important for reproductive fitness [52]. For the molting cycle, the inability to shed the cuticle surrounding the head during ecdysis causes rapid lethality, as observed in daf-9-overexpressing animals, making faithful execution of the molt essential. A previous study showed that the buccal cavity, which comprises the anterior-most portion of the pharynx and constrains the amount of food consumed with each pumping event, grows only during molts and not between them, which is thought to increase the amount of food that can be consumed during the larval stage [53]. Certain tissues therefore have distinct patterns of growth that are unique to their functions in development and reproduction.
The insulin-like and DAF-9 signaling pathways regulate arrest following nutrient deprivation

We show that the insulin-like signaling pathway regulates the connection between nutritional conditions and progression past the L3 and L4 checkpoints. In wild type animals, 30-60 minutes of feeding is required after molting to attain a sufficient threshold for bypassing the larval stage checkpoints. Perturbations of key genes in the insulin-like signaling pathway alter the duration of feeding required to bypass arrest. Reduction in the function of daf-2, the insulin/IGF receptor, increases the amount of feeding required, such that animals pause at the L3 and L4 checkpoints and have delayed development through the L3 and L4 stages. Loss of function of daf-16, a FOXO transcription factor that is a key target of the insulin-like signaling pathway [54], decreases the amount of feeding required to bypass the checkpoints.

The bypass of arrest caused by loss of daf-16 is partially suppressed by reduced expression of daf-9, a cytochrome P450 ortholog required for the production of certain C. elegans steroid hormones [18,55]. This result suggests that DAF-16 regulates arrest in the L3 and L4 stages at least in part through inhibition of steroid hormone signaling. DAF-16 has been shown to inhibit daf-9 expression during cholesterol starvation, an unfavorable growth environment that causes larval arrest [32,56]. It is possible that late larval stage arrest caused by nutrient deprivation also involves DAF-16 inhibition of daf-9 expression. A key site of action for DAF-16 in regulating L3 and L4 arrest is the hypodermis, where daf-9 is also expressed during larval development [47], further suggesting that DAF-16 regulates daf-9 expression. Collectively, these findings support a model in which DAF-16 inhibits daf-9 expression, and possibly other genes involved in steroid hormone production, to limit progression through the L3 and L4 stages (Fig. 9).

The level of DAF-9 protein is a key determinant of L3 and L4 stage progressions in the absence of food, which is demonstrated by the striking ability of overexpressed DAF-9 to promote continued development through one or two larval stages. The hormonal signaling pathway that functions downstream of DAF-9 in the L3 and L4 stages appears to be different from the pathway that regulates dauer formation. During the dauer decision, the DAF-9 biosynthetic pathway produces dafachronic acids (DAs), which bind to the nuclear hormone receptor DAF-12 to promote bypass of dauer [18]. Our experiments with a daf-12 null mutant failed to show a similar role for DAF-12 in the L3 and L4 stages. Consistent with these results, we have also found that supplementation of M9 buffer with DAs does not promote continued development past the checkpoints after food removal (Schindler & Sherwood, unpublished observations), further implicating a mode of hormone signaling during the L3 and L4 larval stages that is distinct from that during dauer formation.

A developmental decision regulates progression through the L3 and L4 larval stages

A key implication of these results is that wild type C. elegans arrest development in the L3 and L4 stages despite possessing a sufficient amount of nutrients to continue further development.
This is demonstrated by the ability of animals lacking daf-16 or overexpressing daf-9 to bypass one or even two arrest points and progress through the larval stages in the absence of food. Developmental arrest in wild type animals therefore reflects a decision to halt larval stage progressions rather than a lack of available resources to sustain further development. Continued progression in the absence of food appears to have deleterious consequences, as exemplified by the death and molting defects observed in daf-9-overexpressing animals. Limiting progression through the larval stages when nutritional conditions are poor may allow resources to be conserved for survival and tissue homeostasis during prolonged periods of starvation.

This scenario of sensing the environment and arresting development in response to unfavorable conditions also occurs during the C. elegans dauer decision [3]. Both non-dauer and dauer arrest are regulated by insulin-like and DAF-9 signaling pathways, and studies comparing gene expression in dauer and starved animals have revealed overlap between the two types of arrest [57,58]. From an evolutionary perspective, it is intriguing to speculate that dauer formation, a nematode-specific developmental diapause, evolved from pathways of starvation-induced arrest that are conserved among metazoans.

A mode of saltational growth regulates C. elegans development

Two types of growth have been described for ecdysozoans: continuous, with growth occurring throughout the course of development; and saltational, with growth occurring only at distinct times [53]. In organisms with rigid exoskeletons, growth occurs only at molts, an example of saltational growth. C. elegans, with its flexible exoskeleton, grows in a continuous manner through the larval stages [53,59]. By manipulating the nutritional environment, we show that C. elegans growth has an additional saltational aspect to it, with distinct checkpoints present in the early part of the larval stage. At each checkpoint the nutritional environment informs a systemic decision either to proceed through the larval stage or to remain arrested (Fig. 9). Two key pathways that regulate this developmental decision, insulin-like and steroid hormone signaling, are present throughout metazoans [60,61], suggesting that the mode of growth control described in this work could be conserved. A greater understanding of the mechanisms of growth control could provide insight into aging and metabolic diseases, which are linked to the dysregulation of developmental pathways important for growth [62,63]. Our work in C. elegans demonstrates a type of saltational growth from checkpoint to checkpoint that may similarly regulate development and physiology in other species.

General methods and strains

Nematodes were reared at 20°C on NGM plates seeded with OP50 E. coli using standard procedures. In the text and figures we refer to linked DNA sequences that code for a single fusion protein using a (::) annotation. For designating linkage to a promoter we use a (.) symbol.
The wild type strain N2 and the following mutant strains and transgenes used were: dhIs64[daf-

Image acquisition, processing and data analysis

Images were acquired using either a Zeiss AxioImager A1 microscope with a 10×, 20×, or 100× plan-apochromat objective and a Zeiss AxioCam MR charge-coupled device camera, controlled by Zeiss Axiovision software (Zeiss Microimaging, Inc., Thornwood, NJ), or with a Yokogawa spinning disk confocal microscope mounted on a Zeiss AxioImager A1 microscope using iVision software (Biovision Technologies, Exton, PA). Images were processed in ImageJ (NIH Image) and Photoshop CS6 (Adobe Systems Inc., San Jose, CA). Z-stack projections were generated using IMARIS 6.0 (Bitplane, Inc., Saint Paul, MN).

Quantification of fluorescence intensity was performed on images acquired at identical exposure settings using ImageJ. For quantifying GFP::DAF-16 in the hypodermis, the fluorescence intensities of four nuclei (excluding nucleoli) were averaged per animal (a minimal sketch of this per-animal averaging follows the food removal assays below). All measurements of nuclear GFP::DAF-16 were taken within 5 min of removal from food to minimize relocalization of DAF-16 into the nucleus. For quantifying DAF-9::GFP, a contiguous area of the hypodermal syncytium that excluded nuclei was measured in a region below the pharynx.

Food removal assays

Populations containing gravid adults were hypochlorite treated to release embryos, which hatched in M9 buffer and arrested in L1. The duration of L1 arrest did not exceed 20 h. Populations of L1-arrested animals were plated onto NGM plates seeded with OP50 E. coli that covered at least half the plate to minimize the duration of wandering away from food. Maximum population density was 2500 animals per 60 mm dish. Animals were reared at 20°C unless indicated otherwise. For removal from food late in the L2 stage, populations were monitored starting at 22 h post-plating. The assessment of developmental age was made by observation of the gonad (which grows through the L2 stage) and the molt (which covers the mouth during the time of molting; see Fig. 1A). Unless a specific duration of growth is indicated, animals were removed from food when the oldest members of the population were molting and the remaining members were in the late L2 stage, based on gonad size. Populations that contained greater than 5% L3 animals were not used. N2 and daf-12(rh61rh411) populations typically developed in a synchronized manner; hbl-1(ve18), daf-16(mu86), and daf-9::GFP populations grew more variably and had a wider spread of developmental ages at the time of food removal. To remove food, 1 ml M9 was added to each plate, which was gently rocked to dislodge worms with minimal removal of E. coli. Animals were transferred to low-retention Eppendorf tubes and centrifuged for 1 min at 500×g, a speed at which C. elegans sank to the bottom but E. coli remained largely in suspension. Liquid was aspirated, and an additional 1 ml M9 added, for 6 total washes. Tests of supernatants found that bacteria were removed by the third wash, based on the inability of the supernatant to form colonies on LB plates. Animals were placed in M9 buffer at 0.5-1 animal/µl in 25-ml glass conical tubes and rotated in a roller drum (New Brunswick Scientific, Enfield, CT) at ambient temperature (22°C). For visualization of ecdysis, animals were removed from food late in the L2 stage, anesthetized in levamisole, mounted on agar pads with sealed cover slips, and maintained for 24 h in a humidified chamber.
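As an illustration of the quantification described above, the following minimal Python sketch reproduces the per-animal averaging of nuclear GFP::DAF-16 intensity. The intensity values are placeholders, not measurements from this study, and the actual analysis was performed in ImageJ.

```python
# Hypothetical sketch of per-animal nuclear GFP::DAF-16 quantification:
# four nuclei (excluding nucleoli) are averaged per animal, and the animals
# are then summarized as a population mean with S.E.M. Values are placeholders.
import statistics

# mean pixel intensity of four hypodermal nuclei per animal (arbitrary units)
nuclei_per_animal = [
    [112.0, 98.5, 105.2, 110.1],
    [87.3, 92.8, 90.0, 95.6],
    [130.4, 125.9, 128.2, 133.0],
]

per_animal_means = [statistics.mean(nuclei) for nuclei in nuclei_per_animal]
population_mean = statistics.mean(per_animal_means)
sem = statistics.stdev(per_animal_means) / len(per_animal_means) ** 0.5
print(f"mean = {population_mean:.1f} ± {sem:.1f} (S.E.M., n = {len(per_animal_means)})")
```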
Scoring developmental stages

Developmental stages from L3 to young adult were assessed using the progression of the vulva (see Fig. 1A) and the molt. The two processes occurred synchronously in both fed and nutrient-deprived animals. Statistical significance of differences in arrest response was determined by two-tailed Fisher's exact test (a minimal sketch of this comparison appears at the end of this section). For tissue-specific daf-16 rescue experiments, percentages of L3- and L4-arrested animals were determined for each promoter-driven GFP::daf-16 strain and compared to the promoterless GFP::daf-16 strain (qyEx266). For tissue-specific daf-16 dsRNA feeding, percentages of L3- and L4-arrested animals were compared between animals fed either daf-16 dsRNA or vector control. A similar comparison was made in daf-16(mu86) animals fed either daf-9 dsRNA or vector control. All assays were repeated in triplicate with n ≥ 50 animals per assay.

Survival and recovery assays

Populations of wild type animals were removed from food either late in the L2 stage or in the middle of the L3 stage and starved in M9 buffer at an approximate population density of 1 animal/µl. Every 2 d, an aliquot of media containing at least 50 animals was removed using Rainex-coated tips to prevent adherence to the plastic, and plated onto NGM plates. After the liquid was absorbed into the plate, animals were determined to be alive if they moved, or dead if they did not move upon tail poke. Dead animals had a characteristic rod-like appearance. The median survival was determined as the first day at which 50% of the population was dead, for n = 3 trials.

To test recovery from nutrient deprivation, early L3 arrested animals were plated onto NGM+OP50 after 8 d in the absence of food. After 72 h at 20°C, the population was scored for fertile and nonfertile adults.

Determination of feeding requirements to bypass arrest

Synchronous populations were grown to the L2/L3 or L3/L4 molts. Animals in ecdysis were isolated by the appearance of detached cuticle separated from the body, which was most apparent in the head and tail regions. Individual animals were transferred onto NGM+OP50 plates for 30-90 min of further feeding or placed directly in M9 buffer. Animals were maintained in the absence of food for 24 h and the developmental stage assessed.
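To make the statistical procedures in this section concrete, here is a minimal, hypothetical sketch of the two computations described above (a two-tailed Fisher's exact test on bypass counts, and median survival from interval scoring). All counts and fractions below are placeholders rather than data from the paper.

```python
# Minimal sketch (not the authors' analysis code) of two computations from
# this Methods passage; all numbers are hypothetical placeholders.
from scipy.stats import fisher_exact

# (1) Two-tailed Fisher's exact test on arrest-bypass counts, e.g. daf-16
#     dsRNA vs. empty-vector control (n = 50 animals per group, placeholders).
table = [[35, 15],   # dsRNA group: bypassed, arrested
         [5, 45]]    # vector control: bypassed, arrested
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# The same fractions also yield the fold differences quoted in the Results:
fold = (table[0][0] / sum(table[0])) / (table[1][0] / sum(table[1]))
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}, fold = {fold:.1f}")

# (2) Median survival: the first scoring day (animals scored every 2 d) at
#     which at least 50% of the population was dead.
days = [2, 4, 6, 8, 10, 12]
fraction_dead = [0.04, 0.18, 0.36, 0.52, 0.78, 0.95]   # hypothetical scores
median_survival = next(d for d, f in zip(days, fraction_dead) if f >= 0.5)
print(f"median survival ≈ day {median_survival}")
```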
Generation of GFP::daf-16 transgenic strains

Transgenic strains expressing promoter-driven daf-16 cDNA fused at the N-terminus with GFP were generated by injection of the following plasmids: pNL205 (promoterless), pNL206 (unc-119 promoter), pNL209 (daf-16 promoter), pNL212 (myo-3 promoter), pNL213 (ges-1 promoter), pNL216 (unc-115 promoter), and pAS10 (col-12 promoter). With the exception of pAS10, plasmids were gifts of the Kenyon lab and are described elsewhere [42]. pAS10 was generated by PCR amplification of 1.1 kb of col-12 promoter 5′ to the start site, which was cloned into the SnaBI restriction sites in pNL205. GFP::daf-16 plasmids were injected at 100 ng/µl into daf-16(mu86); unc-119(ed4) adults with 50 ng/µl unc-119(+) plasmid. Animals carrying extrachromosomal arrays were isolated by rescue of the unc-119 locomotion defect, and the expression of GFP::DAF-16 was validated. Although unc-115 has been reported to be expressed in both neurons and hypodermis [64], expression was only detected in neurons. With the exception of qyEx266 (expressing pNL205), which did not possess a gene promoter for GFP::DAF-16, and qyEx292 (expressing pAS10), which sometimes had undetectable or minimal GFP expression in the absence of food, animals carrying the array were identified in nutrient removal assays by GFP expression. qyEx266 and qyEx292 animals were plated on NGM plates lacking food, and those that moved freely (indicating presence of the unc-119 rescue array) were selected for analysis.

Measurements of AC polarity, SM cell distance, and gonad elongation

To assess AC polarization, the average fluorescence intensity was determined from three-pixel-wide linescans drawn along either the basal or apicolateral membranes of Z-stack projections. To determine the movement of SM cell progeny, the distance between the nuclei of the two inner cells from among the four cells on each lateral half was measured. A similar measurement was made to determine the distance between the two outer nuclei. Gonad length was measured from the vulva to the distal end. All measurements were made using ImageJ software.
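The polarity measurement just described amounts to a ratio of mean linescan intensities along the basal versus apicolateral membranes. The sketch below illustrates that computation under this reading; the linescan values are invented placeholders, and the published measurements were made in ImageJ.

```python
# Hypothetical sketch of the AC polarity measurement: mean intensity along a
# basal-membrane linescan divided by the mean along an apicolateral linescan.
# Pixel values are placeholders; a ratio > 1 indicates basal enrichment of
# the F-actin probe (i.e., a polarized AC).
import statistics

basal_linescan = [210.0, 198.4, 225.1, 215.7, 204.9]        # placeholder pixel means
apicolateral_linescan = [88.2, 95.6, 90.4, 93.1, 87.5]

polarity_ratio = statistics.mean(basal_linescan) / statistics.mean(apicolateral_linescan)
print(f"basal/apicolateral ratio = {polarity_ratio:.2f}")
```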
Supporting Information

Figure S3. Arrest in L4 occurs at a precise time in vulval development. Cell-cell fusions occur between homotypic cells following terminal vulval cell divisions. Shown on the left is a lateral schematic of the vulva after terminal cell divisions. This view was rotated 90° to a distal-proximal view (from tail to midbody), and one half of the vulva is shown. Schematics show that the vulA cells on each lateral half of the vulval midline fuse at approximately 35 h post-hatch, followed 2 h later by vulC cell fusion [33]. Animals removed from food after 28 h (mid L3 stage) undergo vulA fusions but not vulC fusions (detectable by the presence of a membrane boundary between cells, yellow arrowhead). When animals were removed from food after 32 h (L3/L4 molt), 97% of the population arrested at the same stage of cell-cell fusions, although VPCs migrated further toward the midline (n = 30 per assay). Membranes are demarcated by the apical junction reporter gene AJM-1::GFP. Scale bars, 3 µm. (PDF)

Figure S4. Feeding is required after molting to bypass the L3 and L4 checkpoints. A schematic of the C. elegans larval stage is depicted on the left. Ecdysis, the shedding of cuticle, occurs after the period of lethargus and precedes foraging. Animals were removed from food at ecdysis or at 30 min intervals thereafter, and assayed for developmental stage after 24 h. Results show that approximately 30-60 min of feeding is required after molting to bypass the developmental checkpoints in the L3 and L4 stages. (PDF)

Figure 1. Removal from food induces arrest in vulval development early in the L3 stage. (A) Vulva development in the L3 and L4 larval stages. The 1°-fated vulval precursor cell (VPC), P6.p, expresses egl-17.GFP in green; P5.p and P7.p are specified to the 2° VPC fate. Basement membrane (BM; expressing LAM-1::GFP protein in green) separates the uterine and vulval epithelium. The uterine anchor cell (AC; expressing zmp-1.mCherry in magenta) is dorsal to P6.p and invades across the BM between the first and second VPC divisions. The final VPC divisions occur at the time of the L3/L4 molt. Molting animals can be distinguished by the formation of buccal caps covering the mouth (inset, bottom left panel). (Top right of panel A) Cell divisions produce 22 VPC progeny that comprise seven vulval subtypes, vulA-vulF. Some of the cells divide along the left-right axis (hatched lines in early L4 schematic) outside the central plane of focus. In the mid L4 stage, the AC fuses with the surrounding uterine cells, and egl-17.GFP expression changes from the 1°- to the 2°-fated cells. At the end of L4, the cells turn partially inside out (evert). Times for each developmental stage are after release from L1 arrest at 20°C. (B) Late L2 nutrient deprivation assay. Animals were removed from food after 22 h growth at 20°C and either returned to food or kept deprived of food. Starting at time 0, both groups were maintained at RT (22°C). In the chart, developmental stages on the Y-axis were determined by the extent of vulval development and the molt, and the duration of feeding or removal from food is indicated on the X-axis. The areas of the circles in the chart reflect the percentage of the population at each stage of development; n ≥ 50 for each time point. See Fig. S1 for raw data and results of replicate assays. (C) Animals after 2 d removal from food, with no divisions of P5.p-P7.p, and expressing the 1° fate marker egl-17.GFP (top) or the 2° fate marker lip-1.NLS-GFP (bottom). In these and other figures, anterior is left. Scale bars, 10 µm. doi:10.1371/journal.pgen.1004426.g001

Figure 2. Vulval development arrests at a precise time in the L4 stage. (A) Schematic and chart for the mid L3 nutrient removal assay. A wild type population was grown for 28 h at 20°C and removed from food. Stages of vulval development were assessed in fed and nutrient-deprived groups as described in Fig. 1B; n ≥ 50 for each time point. (B) Image of an L4-arrested animal 48 h after removal from food. egl-17.GFP was expressed exclusively in 1° VPC progeny and VPC divisions had completed. The AC, expressing zmp-1.mCherry, invaded across the basement membrane but did not fuse with the surrounding uterine cells. (C) Schematic of the early L4 nutrient deprivation assay, with animals grown for 36 h at 20°C and scored for developmental stage as described for Figs. 1B and 2A. See Fig. S1 for raw data and replicates of the assays in (A) and (C). (D) Adult animal after 2 d removal from food, with eversion of the vulva. Scale bars, 10 µm. doi:10.1371/journal.pgen.1004426.g002
Figure 3. A systemic mechanism coordinates the timing of L3 arrest. (A) Precocious VPC divisions in hbl-1(ve18) mutant animals. At the time of the L2/L3 molt, P5.p and P6.p have undergone cell divisions. (B) Variable number of P6.p progeny after 2 d removal from food. The number of P6.p progeny was counted when food was removed late in the L2 stage, and again after 2 d. (C) Image of hbl-1(ve18) after 2 d removal from food, with four P6.p progeny and two P5.p progeny. The AC (white arrow) has not breached the basement membrane, indicative of arrest early in the L3 stage. Scale bars, 10 µm. doi:10.1371/journal.pgen.1004426.g003

Figure 4. Arrest occurs at a specific time in the larval stage and molting cycle. (A) Percentage of animals with pharyngeal pumping in fed and nutrient-deprived groups maintained at 22°C; n ≥ 50 at each time point per group. Absence of pharyngeal pumping indicates that animals are in lethargus. (B) An L3-arrested animal having completed ecdysis of the L2 cuticle, after 24 h in the absence of food. The shed cuticle surrounds the head. (C) Images of fed and nutrient-deprived animals expressing the molting cycle reporter gene mlt-10.GFP-PEST. The chart shows quantification of mlt-10.GFP-PEST expression levels over an 8 h interval starting late in the L2 stage. Error bars ± S.D. for n ≥ 30 at each time point. Scale bars, 10 µm. doi:10.1371/journal.pgen.1004426.g004

Figure 5. The insulin-like signaling pathway regulates progression past the L3 and L4 checkpoints. (A) daf-16(mu86) animals were grown to late in the L2 stage, removed from food, and the stage of development assessed at intervals using vulval development and the molt as markers; n ≥ 50 at each time point. (B) Similar assay as in (A), with daf-16(mu86) animals grown to late in the L3 stage. See Fig. S5 for raw data and replicates of the assays in (A) and (B). (C) Images of daf-16(mu86) animals after 2 d removal from food and arrested in the early L3 (top) or early L4 (bottom) stages. No VPC divisions were observed in L3-arrested animals, and VPCs completed divisions in L4-arrested animals. (D) Wild type, daf-2(e1370), and daf-2(e1370); daf-16(mu86) animals were fed to the mid L2 stage at 15°C and shifted to 25°C. After 24 h additional feeding at 25°C, the developmental stage was examined for n = 100 animals per genotype. In the presence of food, daf-2(e1370) animals paused preferentially early in the L3 and L4 stages (highlighted in magenta). Scale bar, 10 µm. doi:10.1371/journal.pgen.1004426.g005

Figure 6. Expression of DAF-16 in the hypodermis regulates the L3 nutritional response. (A) A schematic diagram of the transgenes tested for rescue of the daf-16(mu86) bypass phenotype and their sites of expression (see also Supplemental Fig. S6). (B) daf-16(mu86) animals expressing the transgenic arrays in (A) were assayed for bypass of L3 arrest. Averages of 3 assays; n ≥ 50 per assay. Error bars denote 95% confidence interval; *p < .0001, **p < .001 by two-tailed Fisher's exact test. (C) Wild type and tissue-specific RNAi-sensitive strains (see Experimental Procedures for descriptions) were fed either L4440 (empty vector control) or daf-16 dsRNA, and the percentage of animals bypassing L3 arrest was measured 2 d after removal from food. Average of 5 assays (wild type) or 3 assays (all others); n ≥ 50 per assay. Error bars denote 95% confidence interval; *p < .0001 by two-tailed Fisher's exact test. doi:10.1371/journal.pgen.1004426.g006
Figure 7. DAF-9 regulates L3 and L4 arrest downstream of DAF-16. (A) daf-16(mu86) animals were fed either L4440 (empty vector) or daf-9 dsRNA and the percentage of animals bypassing L3 and L4 arrest measured 2 d after removal from food. Average of 3 assays; n ≥ 50 per assay. Error bars denote 95% confidence interval; *p < .0001 by two-tailed Fisher's exact test. (B) DAF-9::GFP (dhIs64) animals were removed from food late in the L2 stage and developmental progression assessed for n ≥ 50 at each time point. See Fig. S5 for raw data and replicates. (C) Images of adult DAF-9::GFP-expressing animals. The top shows a tail region with both the L3 and L4 cuticles (arrows) still attached. The bottom is a dead or dying animal that has not shed the L4 cuticle surrounding the head (arrow). (D) Normalized expression levels of hypodermal DAF-9::GFP following nutrient deprivation late in the L2 stage. After 24 h, some animals still had detectable levels of the transgene. Error bars ± S.E.M.; n = 20 for each time point. Scale bars, 10 µm. doi:10.1371/journal.pgen.1004426.g007

Figure 8. Arrest of tissues in the L3 and L4 stages. (A) Anchor cell (AC) arrest. Both fed and nutrient-deprived animals showed polarization of the AC-specific F-actin probe cdh-3.mCherry::moesinABD. Insets show heat maps of cdh-3.mCherry::moesinABD. The chart is a quantification of polarity in fed and nutrient-deprived groups. Error bars ± S.E.M.; n = 16 per group. (B) Sex myoblast (SM) arrest. The SM cells divide three times between the mid L3 and early L4 stages, as depicted in the cell lineage diagram. Following cell divisions, the progeny cells (visualized with an HLH-8::GFP reporter gene) undergo short-range migrations during the L4 stage. In L3-arrested animals, no cell divisions occurred (n = 30). In animals removed from food in mid L3 that had arrested in early L4, two rounds of cell divisions typically occurred, although variability was present in the population. The graph shows the percentage of the population with the indicated number of SM cell progeny (n = 30). When animals were grown to later times in L3 to allow completion of cell divisions, the short-range cell migrations that occur during the L4 stage were not observed. The chart compares distances between the nuclei of the two inner cells from each group of four (white brackets), which move closer together during the L4 stage, with the distance between the nuclei of the two outer cells (yellow brackets), which move further away from each other during L4. Error bars ± S.D.; n = 20 per group; p = .36 by two-tailed Student's t-test. (C) Seam cell arrest. Seam cells, which are separated by adherens junctions, divide during the molt, followed by fusion of the anterior daughter cell and re-formation of adherens junctions early in the larval stage. When animals were removed from food in the L3 stage and examined after 2 d, seam cells showed a variable pattern of arrest. In the top image, the posterior seam cells have divided but the anterior cells have not. In the bottom image, seam cells have divided and anterior daughter cells have fused, but the adherens junctions that separate cells have not re-formed. The chart shows quantification of the arrested state of the V1 seam cell 2 d after food removal (n = 30). Seam cells were visualized with an AJM-1::GFP reporter gene. (D) Gonad elongation arrest. One of two gonad arms, outlined in magenta, in a fed early L4 animal and in an animal removed from food in the mid L3 stage. n = 20 animals; error bars ± S.D.; *p < 1×10⁻¹⁰ by two-tailed Student's t-test. Scale bars, 10 µm.
doi:10.1371/journal.pgen.1004426.g008

Figure S1. Raw data and replicate assays of wild type time course experiments. The graphs in Figs. 1B, 2A, and 2C are reproduced to show the percentages of animals and sample sizes at each time point. Percentages are rounded to the nearest whole number and may not equal 100. To the right are the results of three replicate assays, with measurements at 24, 48, and 96 h. The percentage of animals at each developmental stage is averaged across the three experiments. Error bars ± S.D. (PDF)

Figure S5. Raw data and replicate assays of daf-16(mu86) and DAF-9::GFP time course experiments. The graphs in Figs. 5A, 5B, and 7B are reproduced to show the percentages of animals and sample sizes at each time point. Percentages are rounded to the nearest whole number and may not equal 100. To the right are the results of three replicate assays, with measurements at 24, 48, and 96 h. The percentage of animals at each developmental stage is averaged across the three experiments. Error bars ± S.D. (PDF)

Figure S6. Expression patterns of GFP::DAF-16 transgenic strains. Promoterless GFP::DAF-16 has no detectable expression (signal is due to autofluorescence). daf-16.GFP::DAF-16 had prominent expression in neurons (N), hypodermis (Hyp), body wall muscle (BWM), and intestine (Int). Tissue-specific strains expressed in the predicted tissues: the unc-119 and unc-115 promoter constructs in the neurons, myo-3 in the body wall muscle, ges-1 in the intestine, and col-12 in the hypodermis. All bottom images were taken at a 500 ms exposure. Scale bar, top images, 100 µm; bottom images, 10 µm. (PDF)

Figure S7. Expression of col-12.GFP::DAF-16 declines following removal from food compared to daf-16.GFP::DAF-16. Fluorescence intensity measurements were taken of hypodermal nuclei in feeding L3 stage animals and 1-3 d after removal from food. Error bars ± S.E.M.; n = 25 animals for each measurement. (PDF)
2016-05-04T20:20:58.661Z
2014-06-01T00:00:00.000
{ "year": 2014, "sha1": "d0f741cc544c1615fd58ccf885e83bd8bea83687", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1004426&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a9544efb5a2faa8553d1e2f00e3b4069d01f9807", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
34068464
pes2o/s2orc
v3-fos-license
Effect of sodium butyrate on synthesis of specific proteins by human breast-carcinoma cells.

SODIUM BUTYRATE has been observed to reduce growth rate, alter synthesis of specific proteins and induce biochemical and morphological differentiation in a variety of cultured tumour-cell lines (Prasad & Sinha, 1976). Its effects on cell lines of human origin include: (1) induction of erythroid differentiation in leukaemia cells (Andersson et al., 1979); (2) induction of neurite formation in neuroblastoma cells (Prasad & Kumar, 1974); (3) stimulation of synthesis of the glycoprotein hormones FSH and hCG, and of their common α-subunit, by HeLa cells (derived from cervical carcinoma) (Ghosh & Cox, 1976; Lieblich et al., 1977); and (4) stimulation of synthesis of α-subunit by a bronchial-carcinoma line (Chou et al., 1977). We report here that sodium butyrate has marked effects on protein synthesis in human breast-carcinoma cells. These actions are not limited to induction of differentiation.

Production of α-subunit is normally confined to placental tissue and to certain endocrine cells. However, inappropriate synthesis of glycoprotein hormones and their subunits has been noted in a variety of tumour types (Rosen et al., 1975). In order to clarify the actions of butyrate we have studied its effects on the MCF7 human breast-carcinoma line, with regard to synthesis of (a) the milk protein lactalbumin, (b) α-subunit, an inappropriate product, and (c) the oncofoetal antigen CEA. These three proteins are frequently present in primary carcinomas of the breast (Woods et al., 1979; Cove et al., 1979a,b). The mammary origin of the MCF7 line has been amply confirmed (Engel & Young, 1978). We have found that butyrate causes a dose-related stimulation of lactalbumin and α-subunit production; CEA synthesis is only slightly increased.

MCF7 cells were obtained from Dr Marvin Rich, Michigan Cancer Foundation, in August 1976. The cells were grown in Dulbecco's modification of Eagle's medium containing 10% foetal bovine serum, insulin (1 ng/ml) and penicillin (200 u/ml). Replicate plates were seeded on Day 0 of the experiments and maintained for 3 days at 37°C in 95% air, 5% CO2. In order to measure both intracellular and secreted protein products, the cells were then disrupted in their supernatant either with a manual homogenizer (first experiment) or by freezing and thawing ×3 (second and third experiments). The supernatant obtained by centrifugation at 100,000 g for 60 min was concentrated 5-fold by lyophilization. Radioimmunoassays for lactalbumin (Woods & Heath, 1977), α-subunit (Cove et al., 1979a,b) and CEA (Booth et al., 1973) were as described. The specificity of the assays has been examined in detail in preliminary studies. Cytosol preparations of uterus and kidney produced no displacement of tracer. Controls for the experiments reported here included unused culture medium concentrated 5-fold with and without butyrate, which produced no response in the assay systems. To confirm the presence of specific proteins in the samples, assays were performed at serial dilutions to demonstrate parallelism with the standard curve (a minimal sketch of such a parallelism check is given at the end of this report).

The effect of butyrate on cell number and protein synthesis is shown in Fig. 1 and the Table. The observed stimulation of lactalbumin and α-subunit production does not appear to be a direct result of inhibition of growth, since we have shown in other experiments that it could not be reproduced when growth was retarded by confluence or by sub-lethal (55 nM and 550 nM) concentrations of methotrexate.

MCF7 cells exposed to 5 mM butyrate showed two ultrastructural features which were inconspicuous in controls, namely electron-dense granules and clusters of microvilli. Both features were prominent around lumina which appeared to be intracellular ducts, though an intercellular location could not be excluded (Fig. 2). These duct-like structures have been described before in MCF7 cells (Russo et al., 1977). The morphology of control and treated cells was otherwise similar.

The reduction in cell numbers on exposure to 5 mM butyrate (Fig. 1) suggests that this concentration is toxic. We are unable to separate inhibition of growth from increased cell death in these studies. However, the experiments were designed to measure the total amounts of the specific proteins in the system, so that passive release of proteins from damaged cells cannot account for the results obtained.

Synthesis of lactalbumin by MCF7 has been detected by several groups, though the amounts detected have varied widely (Lieblich et al., 1977). We have noted a fall in the rate of synthesis of lactalbumin in our cultures of MCF7 over a period of many months. This is reflected in the response to butyrate (Table) in separate experiments, and may be due to a progressive change in the cell population.

The effects of butyrate on human mammary tissue have not previously been reported. Sodium butyrate does not alter the growth rate of rat mammary tumours in vivo (Cho-Chung & Gullino, 1974). It is clear from the reports cited above and from our own data that butyrate can profoundly modify gene expression in neoplastic mammalian cells of diverse origins. Although its effects are selective (Rubinstein et al., 1979), they are not confined to the induction of differentiated characteristics. It is of particular interest that butyrate can stimulate ectopic synthesis of α-subunit by HeLa and bronchial-carcinoma cells, yet inhibit eutopic synthesis of the same protein by three different strains of trophoblastic tumour (Chou et al., 1977). The basis of such selectivity deserves further study.

This work was supported by an M.R.C. project grant.
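A parallelism check of the kind referred to above can be pictured as comparing the slope of sample serial dilutions with that of the standard curve on a log-dilution axis. The sketch below is illustrative only: the response values are invented placeholders, and the original assays were analyzed against their own standard curves.

```python
# Hypothetical sketch of a parallelism check for a radioimmunoassay:
# responses from serial dilutions of a sample should fall on a line roughly
# parallel to the standard curve on a log-dilution axis. Values are placeholders.
import math

def slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

dilutions = [1, 2, 4, 8, 16]
log_dil = [math.log2(d) for d in dilutions]
standard_response = [0.90, 0.72, 0.55, 0.38, 0.21]   # placeholder bound/free ratios
sample_response   = [0.84, 0.67, 0.49, 0.33, 0.16]   # placeholder sample dilutions

s_std, s_smp = slope(log_dil, standard_response), slope(log_dil, sample_response)
print(f"standard slope = {s_std:.3f}, sample slope = {s_smp:.3f}")
# Roughly equal slopes are consistent with parallelism, i.e. the assayed
# material behaves immunologically like the standard.
```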
2014-10-01T00:00:00.000Z
1980-10-01T00:00:00.000
{ "year": 1980, "sha1": "bcf64e80438240277f10d1c1165b8d914b16c932", "oa_license": null, "oa_url": "https://www.nature.com/articles/bjc1980287.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "bcf64e80438240277f10d1c1165b8d914b16c932", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
1959384
pes2o/s2orc
v3-fos-license
Does Secondary Inflammatory Breast Cancer Represent Post-Surgical Metastatic Disease?

The phenomenon of accelerated tumor growth following surgery has been observed repeatedly and merits further study. Inflammatory breast carcinoma (IBC) is widely recognized as an extremely aggressive malignancy characterized by micrometastasis at the time of diagnosis, with one interesting subgroup defined as secondary IBC, in which pathologically identifiable IBC appears after surgical treatment of a primary non-inflammatory breast cancer. One possible mechanism can be related to the stimulation of dormant micrometastases through local angiogenesis occurring as part of post-traumatic healing. In this report, we review cases identified by our IBC Registry (IBCR), both of secondary IBC and of others where localized trauma was followed by the appearance of IBC at the traumatized site, and hypothesize that angiogenesis appearing as part of the healing process could act as an accelerant to an otherwise latent breast malignancy. It is therefore possible that secondary IBC can be used as a model to support local angiogenesis as an important contributor to the development of an aggressive cancer.

Introduction

Inflammatory breast cancer (IBC) is widely recognized as an extremely aggressive malignancy that is usually characterized by micrometastases at the time of diagnosis. IBC is characterized clinically as a rapidly growing tumor with skin manifestations of erythema, warmth and edema, and pathologically by invasion of the dermal lymphatics with tumor microemboli. In 1938, Taylor and Meltzer described two types of IBC: the primary form, where the characteristic clinical features were prominent from the outset, and the secondary form, where the clinical features appeared subsequent to treatment for a primary non-inflammatory breast cancer [1]. IBC affects approximately 2.5% of women with breast cancer annually in the United States and thus affects more than 4,800 women each year, more than twice as many as those developing chronic myelocytic leukemia or acute lymphocytic leukemia [2]. It is a clinically and pathologically distinct form of breast cancer that is particularly fast growing, highly angiogenic and angioinvasive, with its aggressiveness and angiogenicity present from its inception. The precise case definition is controversial [2], with the American Joint Committee on Cancer (AJCC) focusing on a clinical case definition [3], and the Surveillance, Epidemiology and End Results (SEER) Program of the National Cancer Institute focusing primarily on a pathological case definition [4]. While the typical patient presents with pain and a tender, firm, enlarged breast with the symptoms developing in less than six months, IBC may be diagnosed with less than half of the breast involved and without the pathological confirmation of dermal lymphatic invasion [2,5]. The skin over the breast is reddened, warm, thickened and often has a pitted appearance termed "peau d'orange". The designation "inflammatory" stems from the clinical appearance that mimics an acute inflammation, but this is somewhat of a misnomer [6]. Dermal lymphatic occlusion by tumor infiltrate, a finding which pathologists rely upon to confirm their clinical diagnosis, is believed to lead to increased vascular pressure and stasis; inflammation does not actually contribute in any consequential way to the skin manifestations [7].
Since IBC tumors produce negligible amounts of most inflammatory cytokines, host inflammatory cells are rarely detected around the tumor stroma [6]. The Inflammatory Breast Cancer Registry (IBCR) was developed to provide a standardized population of IBC patients for epidemiologic and laboratory studies, and among the 156 patients enrolled thus far, eight were identified as having secondary IBC. In our review of these cases and others where localized trauma was followed by the appearance of IBC at the traumatized site, we hypothesized that angiogenesis appearing as part of the healing process could act as an accelerant to an otherwise latent breast malignancy. We present examples of this possible phenomenon which could suggest a population of patients for further investigation.

Experimental Section

The Inflammatory Breast Cancer Registry (IBCR) was established 1 June 2002 to collect standardized clinical data and biospecimens from patients with IBC in the United States and Canada [2]. Patients with IBC who were entered into the Registry were at least 18 years of age, signed an Informed Consent, and agreed to be interviewed and to provide access to medical records and tissue blocks. Patients contacted the Registry after learning about it on the internet or from local oncologists. It was funded initially by the Department of Defense and is now supported by laboratories that use the tissue samples to characterize the disease. In this report we document the histories of two patients with secondary IBC as well as two additional patients whose disease presentation also supports the possible occurrence of IBC secondary to breast trauma. Secondary IBC cases were defined as women who have had surgery for non-inflammatory breast cancer with recurrence manifest as skin erythema shown to be associated with pathologically confirmed tumor emboli in the dermal lymphatics.

IBC 13 - Secondary IBC

This 58 year old woman was diagnosed with Stage II infiltrating ductal carcinoma of the right breast in November 1992. She had a modified radical mastectomy and lymph node dissection, three of seventeen nodes being positive for tumor. She was treated with six months of cyclophosphamide, methotrexate and 5-fluorouracil and had no evidence of recurrence. In January 2000 she had a second right breast reconstructive operation, and post-operatively redness was noted at the surgical scar site, which was first considered to be an allergic reaction and was not biopsied. In August 2000 she developed axillary metastases and was treated with herceptin, taxotere and carboplatin. While on chemotherapy the redness progressively worsened and was eventually documented as being due to dermal lymphatic invasion. In March 2002 she developed skin involvement of the left breast, which also showed dermal lymphatic invasion, and she died in May 2008.

IBC 20 - Secondary IBC

This 45 year old woman was noted to have a 3.5 cm mass with spiculated margins on 13 June 2000. One week later she was diagnosed as having an infiltrating ductal carcinoma and a lumpectomy was subsequently performed. No skin involvement was observed. In July she had a right partial mastectomy and there appeared to be a complete resection with adequate margins beyond the tumor. Three of 14 axillary lymph nodes were noted to be involved with tumor. The surgery was followed by chemotherapy with adriamycin and cytoxan. Erythema of the skin first appeared in September 2000 and biopsy showed dermal lymphatic invasion with tumor microemboli.
Despite treatment with Taxol and a right total mastectomy which showed no residual tumor, she developed metastatic disease and died in September 2003.

IBC 36 - Post-Traumatic IBC

This 63 year old woman with a history of fibrocystic disease had a routine mammogram on 25 October 2000 which showed "fibroglandular elements" in the left breast. On clinical exam the left breast seemed larger than the right and there was some flattening of the nipple. She had first noticed in August 2000 some tenderness of the left breast, and the breast felt engorged. An ultrasound was performed which showed a small cyst in the retroareolar region with dilated retroareolar ducts. The surgeon suggested a ductogram, which was performed on 18 December 2000 and showed evidence of a filling defect in one of the ducts approximately 5 cm posterior to the areola. The patient described the procedure as being extremely painful, and subsequently she had constant pain in the left breast. She later awoke at night with breast engorgement and heaviness accompanied by nipple inversion, and she insisted on an evaluation and biopsy. A biopsy on 15 January 2001 showed poorly differentiated ductal carcinoma with extensive lymphatic carcinomatosis. On further evaluation one week later the diagnosis of IBC was made, based on erythema and some peau d'orange under her breast. She received three cycles of adriamycin and cytoxan with an immediate clinical response, followed by a left modified radical mastectomy performed on 30 March 2001 which showed a residual mass with multifocal lymphatic involvement, including invasion of the dermal lymphatics. Surgery was followed by a fourth cycle of adriamycin and cyclophosphamide and three cycles of Taxol.

IBC 46 - Post-Traumatic IBC

This 33 year old woman was in good health and was employed as a civilian working for the military in Guam when she decided to have her nipples pierced. The procedure was performed in early 1999, and subsequently she noticed that the right nipple slowly began to swell. By the end of December the swelling was very prominent and she had the ring removed. On 1 January 2000 she noted a large lump behind the areola. She was able to get an ultrasound in Guam and noticed that the lump had doubled in size over the next five days; she also developed pain with intermittent stinging sensations but no redness. Her original Ob/Gyn doctor in Guam thought it to be an infection and put her on antibiotics. While traveling back to the U.S., where she was hired for a job in Washington, DC, the patient developed erythema of the entire breast with a thumb-size port wine stain laterally. She noted that the lump was now the size of a grapefruit. On evaluation in the U.S. on 28 January 2000 the right breast was noted to be tender, painful and swollen, with hyperemia, dermal thickening and induration, especially in the inferolateral breast. A mammogram that day showed only nonspecific changes suggestive of mastitis. She was treated with antibiotics with no improvement. A biopsy of an indurated part of the port wine stain documented a malignancy and the diagnosis of IBC was made. The mass grew quickly, involving more than half the breast with a peau d'orange appearance, and she received four cycles of adriamycin and cytoxan with excellent response. Mastectomy performed in June 2000 showed infiltrating ductal carcinoma with involved margins and dermal/lymphovascular invasion. All five lymph nodes examined showed metastatic disease. In May 2001 she developed severe back pain and MRI suggested lesions in C7 and T11.
She was started on radiation therapy, but in June she developed lung lesions and she was treated with taxotere followed by herceptin and letrozole. Her disease persisted but she continued treatment and survived until June 2010.

Discussion

The phenomenon of accelerated tumor growth following surgery has been observed repeatedly and merits further study in order to determine which patients are more likely to encounter this problem. It appears that there is more than one clinical manifestation of this occurrence, however, and more than one mechanism may be involved. Other reports in this symposium [8,9] have found a hormonal pathogenesis whereby removal of the primary tumor has led to a release of an inhibition on latent metastases. Another possible mechanism, however, is the stimulation of tumor growth through local angiogenesis occurring as part of post-traumatic healing. The central importance of tumor neovascularization has been emphasized by clinical trials using antiangiogenic therapy in breast cancer. Although findings to date have not indicated significant benefits in terms of survival, significant improvements in response rates have nevertheless been documented [10].

"Surgery-driven enhancement of metastases", the subject of a recent review [11] as well as a focus of this symposium, may well be exemplified by secondary IBC. IBC is a particularly aggressive form of breast cancer, treated initially with chemotherapy because of the likelihood of dissemination of micrometastases from the outset. The clinical and pathologic findings, while differing in extent from patient to patient, are striking and readily apparent to anyone familiar with this disease. A rapid spread of erythema, often with documented invasion of the dermal lymphatics, is pathognomonic of IBC. A diagnosis of inflammatory breast carcinoma is made on the basis of the clinical findings. A skin biopsy specimen that is negative for dermal lymphatic invasion does not rule out inflammatory carcinoma [7]. Our experience with the IBC Registry, which has currently enrolled 156 well documented patients with the disease, has confirmed that most patients present with the sudden appearance of redness, swelling and tenderness of the breast. One interesting subgroup, however, comprises the eight patients in our Registry meeting the case definition of secondary IBC, described by Taylor and Meltzer, who noted that "In the group which we would designate as secondary (IBC) the inflammatory manifestations may appear suddenly in a breast which has long been the seat of a scirrhous carcinoma…or it may follow mastectomy for scirrhous carcinoma, either at the original site or the opposite breast" [1]. Our experience with IBC, noted in the case reports above, suggests that local trauma, probably mediated in large part by angiogenesis, can be an important trigger of IBC. As with primary IBC, the clinical presentation is not uniform, but striking occurrences such as described for IBC patients 13 and 20 clearly link the initial appearance of IBC to the site of surgical trauma.

In this report, we describe two cases compatible with secondary IBC that have been identified by our IBC Registry and two cases where IBC appeared at the site of local trauma. In these four cases, the brief interval between the breast trauma and the appearance of clinical evidence of IBC (Table 1; the date arithmetic is sketched below) is understandable in view of the rapidity with which IBC advances.
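The intervals summarized in Table 1 follow from simple date arithmetic on the case histories. The sketch below computes two of them from the dates given in the case reports above; where the text gives only a month, the first of the month is used as a placeholder, so those figures are approximate.

```python
# Sketch of the trauma-to-IBC interval arithmetic behind Table 1. Dates are
# taken from the case reports; month-only dates use the first of the month
# as a placeholder, so those intervals are approximate.
from datetime import date

cases = {
    "IBC 20: partial mastectomy -> erythema": (date(2000, 7, 1), date(2000, 9, 1)),
    "IBC 36: ductogram -> diagnostic biopsy": (date(2000, 12, 18), date(2001, 1, 15)),
}
for label, (trauma, onset) in cases.items():
    delta = (onset - trauma).days
    print(f"{label}: ~{delta} days (~{delta / 30.4:.1f} months)")
```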
It is reasonable to hypothesize that latent cancer cells remain after surgery and usually do not manifest clinical signs unless stimulated by local angiogenesis. Not only can surgery promote shedding of tumor cells from the malignant tissue into the blood and lymphatic system, but it could also eliminate the distant anti-angiogenic effect associated with the primary tumor's presence (carried by factors such as angiostatin and endostatin) and thus promote the survival of microscopic cancer foci [8]. The tissue damage and subsequent inflammatory response induced by surgery can also lead to elevation of pro-angiogenic factors and growth factors (e.g., EGF) [8]. We propose that consideration be given to focusing on possible parallels between "surgery-driven enhancement of metastases" and secondary IBC to identify opportunities to further understand the mechanism for this phenomenon. The possibility that trauma can be etiologically related to cancer is raised primarily by a number of case reports. In 1933, Coley and Higinbotham reported 360 cases of bone sarcoma, of which 181 (50%) had histories of trauma, and 205 cases of breast carcinoma, of which 70 (34%) were associated with trauma [12]. An impressive series has suggested trauma as a cause of bone cancer [13], and a review of post-surgical bone cancers associated with metal implants identified 22 bone tumors of various types, 17 of them reported since 1980 [14]. Several studies have also observed an increased relative risk of carcinoma associated with a history of nasal trauma or injury [15]. A number of studies have reported brain tumors to be significantly associated with trauma [16], and the role of angiogenesis in the healing process has been suggested as an important contribution to tumor aggressiveness [17,18]. This symposium [8,9] follows previous reports [11,19,20] noting the possible contribution of surgery to aggressive breast cancer, but the major focus in these reports has been the systemic contribution of excising a tumor to promoting the acceleration of distant latent micrometastasis [8,9,20]. Surgery remains an effective therapy for solid tumors in the U.S. and dramatically improves survival rates. Recurrences remain the most important challenge; almost one third of surgical patients will ultimately experience local and/or systemic recurrence [21]. The attribution of a malignancy/metastasis to local trauma by searching for a reason for the disease (recall bias) is always a possibility, but in view of the hypothesis that trauma in the form of surgery can stimulate angiogenesis, which can accelerate tumor growth, the documentation of IBC appearing following a surgical event and precipitated by it (Cases 13 and 20) merits consideration.

Conclusions

The evidence presented in this symposium and in careful reviews [8,9,11] indicating that surgery can facilitate the appearance of metastatic disease requires considerable attention. While surgery is clearly an important tool in curing breast cancer and is not questioned in the initial treatment of this disease, a discussion of the potential risks of surgery in breast reconstruction is perhaps warranted, particularly if the patient is apparently disease-free after treatment for IBC. In view of the hypothesis that trauma can stimulate angiogenesis, which can accelerate tumor growth, the documentation of IBC appearing at the site of a traumatic event merits consideration.
Our experience with IBC, noted in the case reports above, suggests that local trauma, probably mediated in large part by angiogenesis, can be an important trigger of IBC. Studies on human-murine xenograft models such as Mary-X [22] and WIBC-9 [23] have provided insights into the biology of inflammatory breast cancer. These models could be used to test our hypothesis that surgically induced angiogenesis promotes metastasis at the histological and molecular level. We would therefore suggest that secondary IBC be considered for investigation as one possible mechanism for post-surgical tumor dissemination. A major question is how to identify patients at increased risk for this possible complication. Further attention to animal models and a more systematic study of the risk factors in patients with secondary IBC could be helpful.
Exploring the Influence of Touch Points on Tourist Experiences at Crisis Impacted Destinations

Customer journeys in tourism are becoming more complex, often including multiple touch points that can influence expectations, experiences, and travel behaviors. The management of these different interactions is further complicated if tourist destinations face natural or man-made crises (e.g., financial crises, COVID-19). The current research takes a comprehensive look at how negative word-of-mouth (WOM) shapes pre-consumption expectations that drive actual tourist experiences and subsequent satisfaction behaviors. Using partial least squares structural equation modeling (PLS-SEM), findings from 188 tourists confirm the influence of uncontrollable, negative WOM on destination image. Yet an actual, positive experience negates these negative pre-trip influences. Tourism managers are rewarded with satisfied and loyal tourists in response to creating positive experiences even at crisis impacted destinations.

Introduction

Tourism destinations and businesses increasingly focus on designing and managing strong customer experiences (Lunardo and Ponsignon 2020). In fact, over 72.0% of businesses incorporate customer experience optimization in their strategic positioning (Kranzbühler, Kleijnen, and Verlegh 2019; Lemon and Verhoef 2016). This trend acknowledges that businesses and customers engage via multiple touch points (i.e., moments of customer interaction and contact with a firm) throughout the duration of the experience (Becker and Jaakkola 2020). These individual touch points together yield a customer journey across various channels, such as online channels including social media or mobile applications, that can lead to satisfying or dissatisfying post-purchase outcomes (Kranzbühler, Kleijnen, and Verlegh 2019). Customer satisfaction, as a critical component in assessing travel experiences, remains a focus of destination marketing organizations (DMOs) seeking to succeed in an increasingly competitive tourism industry (Ribeiro et al. 2018). In managing these tourist experiences, limited research has examined the impact of negative information in shaping pre-travel consumption and, consequently, post-consumption satisfaction and loyalty tendencies (e.g., Nam et al. 2020). Yet unfavorable information about destinations in general, and crisis impacted destinations specifically, influences the actual experience; crises can range from natural disasters to financial crises, pandemics, and regional conflicts (Ghaderi, Som, and Henderson 2012). While sharing of negative information is often associated with traditional media such as TV and print, word-of-mouth (WOM) is another common channel. Indeed, previous findings identified both positive and negative WOM as drivers of beliefs and knowledge formation about a destination (Reza Jalilvand et al. 2012). Still, DMOs primarily focus on positive WOM's influence in promoting destinations rather than on uncontrollable, negative WOM by travelers (Reza Jalilvand et al. 2012). Prior research confirmed the detrimental and long-term impact of crises on countries as well as corresponding tourism industries, leading to a continuous investigation of these effects over decades (Khalid, Okafor, and Shafiullah 2020). One prominent example remains the global financial crisis starting in 2007 and its significant, long-lasting impact on countries like Greece, Croatia, and Italy (Dogru and Bulut 2018).
Across various types of crises, being prepared and monitoring market trends can assist in minimizing risks, staying competitive, and surviving future crises (Khalid, Okafor, and Shafiullah 2020). Therefore, understanding the impact of negative information related to a financial crisis, or the financial elements of a crisis, can assist in responding to its long-term effects by managing tourist experiences. To date, the majority of studies have focused on internal firm perspectives when managing negative WOM associated with a crisis (e.g., Zheng, Liu, and Davison 2018). Limited research has explored the effects of negative WOM within the destination management context. Additionally, while gender has moderated effects in prior destination image studies (Assaker et al. 2015; Huang and van der Veen 2019), the crisis impacted destination context remains largely unexplored. In light of the current global economic situation in response to COVID-19, assessing uncontrollable sources of negative information and providing insights on how DMOs can proactively manage them addresses timely concerns. Considering the pandemic's current stage, it can be challenging to fully understand and examine its prolonged economic impact on the tourism industry at this time (Xiang, Fesenmaier, and Werthner 2020). Subsequently, adding new insights on handling crises in general can benefit tourism marketers and corresponding regions in dealing with new crises by learning from previous catastrophes (Assaker and O'Connor 2020; Avraham 2015). The current study context of the global financial crisis of 2007 mirrors the economic and financial ramifications of the current pandemic, both spanning numerous countries (Lederer 2021; The World Bank 2020). Therefore, using the global financial crisis as a proxy for the current pandemic allows us to draw insights from actual tourist experiences at a crisis impacted destination rather than relying on anticipated experiences, as travel restrictions and limited mobility of tourists rendered the required data inaccessible (Lim 2021; Xiang, Fesenmaier, and Werthner 2020). By positioning this research within the customer journey framework, the study aims to examine multiple interactions between tourists and companies across the different consumption stages. Rather than evaluating the objective financial situation of a destination, the current research assesses tourists' subjective perceptions of the travel experience and, subsequently, of the destination. Specifically, the assessment focuses on the influence of negative WOM targeting a destination impacted by a crisis on pre-consumption expectations. Moreover, the influence of these expectations on the actual experience, namely disconfirmation, and on succeeding post-consumption outcomes is further investigated. The global financial crisis from 2007 to 2018 offers a suitable study context considering that the current global pandemic displays comparable financial hardship and economic ramifications (Lederer 2021; The World Bank 2020). The contributions of this study hinge on introducing a crisis context to customer journeys in tourism. More specifically, the assessment of various touch points representative of the pre-, during-, and post-consumption stages of the customer journey offers compelling insights in light of crisis impacted destinations. Contributions offer guidance to DMOs who face negative, uncontrollable information such as negative WOM during or after a crisis.
In combating these negative influences, DMOs need to focus on creating positive internal touch points in the form of successful tourist experiences, which negate these negative pre-consumption influences. Subsequently, tourists will express satisfaction and loyalty toward the business as well as the destination in general during post-consumption, which could lead into the next pre-consumption phase.

Customer Journey in Tourism

Tourism is becoming more complex and interactive through the integration of multiple touch points allowing consumers to engage with a company through different channels and media, particularly prior to a consumption journey (Lemon and Verhoef 2016). Touch points can be internal (e.g., the hotel a tourist is staying in) or external (e.g., reviews about the hotel) based on the company's level of control over these touch points (Becker and Jaakkola 2020; Kranzbühler, Kleijnen, and Verlegh 2019). External touch points remain outside of a firm's control, such as customer goals, peer influences, and independent information sources (Lemon and Verhoef 2016). In contrast, internal touch points exist within a firm's immediate reach and control, including company employees, check-in policies, and promotional materials (Becker and Jaakkola 2020; Yachin 2018). According to the customer journey framework, this culmination of experiences is a dynamic process that spans all three consumption stages (i.e., before, during, and after the service purchase) and needs to be carefully managed to ensure a coherent image and positive holistic journey over time (e.g., Becker and Jaakkola 2020; Siebert et al. 2020; Yachin 2018). The pre-consumption stage includes all activities, influences, and searches prior to the actual experience (Lemon and Verhoef 2016). Thereafter, the actual purchase or consumption involves the service delivery, making it the shortest stage (Lemon and Verhoef 2016). Attitudes, behaviors, and perceptions in response to immediate or prior purchases reflect the post-consumption phase; this often feeds into a loyalty loop of consumer loyalty or alternative consideration (Becker and Jaakkola 2020; Siebert et al. 2020). Within the context of tourism, Chon (1990) proposed a traveler buying behavior framework exploring travel experiences. These experiences include stages of primary destination image construction, actual experience, and post-trip evaluation. In the digital era, information source and media touch points represent essential components across all consumption stages of the customer journey (Lemon and Verhoef 2016). Customers utilize media touch points to receive firm information via company-controlled "paid" encounters compared to customer- or peer-driven "earned" encounters (Klein et al. 2020). Paid media include company-driven marketing activities, while earned media reflect external sources such as WOM or consumer reviews (Klein et al. 2020). These earned touch points occur either online or offline. Lemon and Verhoef (2016) suggested that the pre-consumption stage has received less attention in the literature than the actual service delivery stage. Consequently, this study explores the role of negative WOM as an external, earned media touch point in the pre-consumption stage of a travel experience. Specifically, destination image, disconfirmation, satisfaction, and loyalty are examined along the travel experience at a crisis impacted destination.
Expectancy Disconfirmation Model

One of the most commonly studied frameworks assessing consumer post-trip evaluations is the expectancy disconfirmation model (Oliver 1980). Its core premise is the divergence between expectations and the actual experience: the disconfirmation that results from comparing the experience against expectations leads to positive or negative outcomes (Bigné, Andreu, and Gnoth 2005; del Bosque and San Martín 2008; Oliver 1980). Specifically, satisfaction as a post-consumption outcome behavior remains a core concept grounded in the expectancy disconfirmation model (Bigné, Andreu, and Gnoth 2005; Narangajavana Kaosiri et al. 2019). Prior research has acknowledged the importance of considering cognitive and affective components within the disconfirmation framework driving satisfaction and subsequent intentions (del Bosque and San Martín 2008; Narangajavana Kaosiri et al. 2019). However, while research has partially addressed the interplay of disconfirmation and affective as well as cognitive elements leading to satisfaction and loyalty, conclusive findings remain sparse (Bigné, Andreu, and Gnoth 2005). Specifically, considering that customer journeys can consist of positive and negative travel experiences (Siebert et al. 2020), incorporating the perspective of tourism companies managing negative touch points seems essential.

The Role of Negative Touch Points

Few studies have examined the negative influences of touch points, and those that have predominantly focused on influences controlled by a company (Lemon and Verhoef 2016; Rapp et al. 2015). Yet companies do not always remain in control of every touch point and its corresponding outcomes; potential negative ramifications can be especially difficult to manage in these situations (Lemon and Verhoef 2016). One of these uncontrollable influences is WOM, and specifically negative WOM. External information, including WOM from family, friends, and media sources (e.g., social media, newspapers), can influence consumer perceptions and image creation during the pre-consumption stage of the customer journey (Lemon and Verhoef 2016). In addition, findings show that extreme crises negatively impact customer experiences (Assaker and O'Connor 2020; Lemon and Verhoef 2016). Thus, research is needed to examine how negative WOM focusing on crises impacts customer experiences as an uncontrollable, external touch point within the customer journey, and whether negative effects prevail throughout the entire journey. As mentioned by Becker and Jaakkola (2020), the current literature remains unclear about potential additive effects of various external and internal touch points. This research addresses these concerns by incorporating external and internal touch points to examine the overall effect on satisfaction and destination loyalty across various consumption stages within the context of a crisis (Figure 1).

Negative Word-of-Mouth During Pre-Consumption

WOM, defined as information exchange among consumers, influences customer attitudes and behaviors as an informal information source during a traveler's decision process (Hernández-Méndez, Muñoz-Leiva, and Sánchez-Fernández 2015; Nam et al. 2020). Sun, Ryan, and Pan (2015) explored the role of blogging in destination image and concluded that it increased tourist awareness and motivation to travel to a specific destination.
In line with this finding, discussion has centered around the influence of social media on the decision-making process and consumer experience (e.g., Power and Phillips-Wren 2011). Specifically, in the absence of personal experience, consumers seek external information sources such as family and friends as part of their prepurchase search process (Scholl-Grissemann, Peters, and Teichmann 2020). Negative WOM, utilized by customers as earned media, represents an external touch point influencing tourists' experiences during the pre-consumption stage (Klein et al. 2020; Lemon and Verhoef 2016). Moreover, negative WOM, such as unfavorable comments about a destination, greatly influences destination image, suggesting a stronger impact of negative information than positive information (Nam et al. 2020; Reza Jalilvand et al. 2012). Consequently, the interplay of new communication mechanisms, such as negative WOM, and their influence on destination image continues to increase in importance due to the destination's role in shaping tourists' decisions and experiences (Choi, Lehto, and Morrison 2007). Destination image, the sum of beliefs, knowledge, emotional thoughts, and expectations about a destination, plays an influential role in the buying decision process (Chon 1990; Foroudi et al. 2018). From a more defined perspective, destination image encompasses cognitive image and affective image to capture both beliefs and emotional responses toward a destination (Kim, Lehto, and Kandampully 2019). Cognitive image consists of an individual's beliefs and opinions about a destination that are shaped by tangible physical attributes including natural scenery, facilities for activities, and entertainment options (Lin et al. 2007; Stylidis, Shani, and Belhassen 2017). In contrast, affective image represents a person's emotional response toward a destination, which further influences the evaluation and choice of a destination (Stylidis, Shani, and Belhassen 2017). Consistent with previous research (e.g., del Bosque and San Martín 2008; Lin et al. 2007; Tan and Wu 2016; Wang and Hsu 2010), the current study continues the operationalization of two destination image components and incorporates both cognitive image and affective image. As information generated from WOM can be positive or negative, Tasci, Gartner, and Cavusgil (2007) acknowledged that a negative image portrayed by media or family and friends can negatively influence tourists' destination preferences. While companies can implement communication strategies to assist with positive image restoration, events outside of a firm's control make it challenging to repair a tarnished destination image (Avraham 2015). For example, events including natural catastrophes, terror attacks, or financial crises are autonomous image formation agents that can construct a negative brand bias associated with the tourism destination (Tasci, Gartner, and Cavusgil 2007). Therefore, understanding the impact of negative information is crucial for equipping tourism destinations and businesses with effective strategies for responding to crises. One such event is the global financial crisis in Greece and the corresponding negative coverage in international media that led to an uproar in other European countries (Bickes, Otten, and Weymann 2014). Given the wide media coverage, the crisis also remained a frequent topic among individuals.
Negative WOM about a crisis associated with a travel destination, generated from personal and impersonal sources, can further influence cognitive image and affective image. Based on the above discussion, we propose the following hypotheses:

Hypothesis 1.1: Negative word-of-mouth has a negative influence on cognitive image.

Hypothesis 1.2: Negative word-of-mouth has a negative influence on affective image.

Cognitive Image/Affective Image and Actual Consumption

The mental representations or images related to a destination shape expectations and anticipations of the experience prior to the visit (Chon 1990; del Bosque and San Martín 2008). As previously discussed, destination image is often conceptualized as two-dimensional, consisting of cognitive image and affective image. The image formation process outlines how cognitive and affective image influence the anticipation of a traveler's experience prior to the actual visit; thus, the subsequent evaluation of the experience is also affected by cognitive image and affective image (Chon 1990; Reza Jalilvand et al. 2012). Previous studies further established the influence of cognitive and affective image on tourists' pre-consumption, actual experiences, and post-consumption evaluations (Foroudi et al. 2018; Reza Jalilvand et al. 2012; Tasci et al. 2021). Related work posited that both destination images are shaped by local residents at the destination, differing by visitor segment based on emotional solidarity. Considering the customer journey framework, various touch points impact a traveler's overall experience, specifically destination image (Lemon and Verhoef 2016). Based on the expectancy disconfirmation model, disconfirmation results from comparing expectations and actual experiences, where expectations represent an individual's beliefs about an object or event (Oliver 1980; Nam et al. 2020). In tourism, cognitive image and affective image are compared to the actual travel experience in influencing the outcome (Chon 1990; Foroudi et al. 2018). Prior research has established the influence of cognition and affect (e.g., Bigné, Andreu, and Gnoth 2005; Loureiro 2010) and, more specifically, of cognitive as well as affective image (del Bosque and San Martín 2008; San Martín and del Bosque 2008) on tourist expectations and subsequent experiences tied to a destination. However, according to Afshardoost and Eshaghi's (2020, 1) meta-analytical results, the influence of destination image varies "in terms of direction, magnitude, and statistical significance due to variety of the research context, research approach, research strategy, sampling method, and methods for measuring different components of destination image." Consequently, further research is necessary to clarify the effect of cognitive and affective image on the disconfirmation of travel experiences, especially within a crisis context (Afshardoost and Eshaghi 2020). Accordingly, we hypothesize that:

Hypothesis 2: Cognitive image has a positive influence on disconfirmation.

Hypothesis 3: Affective image has a positive influence on disconfirmation.

Disconfirmation and Post-Consumption Behaviors

Within the proposed model, disconfirmation represents the consumption phase of the travel experience, in line with the previously discussed expectancy disconfirmation model (Oliver 1980).
From a tourist's perspective, satisfaction is a "pleasurable fulfillment" resulting from the outperformance of the actual experience in a destination compared to the pre-trip expectation through disconfirmation (Deng and Pierskalla 2011; Oliver 1980, 1999). According to Pestana, Parreira, and Moutinho (2020), individuals rate satisfaction on a continuum ranging from dissatisfaction to satisfaction, reflecting tourists' fulfillment of needs and desires as part of their travel experience. This view of satisfaction reflects its cognitive nature (standard and feedback) and its affective nature (feeling of pleasure), which simultaneously contribute to the overall level of satisfaction (del Bosque and San Martín 2008). Another important influence has been social factors, such as the communications of others, which can impact perceived realities associated with a destination and subsequent satisfaction (Narangajavana Kaosiri et al. 2019). Importantly, satisfaction can be examined after each tourist experience, allowing for a comprehensive assessment within a customer journey (Ribeiro et al. 2018). Prior findings confirmed the influence of actual experiences (i.e., disconfirmation) on tourists' level of satisfaction associated with a service (Narangajavana Kaosiri et al. 2019). Indeed, Petrick (2004) identified disconfirmation as one of the best predictors of satisfaction within tourism research. Disconfirmation also impacts the experience evaluation and positively affects satisfaction by generating positive judgments and feelings of pleasure (Bigné, Andreu, and Gnoth 2005). Furthermore, del Bosque and San Martín (2008) proposed that tourists generally judge their experiences more positively if an experience exceeded expectations (e.g., exaggerating their evaluation). Therefore, disconfirmation of an experience is suggested to lead to higher levels of satisfaction. In the current study, we therefore postulate that:

Hypothesis 4: Disconfirmation has a positive influence on satisfaction.

Destination loyalty remains an important success indicator in tourism as it reflects a positive attitude toward a destination and a commitment toward the tourism service or destination (Li et al. 2020; Ribeiro et al. 2018; Tasci et al. 2021). Often defined as the willingness to recommend or revisit a destination, destination loyalty incorporates behavioral and attitudinal facets post consumption (Ribeiro et al. 2018). Thus, the success of a travel destination is largely dependent on tourists' behavioral intentions, including intentions to revisit and willingness to recommend the destination to others (Ahrholdt, Gudergan, and Ringle 2017). Prior work further posited that intentions to revisit promote the competitiveness of a destination as a sign of success. Multiple studies have incorporated intentions to recommend as a measure of destination loyalty (e.g., Cossío-Silva, Revilla-Camacho, and Vega-Vázquez 2019). Satisfied tourists express destination loyalty by recommending the destination to friends and family members (Sun, Chi, and Xu 2013). These recommendations from family and friends act as a credible information source and, subsequently, assist other tourists in selecting a suitable destination (Yoon and Uysal 2005). With regard to disconfirmation, Bigné, Andreu, and Gnoth (2005) argued that perceived disconfirmation and pleasure, which are satisfaction-mediated factors, also directly impact destination loyalty.
As disconfirmation reflects the positive or negative evaluation of the actual experience, this performance evaluation subsequently affects attitudes and future behaviors (Baloglu et al. 2004). Enjoyable experiences and positive performances lead to positive communications about the experience and future intentions to repeat the visit (Baloglu et al. 2004; Bigné, Andreu, and Gnoth 2005). Hence, we propose:

Hypothesis 5: Disconfirmation has a positive influence on destination loyalty.

Satisfaction as a Mediator

Satisfaction is one of the most significant indicators of tourism experiences as it leads to loyalty (Ahrholdt, Gudergan, and Ringle 2017; Lee, Kyle, and Scott 2012). Empirical evidence suggests that tourists' satisfaction drives destination loyalty due to its impact on destination choice and revisit intentions (Ribeiro et al. 2018). Satisfied tourists are more likely to return to the same destination and are more willing to share their positive travel experience with others (Lee, Kyle, and Scott 2012). Therefore, prior research established a strong relationship between satisfaction and destination loyalty (Ribeiro et al. 2018). Satisfaction's mediating properties on behavioral and attitudinal outcomes, such as loyalty, have also been established within the marketing and tourism literature (e.g., del Bosque and San Martín 2008; Deng and Pierskalla 2011). Ribeiro et al. (2018) discussed the well-established positioning of satisfaction as a mediator between various factors and loyalty. Additional empirical research supported the mediating effect of overall satisfaction on the relationship between destination performance and destination loyalty (Baloglu et al. 2004; Deng and Pierskalla 2011). As satisfaction develops from the disconfirmation of a tourist's actual experience compared to expectations, it mediates the effect of disconfirmation on destination loyalty, indicating immediate post-consumption responses (Loureiro 2010). As a result, we hypothesize that:

Hypothesis 6: Satisfaction mediates the influence of disconfirmation on destination loyalty.

Gender as a Moderator

Gender has been found to be a strong moderator within previous tourism and destination image research. Ribeiro et al. (2018) revealed that gender is one of the most influential drivers in selecting a tourist destination and often determines future purchase behaviors. Generally speaking, previous research positions female tourists as more emotional, socially oriented, interactive, and sensitive to social interdependence than male travelers (Hwang, Han, and Kim 2015; Ribeiro et al. 2018). Moreover, female tourists tend to be more susceptible to external information during the overall decision-making process (Ribeiro et al. 2018). Šegota, Chen, and Golja (2021) confirmed that these differences also prevail in WOM assessments. Huang and van der Veen (2019) identified that gender can explain differences in the image formation of tourism destinations and behavioral intentions. Focusing on loyalty perceptions, Assaker et al. (2015) found that male tourists develop less destination loyalty yet express a strong destination image toward Australia. Meng and Uysal (2008) looked at gendered differences within nature-based tourism settings, revealing significant differences in travel attributes and values between male and female tourists. Finally, Ribeiro et al. (2018) concluded that gender moderates the effect of satisfaction on loyalty, whereby the effect was stronger for male tourists.
The aforementioned discussion leads to the following hypothesis:

Hypothesis 7: Modeled relationships are moderated by gender (male vs. female tourists).

Study Context

The study was conducted on the Greek island of Crete. Tourism has been and remains a key component of the Greek economy, contributing an estimated 15.10 billion Euros to the country's GDP in 2018 alone despite financial challenges (Luty 2020; Thompson 2017). These challenges emerged from the financial crisis in 2007 that caused severe instability across markets and gradually escalated into a global crisis (Abboushi 2011). By 2009, Greece's economy and overall financial standing had drastically declined (Amadeo and Boyle 2020). The country continued to deal with the impact of the global financial crisis until 2018, with the ending of the European Union bailout program (Amadeo and Boyle 2020). In fact, Greece emerged as one of the worst impacted European countries during this crisis, which threatened the viability of the Eurozone and associated trade worldwide (Abboushi 2011; Thompson 2017). Despite Greece being a popular tourist destination, media across Europe have often portrayed the country in a negative light, focusing on its financial difficulties and the need for a European bailout (Papathanassopoulos 2015). Thus, tourists intending to visit Crete are exposed to negative information about the impact of the global financial crisis in Greece. Considering the suggested long-term effects of crises, the continued impact of these negative communications is of particular interest in the present study given the current global situation (Dogru and Bulut 2018; Khalid, Okafor, and Shafiullah 2020). As mentioned earlier, the COVID-19 pandemic has a significant financial impact on countries around the world (The World Bank 2020). Greece, specifically, has spiraled into another economic and financial crisis similar to the financial distress faced during the global financial crisis of 2007-2018 (Hazakis 2021). Exploring how people perceive a destination based on communications about associated crises is important in understanding tourists' experiences related to crisis impacted destinations.

Data Collection and Measurements

The study site was Crete. The destination remains a favorite vacation place for British tourists, who represent 40.0% of the total inbound tourist market of Crete (SETE 2020; Stylos and Bellou 2019). With regard to tourist characteristics, Crete is a popular destination for families with children (42.0%), couples (38.0%), and singles (20.0%) (Marti and Puertas 2017). Traditionally, younger tourists (18-45 = 71.0%) tend to seek out the destination more than older tourists (46+ = 29.0%) (Andriotis 2011; Bellou and Andronikidis 2009). The vast majority of tourists vacation in Crete for between eight and nine days (Nikolopoulou 2019). Data collection included British tourists in various resorts on Crete from September to October 2016. A systematic sampling technique was implemented by approaching every fifth British tourist during check-out at these resorts. Previously trained hotel employees explained the purpose of the study and answered potential questions. Data collection took place seven days a week during that one-month period. The sampling approach focused on native English-speaking tourists to avoid language barriers and potential cultural bias imposed by administering the paper-pencil survey in English (Ford, West, and Sargeant 2015). Upon completing the data collection, a total of 208 surveys were collected.
Once the data were screened for incomplete responses and failed attention checks, 188 valid responses remained. As summarized in Table 1, the sample contained slightly more female (59.0%) than male (41.0%) participants. Most respondents were 18-29 years of age (41.0%), followed by 30-39 years of age (23.4%). The majority of the tourists were either married (43.6%) or single (42.0%). With regard to their current vacation stay, the most common trip length was seven days (55.8%). Therefore, the current sample represents common characteristics of the usual British tourist vacationing in Crete. The paper-pencil survey included various measures representing the constructs of interest reported in the literature and adapted for the specific context of the study. Drawing on previous conceptualizations, six cognitive image items and four affective image items assessed each corresponding construct (e.g., Lin et al. 2007; Papadimitriou, Apostolopoulou, and Kaplanidou 2015; Wang and Hsu 2010). Negative WOM encompassed three items that were adapted from previous WOM and information source scales (Hernández-Méndez, Muñoz-Leiva, and Sánchez-Fernández 2015; Tan and Wu 2016). To more accurately reflect the crisis scope of the current study, an experienced tourism professor in crisis research served as an expert and assisted in the reformulation of the items to accurately capture the context of the financial crisis. All measures utilized 5-point Likert scales ranging from 1 (strongly disagree) to 5 (strongly agree) or 5-point semantic differential scales. The survey concluded with demographic questions. Please see Table 2 for items and the corresponding scale assessment. As the collected data are of a self-reported nature, common method bias (CMB) could pose a threat to the findings' validity. Therefore, a Harman's single-factor test was performed to determine whether the data variance was explained by one single factor (Podsakoff et al. 2003). With the first factor accounting for less than 50.0% of the total variance (i.e., 39.7%), results suggest that CMB likely did not affect the findings of the research. Partial least squares structural equation modeling (PLS-SEM) was the method of analysis. PLS-SEM is a suitable approach considering the relatively small sample size of the current study (Ahrholdt, Gudergan, and Ringle 2017) and the inclusion of two-item constructs (Ahrholdt, Gudergan, and Ringle 2017; Tan and Wu 2016). Furthermore, the method allows for multigroup analysis (PLS-MGA).

Measurement Model Assessment

The analysis first focuses on quality assessment of the measurement model by evaluating internal consistency, indicator reliability, convergent validity, and discriminant validity of the reflective constructs (Hair et al. 2019). Based on Cronbach's alpha and composite reliability (CR) values ranging between 0.81 and 0.91, all values exceed the common cutoff of 0.70, confirming internal consistency reliability (see Table 2). Convergent validity draws on the average variance extracted (AVE) and is supported with values exceeding 0.50 for all constructs (Hair et al. 2019). In addition, all indicator loadings are highly significant (p < .001) and load on their corresponding construct. Lastly, skewness and kurtosis values for all scale items were within the acceptable range (±2.00), indicating approximately normal data distribution (Taheri et al. 2020).
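As a concrete illustration of the reliability and validity statistics reported above, the following Python sketch computes Cronbach's alpha, composite reliability, AVE, and the Harman single-factor share from item-level data. This is not the authors' code: the item names, simulated responses, and loading values are hypothetical placeholders, and in practice these figures would come from dedicated PLS-SEM software.

```python
# Minimal sketch (assumed, illustrative data) of the measurement-model statistics.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)), standardized loadings."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: the mean squared standardized loading."""
    return float((loadings**2).mean())

def harman_single_factor_share(items: pd.DataFrame) -> float:
    """Share of total variance on the first principal component of all items;
    common method bias is a concern if this exceeds roughly 50%."""
    eigvals = np.linalg.eigvalsh(items.corr().values)
    return eigvals.max() / eigvals.sum()

# Simulated 5-point Likert responses for a six-item construct (n = 188):
rng = np.random.default_rng(0)
latent = rng.normal(size=188)
items = pd.DataFrame(
    {f"ci{i}": np.clip(np.round(3 + latent + rng.normal(0, 0.8, 188)), 1, 5)
     for i in range(1, 7)}
)
loadings = np.array([0.78, 0.81, 0.74, 0.80, 0.77, 0.79])  # hypothetical standardized loadings
print(f"alpha = {cronbach_alpha(items):.2f}")
print(f"CR    = {composite_reliability(loadings):.2f}")
print(f"AVE   = {ave(loadings):.2f}")
print(f"Harman first-factor share = {harman_single_factor_share(items):.1%}")
```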
Discriminant validity assessment relies on the Fornell-Larcker criterion (Fornell and Larcker 1981) and the more recently established Heterotrait-Monotrait Ratio (Hair et al. 2019). All squared construct correlations are smaller than the corresponding AVEs, providing support for discriminant validity according to the Fornell-Larcker criterion (Fornell and Larcker 1981). These results are further supported by the Heterotrait-Monotrait Ratio (HTMT) analysis, as all HTMT values are below the conservative threshold of 0.85 (Henseler, Ringle, and Sarstedt 2015), and confidence intervals for each construct pair do not include 1 (Table 3). Overall, measurement model results provide support for reliability and validity.

Structural Model Assessment

Hypotheses tests involve one-tailed tests with a 0.05 significance level and 5,000 bootstrap subsamples. An overview of path coefficients, t-values, p-values, R², and Q² values follows in Figure 2. All path coefficients express significant relationships (all p-values < .001) and are of the expected direction. The structural model evaluation first addresses potential collinearity issues. Results show that all VIF values of the predictor variables are below the conservative threshold of 3.00, with values ranging from 1.00 to 1.90, suggesting the absence of multicollinearity issues (Hair et al. 2019). All R² values are greater than 0.14 and thus exceed the suggested threshold of 0.02, supporting good predictive accuracy (Krey et al. 2019). Furthermore, Stone-Geisser's Q² values for endogenous variables surpass the cutoff value of zero, indicating predictive relevance of the model (Hair et al. 2019). Lastly, assessing f² to measure the magnitude of the effect sizes shows that most variables reflect medium to large effect sizes (0.12-0.94) based on Cohen's (1988) guidelines, where values of 0.02, 0.15, and 0.35 represent small, medium, and large effects, respectively (Krey et al. 2019). With regard to hypotheses assessment, all structural relationships express significance and importance through the magnitude of their standardized values (Table 4). Specifically, findings support all proposed hypotheses. Negative WOM exerts a significant negative effect on cognitive image (β = −0.37, p-value = .000) and affective image (β = −0.43, p-value = .000), supporting H1.1 and H1.2. In turn, both cognitive image (β = 0.37, p-value = .000) and affective image (β = 0.33, p-value = .000) positively impact disconfirmation, consistent with H2 and H3; the effect is slightly stronger for cognitive image. In line with H4, disconfirmation drives satisfaction (β = 0.70, p-value = .000). Similarly, disconfirmation positively influences destination loyalty (β = 0.37, p-value = .000) as proposed in H5. Results also uphold the proposed mediating effect of satisfaction (H6; indirect effect β = 0.30, p-value = .000). Lastly, bootstrapping analysis results for the specific indirect effects, including t-values and confidence intervals (CI), are listed in Table 5. The results indicate that negative WOM does not indirectly influence destination loyalty. The final step of the PLS-SEM analysis involved predictive validity assessment of the PLS path model applying PLSPredict with 10 folds and 10 replications. The root mean squared error (RMSE) values of the endogenous constructs in the model are overall smaller for the PLS-SEM method in comparison to the linear regression (LM) approach. In addition, all Q² values exceed zero, providing further support for the model's out-of-sample predictive power.
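To make the discriminant-validity and mediation checks above more tangible, here is a minimal Python sketch of (a) the HTMT ratio for a pair of constructs and (b) a percentile-bootstrap confidence interval for an indirect effect. Both are simplified stand-ins: the indirect effect is estimated with plain OLS on observed composite scores rather than by resampling the full PLS path model, and all variable names are assumptions for illustration.

```python
# Illustrative sketches (assumptions, not the authors' code).
import numpy as np
import pandas as pd

def htmt(data: pd.DataFrame, items_a: list, items_b: list) -> float:
    """Mean heterotrait correlation over the geometric mean of the average
    within-construct (monotrait) correlations (Henseler et al. 2015)."""
    corr = data.corr().abs()
    hetero = corr.loc[items_a, items_b].values.mean()
    def avg_mono(items):
        c = corr.loc[items, items].values
        return c[np.triu_indices_from(c, k=1)].mean()
    return hetero / np.sqrt(avg_mono(items_a) * avg_mono(items_b))

def bootstrap_indirect(x, m, y, n_boot=5000, seed=1):
    """95% percentile bootstrap CI for the indirect effect x -> m -> y (a*b)."""
    rng = np.random.default_rng(seed)
    n, estimates = len(x), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                        # path x -> m
        design = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][2]   # path m -> y given x
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return float(np.mean(estimates)), (float(lo), float(hi))

# Usage with hypothetical composite scores (the CI should exclude zero if,
# as reported, satisfaction mediates disconfirmation -> loyalty):
# effect, ci = bootstrap_indirect(disconfirmation, satisfaction, loyalty)
```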
Multigroup Analysis

PLS-MGA was administered to assess the moderating effect of gender (male = 77, female = 111) on the previously discussed model (Hair et al. 2019; Taheri et al. 2020). Prior to performing PLS-MGA, metric invariance was tested applying the measurement invariance of composite models (MICOM) procedure (Henseler, Ringle, and Sarstedt 2016; Taheri et al. 2020). MICOM examines configural invariance, compositional invariance, and equal composite mean values and variances. Results of the measurement invariance assessment indicate that full measurement invariance is achieved for gender. Therefore, PLS-MGA can be applied to examine potential gender differences. The PLS-MGA results do not support significant differences between genders across any of the path coefficients. Contrary to H7, gender does not moderate the proposed relationships in the model. Male and female tourists do not express different expectations or outcomes related to travel experiences at a crisis impacted destination.

Discussion and Conclusion

This study explored the influence of negative WOM as an external, earned media touch point in the pre-consumption stage of travel experiences. Furthermore, destination image, disconfirmation, satisfaction, and loyalty were assessed along the travel experience by estimating a structural model using PLS-SEM. The unique crisis context of this research offers insights into the proposed and tested relationships among these key constructs beyond some of the previous literature (e.g., del Bosque and San Martín 2008; Loureiro 2010; Reza Jalilvand et al. 2012). Specifically, these new findings on crises influencing tourist responses prepare DMOs to successfully manage future disasters or long-term effects of crises, such as the aftermath of the current global pandemic, by learning from previous catastrophes (Assaker and O'Connor 2020; Avraham 2015). With regard to negative WOM about a crisis destination in the pre-consumption stage of the travel experience, findings confirm its adverse impact on the cognitive and affective destination image of tourists. These contributions provide further insights into the influence of negative WOM on consumer evaluations and judgments, as prior findings remain inconclusive (Ishida, Slevitch, and Siamionava 2016). Despite negative WOM's influence on destination image, these effects do not negatively impact the actual tourist experience, as confirmed by the current study. Therefore, cognitive and affective image continue to positively influence disconfirmation. These findings relate to prior research by del Bosque and San Martín (2008), who confirm cognitive and affective image's influence on tourists' expectations of destinations, mediating the path to disconfirmation. The current research also takes an extensive look at the customer journey in tourism and the corresponding factors that influence the consumption and post-consumption phases. Specifically, the disconfirmation framework provides a theoretical underpinning to assess how negative WOM tied to a crisis destination impacts a traveler's actual experience. In turn, this experience further influences subsequent post-consumption behaviors.
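For intuition, the sketch below shows one simple way to test whether a path coefficient differs across two groups: a permutation test on an OLS-estimated bivariate path. This is an illustrative simplification, not the PLS-MGA algorithm implemented in PLS-SEM software, and array names such as `disconfirmation`, `satisfaction`, and `gender` are hypothetical.

```python
# Simplified stand-in for a multigroup path comparison: permute group labels
# and compare the absolute difference in bivariate slope estimates.
import numpy as np

def permutation_mga(x, y, group, n_perm=5000, seed=2):
    """Permutation p-value for a group difference in the x -> y slope."""
    rng = np.random.default_rng(seed)
    slope = lambda mask: np.polyfit(x[mask], y[mask], 1)[0]
    observed = abs(slope(group == 0) - slope(group == 1))
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(group)
        exceed += abs(slope(perm == 0) - slope(perm == 1)) >= observed
    return observed, exceed / n_perm

# Example for the disconfirmation -> satisfaction path with gender coded
# 0 = male, 1 = female; p > .05 would match the "no moderation" finding above:
# diff, p = permutation_mga(disconfirmation, satisfaction, gender)
```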
Previous studies have explored the effects of media coverage on the global financial crisis (e.g., Papathanassopoulos 2015); however, the impact on destination image, the actual travel experience, and tourists' attitudes or intentions had remained unexplored. As supported in the present study, positive experiences translate to a satisfactory post-purchase assessment that is accompanied by loyalty intentions. Furthermore, the mediating effect of satisfaction on destination loyalty follows previous research (e.g., del Bosque and San Martín 2008; Deng and Pierskalla 2011; Marques et al. 2021), supporting the importance of creating satisfying and pleasant experiences to foster revisit intentions. Satisfaction, as a comprehensive assessment of a tourist journey (Ribeiro et al. 2018), impacts tourists' behavioral intentions to recommend or revisit. Ultimately, while increased importance should be placed on opinions from friends and family, the wider social network, and online media when it comes to the creation of positive or negative images, the primary focus remains the actual experience at the crisis destination. Finally, a multigroup analysis assesses potential gender differences within the destination crisis context. The results show no differences between male and female tourists across pre-, post-, or actual consumption experiences tied to a crisis destination.

Theoretical Contributions

The current study leads to various theoretical contributions. First, we apply the customer journey framework to the tourism context by focusing on holistic consumption experiences across the three distinct phases: pre-consumption, consumption, and post-consumption. Most importantly, the specific crisis context provides a novel approach to identifying various intersections of engagement between tourists and companies, namely touch points. Second, specifically by integrating negative WOM and actual tourist experiences, this research acknowledges the varying level of control companies have to counter information tarnished by crises. Also, while online WOM such as reviews (cf. Yang, Park, and Hu 2018) and traditional WOM including print or family sources are predominantly examined separately, this study assesses the impact of negative online and offline WOM from both mass media and personal perspectives. Considering the enormous importance of WOM, this study contributes to the literature on the negative effect of media coverage and personal opinions on the recovery of tourism destinations after a crisis. Specifically, WOM is positioned as an external, earned touch point that influences tourists' image formation about destinations prior to actual tourist experiences. In light of COVID-19, the current study provides insights on the impact of negative WOM compared to actual experiences in diminishing the unfavorable image of destinations suffering from crisis hardships. Third, while disconfirmation measures the evaluation of the actual experience, satisfaction provides the immediate post-consumption assessment. This research extends knowledge on satisfaction and confirms a mediating effect of satisfaction on the relationship between disconfirmation and destination loyalty. Therefore, findings highlight the importance of managing each touch point in the customer journey to capitalize on its full potential.
Examining specific indirect effects allows a deeper assessment of satisfaction's importance among the proposed relationships beyond previous research (e.g., del Bosque and San Martín 2008; Deng and Pierskalla 2011). As no indirect effects are confirmed between WOM and loyalty, the necessity of satisfaction as a precursor to destination loyalty is solidified. In addition, this study confirms the significant impact of disconfirmation on satisfaction and destination loyalty. Our findings contradict an earlier study by del Bosque and San Martín (2008), who failed to support the relationship between disconfirmation and satisfaction within a Spanish tourism context. Considering our research, it is evident that the crisis and negative information sources contribute to the importance of disconfirmation as part of the consumption image formation process. In this particular setting, images were unfavorable due to negative WOM pre-trip exposure. As a result, tourists may have expressed more positive perceptions of the actual experience than during a usual vacation. Finally, we extend knowledge on gender differences within the context of crisis impacted destinations and identify crises as an equalizing force in eliminating gender differences. These findings are novel considering that previous research (e.g., Huang and van der Veen 2019; Hwang, Han, and Kim 2015; Ribeiro et al. 2018) has supported a moderating effect of gender within destination loyalty studies. However, most of the prior empirical findings remained outside of a crisis scope, which could be one explanatory factor for the current implications. This suggests that crisis situations equalize potential gender influences in travel behaviors. Consequently, we contribute to the literature on gender differences in the travel industry by revealing that destination loyalty and satisfaction following tourists' travel experiences, as well as negative WOM and destination image formation, remain free of gender influences within the context of crisis impacted destinations.

Managerial Implications

Previous research provided insights on the positive effects of WOM or other personal information on tourist experiences. However, while marketers keep investing resources in promoting destinations, uncontrollable, negative information can influence the pre-trip image and actual tourist experiences. Most importantly, tourist destinations can be further impacted by natural and man-made crises, adding another level of uncertainty DMOs have to manage (Avraham 2015; Lim 2021; Xiang, Fesenmaier, and Werthner 2020). Therefore, companies should consider non-commercial information from both public and personal sources in influencing visitors' attitudes and destination choices. Our findings show that DMOs need to focus particularly on strengthening media coverage and building a strong social media presence to ensure that tourism-"unrelated" news does not impact the actual decision to travel to the tourism destination. Moreover, visitors' pre-trip expectations, negative or positive, play a critical role in evaluating the actual experience. As DMOs have no control over these external touch points in the pre-consumption stage, meeting or exceeding existing expectations and changing tourists' future expectations relies on the performance of internal touch points controlled by companies. Thus, companies need to carefully monitor their interactions with customers before, during, and after consumption to create long-lasting, positive customer journeys.
For DMOs and tourism and hospitality businesses, this offers opportunities to overcome challenges posed by negative WOM. While negative WOM can represent information related to the destination in general, companies can still change a tourist's evaluation of the actual experience. Exceeding expectations can help create a positive image, satisfy tourists, and, consequently, foster intentions to return and recommend the destination. Lastly, since the COVID-19 pandemic is replicating the financial recession of the 2007 global financial crisis, managers can learn from these crisis insights and apply strategic responses combating negative WOM related to crisis impacted destinations in the future.

Limitations and Future Research

As with any study, the current research reflects some limitations. The crisis scope and data of the present research represent tourist behaviors in Greece influenced by the global financial crisis from 2007 to 2018. As such, data collection and analysis were completed prior to COVID-19's global impact. While these findings contribute to the general knowledge of dealing with crisis situations, further research is recommended to validate the current model once the prolonged economic impact of COVID-19 on the tourist industry can be empirically assessed (Xiang, Fesenmaier, and Werthner 2020). Replicating the study during or after the COVID-19 pandemic might reveal differences associated with travel behavior, as would be the case with any crisis. Therefore, the robustness of the present study should be expanded by incorporating additional crises such as natural disasters and terrorism, as well as the timing of these crises (i.e., beginning, during, or right after a crisis). Differentiation between natural and man-made crises would provide further insights on how negative information impacts tourism. Another limitation is the focus on British visitors during the data collection, in addition to the relatively small sample size. These factors contribute to the limited generalizability of the current findings beyond the scope of this study. Therefore, additional research should incorporate more diverse samples to identify potential deviations across cultures in responding to negative information and adjusting behavioral destination preferences. Furthermore, theoretical limitations relate to the current model not including motivational considerations beyond negative WOM that influence the selection of a crisis impacted travel destination in the first place. Future studies can expand the model by exploring whether push and pull motivations, such as intrinsic desires and local attractiveness (Hsu, Cai, and Li 2010; Yoon and Uysal 2005), explain prepurchase decision-making processes within the context of crisis destinations. Another approach could be the inclusion of emotional solidarity between residents and tourists in explaining destination loyalty. The crisis context could further amplify the affective bond between these parties, especially if multiple touch points over time encompass the customer journey before, during, and after the crisis. These findings would offer implications on how to draw customers to a destination impacted by a crisis and influence the decision-making process; a valuable extension of the current model in light of the current global pandemic once travel restrictions are lifted (Lederer 2021; Xiang, Fesenmaier, and Werthner 2020). Considering destination loyalty, previous findings suggest behavioral, attitudinal, or composite assessment (Tasci et al. 2021).
Future research could expand the current model's loyalty conceptualization by incorporating a longitudinal perspective focusing on past loyalty behavior in addition to current loyalty. Loyalty development also differs between international and domestic tourists due to ethnocentrism or traditionalism (Tasci et al. 2021). Future studies should assess the current model with a domestic visitor sample to further generalize the current findings. Also, comparing first-time with repeat visitors could provide interesting insights into responses to negative WOM and crises. While the current study focuses on negative WOM as a source of information, additional information technology should be considered to broaden the scope of future research. For example, replacement vacations for high-risk countries could be offered via immersive technologies, such as augmented or virtual reality (AR/VR) devices. These new technologies would allow consumers to "travel" to high-risk or remote locations without having to leave the comfort of their homes. For DMOs, AR technologies could provide an additional touch point within the customer journey that can positively impact tourists' preferences and decision-making behaviors in the pre-consumption stage. Further research is needed to evaluate the impact of these technologies within the customer journey framework for tourists. The current research offers novel findings on how to approach crisis communication from external, uncontrollable sources. Considering the current global COVID-19 pandemic and the associated financial crisis that offers similarities to the global financial crisis in Greece, it becomes apparent that successful DMOs need to be able to manage and adapt to a changing tourism environment. The current research suggests that tourists still visit a destination when WOM about the destination is negative, even following a crisis. DMOs approaching customer journeys in tourism after the current pandemic, and after any future crisis that might bring about additional change, can utilize these insights. Managers should focus on delivering a positive experience at the destination no matter what information customers might be exposed to during the pre-consumption phase.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.
2021-10-29T15:19:01.896Z
2021-10-27T00:00:00.000
{ "year": 2021, "sha1": "eeb4f21d3f26739722cd537a577413a5a9411b43", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/00472875211053657", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "b03a9d022e06a3bddffc18e62c4a5a57ff866a2a", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Medicine" ] }
252823735
pes2o/s2orc
v3-fos-license
Rib Fractures in Professional Baseball Pitchers: Mechanics, Epidemiology, and Management Abstract Pitching is a complex kinetic chain activity requiring the transfer of energy from the lower body, through the core and trunk, and finally through the arm to generate explosive acceleration of the baseball. As a result, large forces are generated in the trunk musculature and rib attachments from the late cocking phase of pitching through deceleration. The repetitive cumulative load and high pitch velocities put professional pitchers at risk of rib stress fracture. Given the potential for a prolonged recovery course and high rate of recurrence, early recognition of rib bone stress injury is critical to optimize care. Identifying torso strength imbalances, suboptimal pitching biomechanics (such as late or inadequate pelvic rotation), as well as metabolic deficiencies that may adversely affect bone health is essential to expedite safe return to play and prevent future injury. In this review, we discuss risk factors, mechanism of injury, typical clinical presentation, and diagnostic imaging findings, and propose treatment and prevention strategies for rib stress fractures in overhand pitchers. Introduction Bone stress injury (BSI), including stress fracture, is common in sports medicine, with a reported incidence between 1.4% and 4.4% of athletes. 1,2 They most commonly occur in the weight-bearing bones of the lower extremity. Upper extremity bone stress injuries are less common, but have been reported in many different athletes, including rowers, weightlifters, gymnasts, swimmers, golfers, and pitchers. [3][4][5][6] Bone stress injuries in throwing athletes generally occur around the shoulder girdle and trunk, and have been reported in the clavicle, ribcage (primarily 1st rib), humerus, olecranon, and ulna. 5 Case reports of rib stress injuries in pitchers indicate that the first rib may be the most common site in the ribcage, with injuries at the lower ribs less commonly reported. [7][8][9] Unlike bone stress injuries of the lower extremity, which are often related to repetitive impact from running and jumping, stress fractures in the ribs are thought to be due to repetitive trunk muscle contraction leading to tensile, angular, and torsional stresses on the bone. Over time, these forces can result in cumulative microstructural damage that produces a stress injury at the muscle-bone insertion. Clinically, this pathophysiology coincides with an insidious presentation, with slow progression over time. Athletes generally describe the pain as vague discomfort in the shoulder and upper thorax that may only occur during a brief phase of pitching, making localization difficult. Given the obscure presentation and greater incidence of other shoulder/truncal injuries in pitchers such as rotator cuff pathology, labral tears, shoulder impingement, and intercostal muscle strains, these injuries are often misdiagnosed in the early stages. In the authors' experience treating major league baseball players, however, rib stress fractures are likely underrecognized due to the difficulty of diagnosis and may be more common than currently reported in the literature. Compared to intercostal and abdominal oblique muscle strains, major league baseball pitchers who sustain rib stress fractures have a prolonged recovery, generally requiring 8-10 weeks before returning to play, versus an average of 5 weeks for muscle strains. In normal bone remodeling, osteoclastic resorption proceeds for roughly three weeks, after which refilling of the resorption cavities by osteoblasts occurs.
Replacement, however, is a slow process, which may take months to complete. If microstructural damage occurs at a rate faster than repair can take place, a bone stress injury and potential fracture results. 5 Over a season, major league pitchers average 2800 and 1200 in-game pitches for starters and relievers, respectively. When including warm-up pitches, spring training, and off-season training, this number is easily doubled. The increased volume of pitches likely puts starting pitchers at greater risk for rib stress fracture compared to relieving pitchers, although formal epidemiological studies have not yet been performed. Further research is also needed to determine whether sidearm versus overhead throwing, and pitch type (curveball, fastball, changeup, etc.) place a pitcher at increased risk of stress fracture. Stress injuries of the first rib have been the most commonly reported rib stress injury in pitchers. 5,8,9,24 In a study of 24 first rib stress fractures in overhead throwing athletes, three types of rib fractures were discovered. 24 The majority of the fractures (75%) occurred at the attachment of the middle scalene muscle (intrascalene type), while 12.5% were located at the subclavian artery groove (groove type) and 12.5% occurred posteriorly near the costovertebral articulation (posterior type), suggesting differing mechanisms of injury. Of note, 20% of these injuries occurred on the side of the nonthrowing arm. Further investigation is needed to understand the mechanism of injury of the nondominant arm. Indwelling, fine-wire EMG analysis performed during pitching demonstrates activity of the serratus anterior during late cocking, reaching maximal activity during acceleration, and continued activity through the follow-through phase. 20 During these phases, the serratus acts as a scapular stabilizer, resulting in a downward force at its point of origin on the first rib. In contrast, the anterior and middle scalene muscles originate from the cervical transverse processes and insert onto the superior aspect of the first rib. In addition to their function in neck flexion and rotation, they also elevate and, in effect, counteract depression of the first rib. It is hypothesized that these repeated opposing forces from the serratus anterior and scalene muscles result in the majority of first rib stress fractures (groove and intrascalene types). 25 Additionally, the first rib is vulnerable at the subclavian artery groove, where the bone is thinnest and, therefore, mechanically weakest. 8 Posterior type first rib fractures are hypothesized to have a different mechanism of injury, in which inferior and posterior translation of the clavicle during arm abduction and external rotation (such as during the late cocking phase of pitching) can lead to a posterior force on the first rib. 26 Further biomechanical research is needed to better define this mechanism.

Table 1. Phases of the pitching motion and associated trunk and shoulder girdle muscle activity.

Stride: Internal/external abdominal obliques and rectus abdominis eccentrically contract to prevent excess lumbar extension. Serratus anterior and trapezius retract and cause upward rotation of the scapula to properly position the glenoid for external rotation.

Late cocking: Begins with lead foot contact; ends with maximal external rotation of the throwing shoulder. Energy is transmitted from the lower body to the trunk/shoulder. Abdominal obliques and rectus abdominis eccentrically contract to prevent excess lumbar extension. Trapezius and serratus anterior (SA) work to position/stabilize the scapula; SA reaches maximal activity. Pectoralis major and latissimus dorsi eccentrically contract to stabilize the humeral head within the glenohumeral joint.

Acceleration: Begins at maximum shoulder external rotation; ends at ball release. The trunk continues to rotate and tilt, transferring additional potential energy into the upper extremity. Anterior/middle scalenes eccentrically contract to stabilize the head and prevent overextension of the neck. Pectoralis major and latissimus dorsi concentrically contract (with subscapularis) to create explosive internal rotation of the shoulder. Serratus anterior exhibits continued high activity as it protracts the scapula, positioning the glenoid for humeral rotation. Nondominant-side rectus abdominis and internal oblique, along with dominant-side external oblique, contract to further rotate the trunk and cause lumbopelvic flexion.

Deceleration: Begins at ball release; ends at maximum shoulder internal rotation. Posterior shoulder musculature exerts large eccentric contractile forces to slow down arm adduction and internal rotation. The trapezius, rhomboids, and serratus anterior stabilize the scapula during deceleration of the shoulder girdle.

Follow-through: Body continues to move forward until the arm has ceased motion. Decreased joint loading and minimal forces make this phase an unlikely cause for injury. Similar activity as in deceleration, with less force.

Though lower rib BSIs are common in rowers (ribs 4-8 accounting for around 80%), stress injuries of ribs 2-12 are rarely reported in pitchers, but may be largely underrecognized. 4,7,[27][28][29] Case reports describe injuries at ribs 7-9, as well as the floating ribs (ribs 11-12). 28,29 Bone stress injuries occurring between ribs 7-9 have been attributed to the opposing anterior/caudad rotational force of the external oblique muscles and the posterior/cephalad force of the serratus anterior. 28,30 Supporting this hypothesis, cadaveric studies of the serratus anterior demonstrate maximal tensile load at the posterolateral rib, consistent with the location of most stress fractures. 30 Pain during the late cocking and early acceleration phases, where the serratus is most active, further supports this theory. 28 Rib fractures of the floating ribs have been described at their distal, non-articulating ends. Anatomically, the external oblique muscle arises from the external and inferior surfaces of the lower eight ribs, where it interdigitates with the latissimus dorsi and serratus posterior inferior on ribs 9-12. 31 The traction forces from these opposing muscles are hypothesized to be the mechanism of these distal floating rib fractures. 29 Risk Factors Currently, there is no literature investigating risk factors for rib bone stress injury; however, we can extrapolate risk factors from the more commonly affected areas that have been studied. Nonmodifiable risk factors include family history, prior history of bone stress injury, and Caucasian ethnicity. [32][33][34][35][36] Although no studies have been performed on specific genes involved in stress fracture, a family history increases the risk of athletes sustaining stress fracture, suggesting a genetic component. 36 In a prospective study investigating sex-specific risk factors for tibial stress fractures, prior fracture was the strongest predictor of stress fractures regardless of sex. 32
Modifiable risk factors can be broken down into biomechanical factors (such as overhead vs sidearm throwing, and pitch count) and biochemical factors. Overhead pitching mechanics may increase the risk of rib bone stress injury relative to sidearm throwing. 37 When reviewing the literature for pitching injuries more broadly, starting pitchers have a higher incidence of shoulder injuries when compared to relief pitchers. 38 Additionally, starting pitchers typically throw four different types of pitches compared to relief pitchers, who have one or two. The variety and volume of pitches expected from a starting pitcher may factor into an increased risk of bone stress injury. Studies in rowers have also shown that the incidence of rib stress fracture is higher in sweepers (rowers with one oar) compared to scullers (two-oared rowers). 39 This may imply that one-sided activity, such as pitching, or muscle imbalances associated with one-sided activity, may predispose athletes to sustain fractures. Deficits in the kinetic chain (upper extremity, lower extremity, and core) can play a role as well, as improper transfer of energy in one phase of pitching can cause compensatory changes that increase strain on other tissues. 14,40 External factors include a low aerobic fitness level prior to training, tobacco use, and a high intensity of physical training. 41 In studies of rowers, a higher level of performance has been shown to correlate with higher rates of rib bone stress injury. 27 This may also be the case with elite-level pitchers, but epidemiologic studies do not yet exist. Biochemical risk factors for rib bone stress injury include vitamin D and vitamin C deficiencies, iron deficiency, and low estrogen levels. Serum 25(OH) vitamin D concentrations below 30 ng/mL have been specifically associated with stress injuries. Although calcium supplementation has been shown to improve bone mineral density, there is no strong evidence of a correlation between calcium deficiency and stress fractures. Potassium and vitamin C intake from fruits and vegetables has also been implicated in improving bone mineral status in adolescents, independent of calcium. 42 Many of these biochemical risk factors may be part of the clinical entity termed relative energy deficiency in sport (RED-S), which refers to impaired physiological function (including metabolic rate, bone health, immunity, protein synthesis, and cardiovascular health) due to energy imbalance, with a deficiency of caloric intake relative to output through exercise. 43 RED-S has been shown to be an independent factor in poor bone health due to decreased IGF-1 and bone formation marker levels, which can increase the risk of developing bone stress injury. 44 While more common in females, men can also present with this syndrome, and it should be considered when evaluating any bone stress injury. Clinical Presentation Although some athletes may describe a clear onset, such as a popping sensation while pitching, the majority of pitchers who present with rib stress injury describe an insidious onset of discomfort on their dominant throwing side, without obvious insult. 9,24 This discomfort may be described as a nonspecific ache at the base of the neck, shoulder, posterior arm, upper thoracic, or interscapular regions, making localization difficult. 29 A large proportion of these athletes will describe provocation of pain while pitching (resulting in decreased pitch control and velocity) or swinging a bat. 8,24,28
Notably, the pain while pitching often occurs during the late cocking and early acceleration phases, coinciding with the time of peak serratus anterior activity. 20 Worsening pain with inspiration is also a common complaint. 9,24,28 Interestingly, a history of change in pitching mechanics, training intensity, or volume is often absent. 9,28 Given the vague history, these injuries are difficult to diagnose, and a differential diagnosis should include other truncal pathologies such as intercostal or abdominal muscle strain, serratus anterior muscle strain, myofascial pain, costochondritis, Tietze syndrome, slipping rib syndrome, costovertebral joint dysfunction, intercostal neuralgia, thoracic spine dysfunction (discogenic or radicular pain), and shingles. 11 Physical Examination The athlete should be examined with the torso exposed. Most often there is no visible abnormality; however, swelling and/or rib deformity can rarely be seen at the site of stress fracture. 28 Structural abnormalities that can lead to abnormal pitching mechanics should be assessed, including muscle hypertrophy vs atrophy, thoracolumbar scoliosis and kyphosis, scapular malalignment, and scapulothoracic abnormal motion. 45 Evaluation of the athlete performing a pitch (live or video footage) can be helpful as well, with attention to improper mechanics, such as late pelvic rotation or early trunk rotation, which can cause less efficient transfer of energy and compensatory changes that increase trunk muscle strain and rib bone stress. 40 Most patients with rib bone stress injuries have tenderness, which can be pinpoint over the area of fracture or more diffuse in nature. 8,28 In pitchers, special attention should be given to the first rib, as it may be affected more frequently than the remaining ribs. 5,8,9,24 The first rib can be palpated in the supraclavicular fossa, lateral and posterior to the sternocleidomastoid, where the middle scalene attaches to the first rib and where the majority of first rib stress fractures occur. 24 Palpation should be directed inferiorly given the course of the first thoracic rib behind the clavicle (Figure 1A). The posterior first rib can be palpated by moving laterally from the C7 spinous process and pressing deep to the trapezius muscle (Figure 1B). In contrast to first rib stress fractures, stress injuries of ribs 6-10 generally occur posterolaterally, while stress fractures of the floating ribs (ribs 11-12) typically occur anteriorly at their distal tip due to the mechanisms described earlier. 7,28,29 Tenderness at these specific locations should increase suspicion for stress fracture and the need for diagnostic imaging. Most athletes with bone stress injuries of the ribs will have full active and passive range of motion in the shoulders and cervical spine, but those with first rib fractures may have pain with certain shoulder movements, especially abduction greater than 90 degrees. 9,18,46,47 Thoracic spine rotation and lateral flexion may also provoke pain in rib fractures, whereas pain with forward flexion is more suggestive of discogenic back pain. Activation of musculature that attaches to the injured rib may cause pain as well, for example, activation of the serratus anterior with scapular protraction in injuries to ribs 1-8 (Table 1). More specifically, placing the cervical or thoracic spine in the theorized position of injury, and then proceeding with specific resistive testing of the associated musculature, is often the most effective method of eliciting the athlete's pain.
For example, if a right-handed pitcher complains of pain in the right first rib during the late cocking phase of pitching, have them sit on the table and position their cervical spine in left lateral flexion and rotation, while the thoracic spine is placed in right rotation and extension to match the positioning of the spine during this phase. Resisted neck flexion (testing the anterior scalenes) and resisted posterior translation of the humerus with the shoulder flexed to 90 degrees (testing the serratus anterior) in this position maximize force on the first rib and are more likely to generate pain and aid in diagnosis than isolated muscle testing alone. Pain with deep inspiration is also a common finding. 9,24,28 A full neurovascular exam of the bilateral upper extremities, including strength and sensation testing, should also be performed, as vague exertional upper extremity pain may be due to peripheral nerve entrapment or vascular etiologies such as peripheral vascular disease, deep vein thrombosis, or thoracic outlet syndrome. 45 Strength and sensory exams, as well as tests for shoulder impingement, biceps pathology, and labral injuries, should all be normal. 8,28 Diagnostic Imaging Radiographs Imaging often begins with PA (posteroanterior) chest and oblique rib radiographs. 47,48 However, given the difficulty localizing pain in rib stress injuries, shoulder and cervical spine radiographs are often obtained at the initial workup. Funakoshi et al demonstrated that 46% of the first rib is visible on a shoulder x-ray, whereas 97% is visible on cervical spine x-rays. Therefore, the authors recommend cervical spine and chest x-rays as initial screening for suspected first rib stress fracture. 24 The earliest sign of a stress fracture on conventional radiographs is the "gray cortex" sign: focal "graying" or lucency of the cortical bone. 49 As the injury progresses, the area of stress reaction will coalesce into a lucent intracortical fracture line or even a displaced fracture. 50 Faint linear foci of sclerosis may also be visualized, representing microcallus formation. 50 Given the subtlety of these findings, early radiographs are commonly negative until several weeks after injury, when periosteal reaction and callus formation occur, which can be seen as a hazy opacity around the area of bony injury (Figure 2A and B). 51,52 Extrapolating from the lower extremity stress fracture literature, x-ray has a sensitivity of roughly 28% at initial presentation, increasing to 54-80% at follow-up (2-6 weeks later). 53,54 Radiographic changes such as focal sclerosis and periosteal bone formation occurred, on average, 25 days after the onset of symptoms. 54 Reviews of traumatic rib fractures also demonstrate poor sensitivity, with chest x-rays missing over 50% of fractures. 55,56 Therefore, initial negative x-rays, and even negative follow-up x-rays, do not rule out a rib bone stress injury. If suspicion remains high for rib fracture, advanced imaging is indicated. Ultrasound Sonographic findings may include fluid collection adjacent to the rib, with increased vascularity, as well as periosteal elevation, subperiosteal hematoma, and increased posterior shadowing. Color Doppler ultrasound is also potentially beneficial in detecting increased vascularity around the fracture callus, reflecting a healing fracture. 57
Studies evaluating the ability of therapeutic ultrasound to diagnose tibial stress fractures have shown variable results, with sensitivity ranging from 81.8% to 86% and specificity ranging from 66.6% to 77.27%, as compared to MRI and bone scintigraphy. 58,59 Therapeutic ultrasound (TUS) involves applying a frequency of up to 3 MHz to the site of a suspected fracture. The patient reports on the level of pain perceived at each intensity, and the result is compared with the unaffected side. The positive predictive value ranged from 41% to 99% and the negative predictive value ranged from 13.4% to 51% (how predictive values depend on pretest prevalence is illustrated in a worked sketch later in this imaging section). 60 Ultrasound has advantages due to its ease of access without exposure to radiation; however, it is not as sensitive as MRI for assessing stress fracture, and the diagnostic quality of ultrasound is user dependent. Anatomically, the first rib is also difficult to visualize. Additionally, retroscapular ribs and the infraclavicular portion of the first rib are difficult to access via sonography. 61 CT Scan There are limitations to consider when assessing ribs via chest computerized tomography (CT). A standard CT technique is oblique to the anatomic long and short axes of the rib, which makes interpretation of fractures difficult. However, the use of angulated thin-section helical CT offers the possibility of obtaining true axial slices of any selected rib, allowing for a view analogous to those obtained for long tubular bones. 62 Stress fracture findings include an intramedullary area of bone sclerosis or osteolysis. In lower limb stress fractures, sensitivity was 32% and specificity was 98%. 63 However, Gaeta et al found that CT can be superior to MRI (51% vs 41%) when analyzing cortical abnormalities such as osteopenia, resorption cavities, and striation, which may be early lesions preceding a stress fracture. In the United States, the annual effective dose from background radiation averages 3 mSv/y, ranging from 1 to 10 mSv. 64 The radiation from a typical chest CT is 7 mSv, although a thin-sliced, focused study over the area of suspected pathology can limit radiation exposure. 65 Given the radiation exposure and decreased sensitivity, CT remains inferior to MRI and bone scintigraphy for the detection of stress fractures. 63 Magnetic Resonance Imaging (MRI) MRI of the chest wall (CW) can detect bone edema and lower grade injuries without cortical fracture. 66 The ideal sequence includes short tau inversion recovery (STIR) or fat-suppressed T2-weighted images. A T1-weighted image depicts anatomy but does not detect edema as effectively. Typical MRI features include rib periosteal or adjacent soft tissue edema and band-like bone marrow edema (Figure 2C and D). MRI is also able to rule out other sources of bone or soft tissue pain. The best field of view for the first rib, specifically, is in the axial and sagittal planes. Coronal views are also useful for comparing to the contralateral side, improving visualization of subtle changes. A small field of view in the area of suspected injury can provide a more targeted evaluation; however, there are no studies comparing different fields of view for rib injuries. Though 1.5 Tesla (T) MRI has been shown to be comparable to 3T MRI in the assessment of BSI in the foot, the authors recommend 3T chest wall MRI specifically for rib BSI. 67 Anecdotally, this approach provides the highest quality visualization of the ribcage, especially of subtle bone marrow edema.
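The sensitivity, specificity, and predictive values quoted throughout this imaging section are linked through the pretest prevalence of injury via Bayes' rule, which is one reason the reported predictive values vary so widely between study populations. A minimal sketch of that relationship in Python, using illustrative test characteristics rather than values from any of the cited studies:

# Predictive values from sensitivity, specificity, and pretest prevalence.
# The test characteristics below are illustrative only, not taken from the
# studies cited in this review.
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    tp = sensitivity * prevalence                # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    tn = specificity * (1 - prevalence)          # true negatives
    fn = (1 - sensitivity) * prevalence          # false negatives
    return tp / (tp + fp), tn / (tn + fn)        # (PPV, NPV)

# Example: a modality with 85% sensitivity and 70% specificity.
for prev in (0.10, 0.50):
    ppv, npv = predictive_values(0.85, 0.70, prev)
    print(f"prevalence={prev:.0%}: PPV={ppv:.1%}, NPV={npv:.1%}")

The same test that looks unconvincing in a low-prevalence screening setting can perform well when applied to pitchers whose history and exam already make a rib stress injury likely, which supports reserving advanced imaging for athletes with a suggestive presentation.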
There are accepted MRI classification systems (ie, Fredericson and Arendt) that can identify the severity of injury and help guide expectations for return to play. 68,69 However, it is difficult to extrapolate these leg-based systems to the ribcage, as the mechanics and load patterns vary significantly. In the authors' experience, rib stress injuries with cortical disruption and fracture lines extending across one or both cortices portend a less favorable prognosis and longer recovery. Contrast studies are sometimes used to differentiate stress fracture from pathological fracture; however, contrast studies are not necessary in the great majority of rib BSI in sport. 70,71 Zero Echo Time (ZTE) MRI has been studied for its clinical relevance in assessing osseous features. A study of ZTE MRI in the shoulder found that the majority of ZTE images provided superior visualization of osseous features when compared with CT. 72 In the setting of bone stress injury, MRI remains superior to CT; therefore, more studies of ZTE are required to assess its clinical superiority over other imaging modalities. Comparing MRI with bone scintigraphy in lower extremity stress fractures, the sensitivity of MR imaging was 100% and the specificity was 86%. 73 Fredericson et al performed radiographs, scintigraphy (technetium bone scan), and MRI scans in runners with symptomatic leg pain and revealed that the exact anatomical region of the lesion could be defined more precisely by MRI than by scintigraphy. 68 Given its high sensitivity and specificity, MRI is considered the gold standard imaging modality in the identification of bone stress injuries. Bone Scan Historically, technetium bone scanning was an imaging modality of choice in the diagnosis of bone stress injuries due to its high sensitivity, with the ability to detect stress fractures as early as 7 hours after injury. 74 In addition, the high sensitivity meant that lack of uptake on bone scan beyond 3 days likely excluded a fracture. 74 Radiopharmaceutical uptake at any area of active bone turnover, however, can also lead to many false positives. 75 These asymptomatic areas likely represent normal remodeling of bone due to stress. In addition, bone scans expose patients to radiation of approximately 4.2 mSv. 76 This can be compared to 7 mSv for a standard-dose chest CT examination and an average annual effective dose from background radiation of 3 mSv per year. 64,77 Given the risk of false positives and the radiation exposure, MRI has supplanted bone scan as the preferred method of diagnosis. Treatment Similar to other bone stress injuries, treatment of rib bone stress injury begins with a period of relative rest followed by a gradual return to sport. 11 Athletes should be restricted from motions that can increase load on the injured rib, such as throwing, batting, and weightlifting. Patients with pain provoked by deep breathing should also restrict activities that lead to increased ventilatory demands, such as cycling, as the muscles of inspiration can place additional strain on the ribs. If pain control with relative rest is not adequate, additional pain control can be achieved with ice, acetaminophen, and nonsteroidal anti-inflammatory drugs (NSAIDs). Supportive taping of the ribs can also be used to decrease pain by reducing excursion of the ribs during inspiration and upper extremity movements. 78,79 Similarly, rib belts can be an easy and effective tool in transitioning the athlete to higher levels of activity early in the recovery process.
Supportive devices should be discontinued as soon as the athlete is asymptomatic to ensure that restoration of normal trunk mobility and strength can be achieved during the rehab process. Some physicians prefer to avoid NSAIDs given animal studies showing impaired fracture healing; however, these animal studies involved high doses of NSAIDs for prolonged periods of time. 80,81 Retrospective and prospective studies in humans have shown that short courses of NSAIDs are not detrimental to fracture healing, and thus a short course of NSAIDs, as needed, for up to 14 days is likely safe after fracture. 80,82,83 It should also be noted that pain is a good gauge of when an athlete can increase their physical activity. Therefore, care should be taken to avoid masking pain, which could lead to an overly rapid return to sporting activities and prolonged disability. As soon as pain-free deep breathing is achieved, generally 2 days to a week after initiating rest, a staged rehabilitation program can expedite return to play compared to prolonged rest. 78 Training generally begins with nonimpact aerobic exercise such as stationary biking without use of a grip, aquatic therapy, arm and leg cycling, and zero-gravity running to maintain cardiovascular endurance. 39,84 Once low-impact activities can be performed for prolonged periods without pain, higher impact activities such as running and jumping can be incorporated. 85 At this point, athletes can also begin specific exercises aimed at preventing future injuries. It is important to remember that the trunk musculature works both concentrically and eccentrically during the process of overhead throwing, and that inadequate control of deceleration can be an underlying component of the athlete's injury. Consequently, eccentric muscle conditioning is an important component in promoting trunk stability and reinforcing neuromuscular control throughout the throwing motion. Athletes with groove-type fractures of the first rib should undergo stretching of the scalene muscles to decrease strain on the first rib, while those with lower rib stress fractures should undergo core strengthening and stretching of the thoracic musculature, such as the serratus anterior and latissimus dorsi, in an attempt to balance opposing forces on the lower ribs. 26,28 In addition, strengthening of specific muscles, such as the serratus anterior, has been suggested in the literature. 86,87 This seems counterintuitive, given that several of these fractures are partially attributed to forces generated by the serratus anterior. 88 Although there are no trials comparing specific strengthening of the serratus anterior versus a generic rehabilitation program, it could be that strengthening of the muscle balances opposing forces and therefore neutralizes load on the affected rib. After a couple of weeks, a slow return to sport-specific exercise can be initiated, including fielding drills, batting, and a throwing program. Pitching mechanics should be evaluated at this time to fix any deficiencies, such as decreased hip internal rotation (IR) during the wind-up phase or a lack of hip IR in the landing leg during follow-through. Similarly, a lack of hip external rotation in the driving leg can impact the timing of appropriate pelvic rotation. Late pelvic rotation during the stride phase can lead to compensatory truncal muscle use and rib strain. 40
In addition, assessment of the range of motion of the thoracic spine is also important, given the high rotational demands placed on the body during the late cocking and follow-through phases of the pitching motion. Specifically, the combined motions of ipsilateral rotation and side bend, referred to as rotexion in the manual therapy literature, should be evaluated in thoracic extension to ensure that adequate thoracic mobility exists as the athlete transitions from the stride to late cocking phases of the throwing motion. Similarly, ipsilateral side bend with contralateral rotation, referred to as latexion in the manual therapy literature, should be assessed in thoracic flexion to ensure that adequate thoracic mobility is present in follow-through. Limitations in these movements can arise from either soft-tissue or articular dysfunctions, so it is important to complete passive intervertebral motion and rib mobility testing, in addition to the combined motion testing described above, to determine the underlying etiology of the restriction when one is found. In the presence of normal passive vertebral and rib mobility testing, restrictions in scapular and trunk muscle flexibility should be assessed. After 4-6 weeks of a gradual return to a throwing program, most bone stress injuries will heal uneventfully. Radiographs can be useful at this time and will show bone healing with callus formation. 8 Asymptomatic patients with evidence of bone healing on radiography can return to competition at this time. First rib and complete fractures are at greater risk for nonunion, however, and may require a longer period of restricted activity (up to 6-12 months) for healing. 18,89 In a retrospective cohort of 23 throwing athletes (primarily baseball players) with first rib stress fractures, 7 (29%) developed nonunion of the first rib at 7.5 months. 24 Fortunately, documented cases of nonunion in the literature have been able to return to their previous levels of competition. 18 Physicians should also be mindful of other complications of rib fractures, including pneumothorax, thoracic outlet syndrome, brachial plexus palsy, and Horner's syndrome. 8,90 Shortness of breath, decreased breath sounds, and asymmetric lung sounds should prompt radiographs to look for a pneumothorax. 9 Nonunion or excessive callus formation can lead to thoracic outlet syndrome, and thus any patient with clinical signs of claudication, pallor, swelling, or weakness in the arm should be worked up further with imaging of the brachial plexus and subclavian artery/vein. 91,92 The proximity of the first rib to the carotid artery and sympathetic chain also places patients with first rib fractures at risk of carotid injury manifesting as Horner's syndrome (classically described as ptosis, miosis, and anhidrosis), which would require additional workup to evaluate the integrity of the carotid artery. 93,94 When thoracic outlet syndrome, brachial plexus injury, or other injury to surrounding structures is diagnosed, referral to surgery for resection of callus and a portion of the first rib is indicated. 90 Lastly, there has been recent interest in the use of ultrasonic therapies (low-intensity ultrasound and extracorporeal shockwave), orthobiologic injections, and bone stimulator units to expedite bone healing. 95 Evidence supporting these modalities is currently lacking, although there may be some data supporting bone stimulator therapy for higher grade or recalcitrant stress fractures. [96][97][98]
Additionally, teriparatide has shown promise in improving fracture healing and bone strength in animal studies, and human studies are ongoing. 99,100 Prevention Since prior stress fractures are the strongest predictor of future stress injury, considerable treatment emphasis should be placed on preventing recurrent stress fracture. 32 As above, athletes should work with a pitching coach to correct any improper throwing techniques. Thorough biomechanical evaluation of the spine, shoulders, and hips, with early recognition and treatment of joint restrictions, can be key to reducing compensatory trunk muscle activation that may predispose athletes to rib injuries. 14,40 In addition to assessment of hip and thoracic range of motion as mentioned in the treatment section, pitchers frequently demonstrate segmental motion limitations in the cervical and shoulder regions. Glenohumeral internal rotation deficit (GIRD), described as increased shoulder external rotation and decreased internal rotation of the dominant throwing arm, is commonly discussed in the literature, and may be due to osseous adaptation to pitching with glenohumeral retroversion, or to selective stretching of the anterior capsule and tightening of the posterior capsule. 101 Although adaptive GIRD may show protective effects at the shoulder, kinematic studies of pitchers with GIRD showed significantly decreased trunk rotation and shoulder adduction, and increased shoulder rotation, during pitching compared to a control group without GIRD. This suggests inefficient transfer of energy from the trunk to the upper extremity and the possibility of increased load/injury to surrounding tissues. 102,103 To address deficits in internal shoulder rotation, stretches such as the cross-body stretch, sleeper stretch, and corner pectoralis stretch have been demonstrated to be effective. 104 Studies also show an association between restricted neck flexion/rotation and pitching injuries. 105 Although the pathophysiology is unclear, it is hypothesized that limited cervical range of motion can interfere with the ability to maintain head stability while the trunk rapidly flexes, twists, and side bends during pitching. This could contribute to increased stress on the scalene muscles as they work to maintain head stability, with consequent increased stress on the first rib, although this is speculative at this point. Furthermore, the muscles that attach to the ribs (Table 1) should also be assessed for weakness, strength imbalances, or tightness that could lead to unequal tensile or torsional forces on the ribs. Specifically, imbalances in pull from the serratus anterior and scalene muscles are thought to contribute to first rib fractures, while imbalances in the external obliques, latissimus dorsi, and serratus anterior may contribute to lower rib fractures. If weakness or tightness is detected, strengthening and stretching are important to balance opposing muscular forces and neutralize the forces on the rib. It is also important to remember that the deceleration phase of pitching can contribute to injury, given the extreme velocities obtained with pitching. During the deceleration phase, eccentric muscle contractions are required to stabilize and slow rotation of the trunk and throwing arm, which can generate high forces on the muscles and, in turn, their bony/tendinous origins.
Thus, eccentric muscle conditioning is important for inducing adaptive changes in muscle that can improve trunk stability and reduce injury in the deceleration phase of pitching. 106 Commonly recommended eccentric abdominal exercises include diagonal abdominal chops, cable rotations, and eccentric trunk rotation/extension (Figure 3A-H). Furthermore, care should be taken to avoid overtraining. Athletes should also be taught to monitor for early signs of pain, as early recognition and adequate rest may prevent progression to stress fracture. 91 A metabolic workup, especially measurement of the 25-hydroxy vitamin D level, is often helpful given the association between low vitamin D levels and stress fracture. 107,108 If vitamin D deficiency is discovered, supplementation has been shown to decrease the risk of stress fracture. 109,110 Supplementation with calcium, on the other hand, has less evidence, but may also be beneficial in reducing the risk of stress fracture. 109,111 This is supported by studies linking low bone mineral density (BMD) and stress fracture, as well as the association of increased calcium intake with higher BMD. [111][112][113] Finally, in females, low ferritin and iron levels have been found to correlate with a higher risk of stress fractures; however, this association has not yet been demonstrated in males. 114,115 In cases of repeat stress fracture, bone density scans can be helpful in ruling out low or excessively high BMD, which may warrant an endocrinology referral and further evaluation of the hormones involved in bone homeostasis. 108 Despite the association of low BMD and stress fracture, prophylactic treatment with bisphosphonates has not been shown to be effective in reducing stress fracture in military recruits. 116 Additionally, animal studies have shown impaired healing of stress fracture with the use of bisphosphonates. 117,118 Finally, athletes should be examined for relative energy deficiency in sport using tools such as the RED-S clinical assessment tool, which assesses several risk factors for stress fracture such as low body mass index (BMI), caloric intake, low BMD, and amenorrhea. An assessment of 323 college female athletes using this tool to stratify athletes into low, moderate, or high risk of stress fracture found that moderate-risk athletes were 2.6 times more likely to develop bone stress injuries, and high-risk athletes were 3.8 times more likely. 119 In a study of male college athletes using a modified risk calculator, each 1-point increase in the cumulative risk score was associated with a 37% increased risk of a bone stress injury. 120 Therefore, these risk assessment models exemplify the importance of nutrition optimization, adequate energy availability, and appropriate BMI in maintaining bone health. Consultation with the team sports dietician is recommended to ensure adequate caloric intake and individualize nutrient requirements both during recovery and for secondary prevention. Conclusion Although studies have not defined the incidence of rib stress fractures in pitchers, the repetitive forces through the core and trunk required to generate arm acceleration in professional pitching appear to put these elite athletes at risk of bone stress injury. Typically, pitchers present with poorly localized, insidious onset of pain on the dominant throwing side, worse with the late cocking and early acceleration phases of pitching.
They often describe vague pain at the base of the neck, scapula, shoulder, or chest wall without a well-defined area of tenderness. As a result, these injuries are often underrecognized and misdiagnosed in the early stages, which puts the athlete at risk of injury progression and prolonged recovery. We recommend a heightened index of suspicion for rib bone stress injury in pitchers and early consideration of advanced imaging with 3T chest wall MRI. Once diagnosed, assessment of modifiable biomechanical and metabolic risk factors is essential for guiding management as well as secondary prevention. We specifically recommend a rehabilitation program focused on improving pitching mechanics, targeting deficits in the kinetic chain (such as decreased hip/pelvic/truncal mobility, as well as core weakness) and imbalances in opposing trunk musculature (ie, first rib: scalene vs serratus anterior; middle ribs: serratus anterior vs external oblique). Bone health and metabolic profile should be investigated, with consideration of DEXA scan, vitamin D, and ferritin levels, especially in cases of recurrent injury. With increased awareness, early diagnosis, and appropriate management, we can optimize care in elite pitchers and expedite safe return to play. Disclosure The authors report no conflicts of interest in this work.
2022-10-12T16:02:16.673Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "31de8ac6b2f6c8df596ee300e421f8777201a464", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=84560", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a88339e08f588be2ddf17b0038e0e7fb4f8fb28b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
55835714
pes2o/s2orc
v3-fos-license
SOCIOECONOMIC FACTORS OF DEPRESSION AMONG FEMALES VISITING OUTPATIENT CLINIC IN DISTRICT GHIZAR, GILGIT-BALTISTAN, PAKISTAN: A PILOT STUDY ABSTRACT OBJECTIVE: This pilot study was conducted to find out the socioeconomic factors leading to depression in married females of district Ghizar, Gilgit Baltistan, Pakistan. METHODS: The study was conducted at District Headquarter Hospital Gahkuch, Ghizar from November 2015 to February 2016. Depression was diagnosed using the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM-IV) criteria, and socioeconomic status was assessed by a self-designed questionnaire. Analysis was performed with SPSS version-23. RESULTS: Out of 73 females, 53 (72.6%) were depressed according to DSM-IV. The majority of women were uneducated (n=23; 31.5%). Most females were married (n=50; 68.5%), followed by divorced females (n=8; 11%). Sixty-one (83%) women had an arranged marriage. The majority of women (n=43; 58.9%) were housewives. Most females (n=37; 50.7%) had non-cordial relations with in-laws. Domestic violence was reported by 41 (56.2%) women. Sixty-one (83.5%) women had land ownership of some kind. Marriage within the family (OR 1.386, CI .837-2.292), presence of depression in the husband (OR 3.530, CI .933-13.359), non-cordial relations of women with in-laws (OR 3.657, CI 1.979-6.755), and domestic violence (OR 3.584, CI 0.717-17.921) were significantly associated with depression. CONCLUSION: The majority of the females had no cordial relations with in-laws; more than half had a history of domestic abuse. Marriages outside the family had an inverse relation with depression. Depression in the husband and a bad relationship of women with in-laws were strong predictors of depression in married females of district Ghizar, Gilgit Baltistan. Small sample size and the hospital-based design were the main limitations of the study. KEY WORDS: Depression (MeSH); Socioeconomic Factors (MeSH); Domestic Violence (MeSH). INTRODUCTION Depression affects 350 million people worldwide annually and is projected to be the second commonest cause of disability by the year 2020. 1 Symptoms of depression encompass sadness, loss of interest or pleasure, feelings of guilt, low self-worth, disturbed sleep, disturbed appetite, tiredness, and poor concentration, with mild, moderate, and severe depression as its main categories, depending on the number and severity of symptoms. 2 As far as etiological factors are concerned, social factors causing depression have been studied extensively. A United Kingdom study demonstrated that common mental disorders, including depression, were significantly associated with a poor standard of living, low household income, and economic disparities. 3 Studies looking at depression in Pakistan demonstrated staggering statistics. In an affluent urban setting of Pakistan, Niaz U, et al. found that 72% of women and 44% of men were suffering from depression. 4 Similarly, a community-based study in the Hindu Kush region of Pakistan revealed that 46% of women had depression as compared to 15% of men; it also showed that illiteracy and lower socioeconomic status were associated with higher levels of distress. 5 Gilgit Baltistan (GB) is an administrative territory of Pakistan, previously known as the Northern Areas of Pakistan. 6 GB has a total population of 1.3 million; no data on gender population breakdown is available. District Ghizar is one of the ten districts of GB, with a total population of 0.19 million. 7
According to the GB Demographic and Health Survey 2008, it has a population of 20% with the lowest socioeconomic status index and 15% with the highest socioeconomic index; the rest of the population resides in the wealth quintiles between these two. About one fourth (23%) of women aged between 15 and 49 years are literate. 8 As research on depressive illness in women residing in this area is very limited, this pilot study was conducted to find out the socioeconomic factors leading to depression in married females of district Ghizar, Gilgit Baltistan, Pakistan. In this cross-sectional study, 73 nonpregnant, married females with depression were interviewed, from November 2015 to February 2016. The study was conducted at the female outpatient department, District Headquarter Hospital Gahkuch, Ghizar. Informed consent was taken from all patients prior to the interview. The research was approved by the institutional review board of University of Peshawar, Peshawar, Pakistan. Patients with severe mental or physical conditions were excluded. Diagnosis was made based on the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) criteria. These criteria recognize depression by identifying depressed mood for at least 2 weeks in a continuous pattern and with any five of the following symptoms present: decreased interest or loss of interest in daily activities (anhedonia), significant changes in weight or appetite disturbance, lack of concentration, sleep disturbances, fatigue, and suicidal ideation. 9 Non-pregnant patients who fell into this category were selected for inclusion in the study and were then interviewed. Patients who gave a history of being previously diagnosed with depression at a health care facility and were put on anti-depressant medication but later discontinued medicines on their own and did not undergo follow-up visits were not included in the study. Interviews were conducted by licensed medical practitioners. Patients who did not understand Urdu and instead spoke the local language Shina were interviewed by a licensed medical practitioner who was well versed in Shina. A self-designed questionnaire with 35 items was used to interview patients. The questionnaire was in English and read out to the participants in the local language. The questionnaire was designed after a thorough literature search, focus group discussion, and input from experts. The 35 items were grouped into personal profile, family structure, economic profile, and details of deliberate self-harm (where applicable). Each category had a detailed cluster of questions investigating the socioeconomic status of the patient's household. Personal profile included questions probing age, marital status, education, occupation, age at the time of marriage, years of marriage, type of marriage (arranged, love, eloped, exchange, inter-family, outside family), years since divorced or widowed (where applicable), history of infertility, number of male and female children, any personal disability (mental, physical), history of substance use, duration of depression (if previously diagnosed) and treatment of depression, and history of Deliberate Self Harm (DSH).
Details of family set-up were retrieved by questions inquiring whether the family structure was joint, nuclear, or extended, the relation of the patient with the head of the family, the husband's history of polygamy, any physical or mental disability in the family, history of intoxicating substance use and depression in the family, history of domestic violence (verbal, physical, emotional), and the perpetrator of violence (husband, in-laws, relatives). Economic profile included questions on monthly household income, freedom of spending of the patient, whether the family is in financial debt, and whether monthly expenditure is within the income range. The last category of the questionnaire inquired about details of deliberate self-harm (where applicable), with questions pertaining to the reason for DSH (psychiatric illness, economic or family issues, academic or job problems, or any other), method of DSH used (overdose, hanging, near drowning, jumping from height, slashing, gun, any other), number of DSH attempts, and whether treatment was undertaken after DSH. Depression was taken as the dependent variable and dichotomized with yes and no responses, while socioeconomic factors were independent variables. A bivariate logistic regression was used to calculate the odds ratios (a worked sketch of this computation is given at the end of the discussion). SPSS version 23 was used to analyse the collected data. A total of 73 females were included in the study, as shown in Table I and Table 2. DISCUSSION This research is one of the few studies which have analysed the link between depression and the socioeconomic factors that lead to depression in females in GB. Despite being a pilot study, it has revealed an intriguing pattern of association between depression and socioeconomic factors in the female population. The results of this study show that disharmonious relations with in-laws are a major risk factor for depression in females of Ghizar. Almost 50% of females reported having been negatively affected by the antagonistic behaviour of their in-laws. These results are also consistent with previously reported findings that a strained relationship with the husband and extended family has a strong association with depression in women. 10 A similar association was reported earlier in women having a discordant relationship with their husbands and those facing the daily life challenges of living in an extended family system. 11 Another finding was a threefold higher likelihood of depression among wives whose husbands had depression. It is a well-established fact that when either spouse is depressed, the whole family unit is depressed. Depression takes a great toll on the emotional and sexual aspects of the couple's life, leading to anger and isolation. 12 Our finding lends support to similar research conducted elsewhere, in which investigators found that living with a depressed spouse caused significantly more depressed moods in the other partner, which is in accordance with our findings. According to the Human Rights Commission of Pakistan, the incidence of domestic violence ranges from 70% to 90% in the female population of the country. 15 The variegated forms of violence consist of domestic violence, which includes acid attack, beating, edged-tool attack, and setting on fire; and sexual violence, which includes sexual harassment, rape, and honour killings. In spite of the alarming nature of violence against women, the attention given to this crucial zone of human rights by the political establishment and civil society is unsatisfactory to say the least.
One of the primary causes of many forms of violence against women is dowry-related, which has unfortunately not been given due attention and recognition in the scientific literature. Dowry-related violence is now declared a "socially endorsed form of violence in Pakistan", which creates a significant psychological quandary for the girl and her family. 16 This practice imposes a large economic burden on the family of the girl, with deep-seated social consequences such as people being reluctant to embrace the possibility of having a girl child. Though dowry-related violence has received attention from the electronic media in recent years, a lot more needs to be done on the platform of legislation and social reform to aptly address this issue.
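As a worked illustration of the bivariate analysis used above: an odds ratio and its 95% Wald confidence interval can be computed directly from a 2x2 cross-tabulation of an exposure against depression status. The Python sketch below uses hypothetical counts, not the study's raw data, and the function name is ours:

import math

# Odds ratio with a 95% Wald confidence interval from a 2x2 table.
# Layout:            depressed   not depressed
#   exposed              a             b
#   not exposed          c             d
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical example: 40 of 50 women with non-cordial in-law relations
# depressed, versus 13 of 23 of those with cordial relations.
print(odds_ratio_ci(40, 10, 13, 10))  # roughly (3.08, 1.05, 9.04)

An odds ratio above 1 whose confidence interval excludes 1 (as for the in-law relations variable reported in the abstract) indicates a statistically significant positive association with depression.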
2019-01-06T04:22:07.945Z
2018-03-31T00:00:00.000
{ "year": 2018, "sha1": "48117a996ded13c13e9746489c3093d571975330", "oa_license": "CCBYNC", "oa_url": "https://www.kmuj.kmu.edu.pk/article/download/17641/pdf", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "48117a996ded13c13e9746489c3093d571975330", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
256413591
pes2o/s2orc
v3-fos-license
Perceived climate change risk and global green activism among young people In recent years, the increasing number of natural disasters has raised concerns about the sustainability of our planet's future. As young people comprise the generation that will suffer from the negative effects of climate change, they have become involved in a new climate activism that is also gaining interest in the public debate thanks to the Fridays for Future (FFF) movement. This paper analyses the results of a survey of 1,138 young people in a southern Italian region to explore their perceptions of the extent of environmental problems and their participation in protests of green movements such as the FFF. The statistical analysis applies an ordinal classification tree using an original impurity measure that considers both the ordinal nature of the response variable and the heterogeneity of its ordered categories. The results show that respondents are concerned about the threat of climate change and participate in the FFF to claim their right to a healthier planet and encourage people to adopt environmentally friendly practices in their lifestyles. Young people feel they are global citizens, connected through the Internet and social media, and show greater sensitivity to the planet's environmental problems, so they are willing to take effective action to demand sustainable policies from decision-makers. When planning public policies that will affect future generations, it is important for policymakers to know the demands and opinions of key stakeholders, especially young people, in order to plan the most appropriate measures, such as climate change mitigation. Introduction In recent years, people around the world are increasingly experiencing the effects of climate change in the form of extreme events such as devastating floods, storms, droughts, and fires. While the long-standing problems of global warming and glacier melting were probably perceived as "far away" because the latter occurred mainly in remote places on the planet or the former originated in the atmosphere, the rapid acceleration of extreme events in industrialised countries has demonstrated the seriousness of the current environmental situation, exacerbated by the global energy, economic, and food crisis (Fasanelli et al. 2020; Galli et al. 2019) and triggered by an unexpected conflict in Europe's neighbouring countries. All these events may have served as triggers for climate activism and the collective mobilisation of citizens. More than adults, young people have raised the demand for more effective and urgent intervention by policy makers to protect the planet and have become protagonists of a global protest movement that has quickly captured the attention of public opinion. This movement is called Fridays for Future (FFF), after the day of the first global school strike in March 2019, which involved 1.6 million protesters worldwide (Wahlstrom et al. 2019; Martiskainen et al. 2020). Some months later, in September 2019, 7.6 million participants took part in the third global FFF Day of Protest for Climate Justice, and the strikes continue each year, leading de Moor et al. (2020, 2021) to call the FFF movement the largest globally coordinated climate protest in world history. Youth are aware that they are the generation who will inhabit a planet that is already sick and whose situation is inevitably deteriorating.
They protest against the negative externalities of economic development and believe that they are the main victims of the rise in temperature and of the water and air pollution caused by the unsustainable production of goods and by the consumerism responsible for the overconsumption and waste of natural resources. They are ready to fight for their right to a healthier and more sustainable future and to make their voices heard to raise public awareness of this issue. They want to shake up those who still believe that environmental problems are far away from them, do not really concern them and who therefore do not feel the need to incorporate ecological practises into their lifestyles (Rathzel and Uzzell 2009). To investigate young people's engagement with the dangerous effects of climate change, a survey was conducted in February 2020 among 1,138 high school students in Southern Italy, who were interviewed via an online questionnaire. The research question focused on students' awareness of the main environmental issues, their opinion of the environmental plans of local authorities, and whether they were willing to make a paradigm shift based on concrete actions that can reduce or stop the waste of natural resources and the pollution of the planet. On these premises, we examined whether they committed to sharing the goals of the FFF movement, actively participating in the strikes and making a positive contribution through everyday environmental practises. For the statistical analysis we used a tree-based method since, in our opinion, it is easier to interpret. For example, using a classical model such as the POM (Proportional Odds Model), we would obtain, for the response variable and for each predictor, as many coefficients as there are categories minus one; applied to our data, this would return 57 regression coefficients for the additive (main effects) model alone. In such a case interpretation is difficult, and we find that a classification tree for ordinal responses is much more interpretable when a data set contains many categorical variables with many categories. In our analysis, we applied an ordinal classification tree with the original impurity measure proposed by Morrone et al. (2019). The novelty of the ordinal tree methodology used in this paper is that it allows for better discrimination of paths when the response variable is ordinal with a cut-off value that separates ordinal categories with a positive semantic meaning from those with a negative one. The results of the study could be of interest to policy makers to understand how young people see their future from a sustainable perspective and to what extent they are ready to support the ecological transformation of public services provided to citizens. At a time when European countries need to put into practice the strategic guidelines to achieve the Sustainable Development Goals (SDGs) of the UN agenda (United Nations 2015), this study could be a contribution that offers some suggestions to public administrators and policy makers. It is worth mentioning that the Italian government has allocated almost 250 billion euros to the post-COVID-19 reconstruction plan, the so-called National Recovery Plan (PNRR), to carry out projects and initiatives that will influence the destiny of the country, not only towards repairing the damage caused by the COVID-19 pandemic but, above all, towards leaving a more sustainable country to the next generation.
Among the six missions of the PNRR, the green revolution, the ecological transition and sustainable mobility infrastructures occupy an important place. Great attention will be paid to supporting young people and the south of Italy, traditionally poorer than the north, which will receive more than 50% of the infrastructure budget to reduce the mobility gap (Agenzia Nazionale Stampa Associata-ANSA 2021). The paper is organised as follows: Sect. 2 reviews the literature. In Sect. 3, we illustrate the survey on the perception of environmental risks among young people and their means of reaction. In Sect. 4, we describe and formalise the statistical methods used for the data analysis. The main results are illustrated in Sect. 5. The paper ends by summing up the results and discussing our main remarks.
Climate activism and mitigation strategies
Climate change activism has been discussed by scholars from different angles. Roser-Renouf et al. (2014) addressed the question of its cognitive and affective underpinnings to understand the genesis of climate change activism. Kleres and Wettergren (2017), seeking to understand how core emotions influence activists' motivations and mobilisation strategies, showed that fear plays a key role in raising awareness of the dangerousness of climate catastrophes and that hope drives collective movements, a theme also theorised by Nairn (2019). O'Brien et al. (2018) explored youth activism on climate change by distinguishing dutiful, disruptive and dangerous dissent, three different types of behaviour that young people can adopt, and, following Corner et al. (2015), they expressed concern that personal engagement may decrease if young people perceive their self-efficacy as limited. Fisher and Nasrin (2021) discussed a specific form of activism called civic engagement, which aims to pressure different kinds of actors who might address the issue of climate change by adopting different tactics. In particular, they argued that the level of commitment differs depending on the actors: citizens who participate to influence communities, politicians and businesses (through strikes) can also directly engage in lifestyle change by modifying their individual behaviour and consumption patterns (e.g. driving and flying less, using renewable energy and eating less dairy or meat) (Büchs et al. 2015; Cherry 2006; Cronin et al. 2014; Haenfler et al. 2012; Middlemiss 2011; Salt and Layzell 1985; Saunders et al. 2014; Stuart et al. 2013; Wynes and Nicholas 2017; Wynes et al. 2018). So far, only a few studies have looked at the direct effects of participation in green activism movements on changes in resource consumption (Saunders et al. 2014; Vestergren et al. 2018, 2019). According to Fisher and Nasrin (2021), it is also important to distinguish between direct and indirect pathways to achieving positive impacts on climate change by putting pressure on policy makers and companies to take emission-reducing measures. The direct pathway can be chosen simply through the adoption of the above-mentioned ecological behaviours by individuals. The other pathway works at a higher level by asking governments to take into account the suggestions of scientists in their policies, and this is, for example, the strategy followed by international environmental non-governmental organisations (Dietz et al. 2015; Frank et al. 2000; Grant and Vasi 2017; Grant et al. 2018; Longhofer and Jorgenson 2017; Pfrommer et al. 2019; Schofer and Hironaka 2005; Setzer and Vanhala 2019).
Some scholars (Ayling and Gunningham 2017; Franta 2017; Grady-Benson and Sarathy 2016) have discussed the special roles that the economic sector and businesses can play. In this case, people's civic engagement has been expressed in the form of shareholder activism, which focuses on dissatisfied investors who, as shareholders, have the power to put pressure on companies to move towards socially responsible and environmentally oriented corporate activities and performance (Bratton and McCahery 2015; Gillan and Starks 2007). Companies give due consideration to the expectations of their shareholders in addressing these issues in their strategic social responsibility documents (Hadden and Jasny 2017; Hestres and Hopke 2019; Yildiz et al. 2015). Specifically related to the impact of people's actions in addressing climate change, a growing body of literature (Mi et al. 2019; Koehrsen 2021) has addressed climate change mitigation strategies, whose pathways can be expressed through three main approaches (Leifeld and Menichetti 2018). The first approach addresses conventional climate mitigation efforts that use decarbonisation technologies capable of reducing CO2 emissions, namely renewables, fuel switching, efficiency improvements, nuclear energy and carbon capture, storage and use (Bataille et al. 2018; Fawzy et al. 2020). The second direction addresses a newer set of technologies and methods that can be implemented to capture CO2 from the atmosphere, referred to as negative emission technologies. They are based, for example, on methods for removing pollutants, storing bioenergy, increasing the alkalinity of the oceans, sequestering carbon in the soil, and facilitating afforestation and reforestation (Goglio et al. 2020; Palmer 2019). The last direction of mitigation strategies is perhaps the most specialised, as it deals with extremely advanced technologies whose goal is to lower temperatures without altering greenhouse gas concentrations in the atmosphere (stratospheric aerosol injection, marine cloud brightening, cirrus cloud thinning and other techniques). However, as Lawrence et al. (2018) affirmed, the latter techniques are still theoretical in nature and cannot currently be included in policy frameworks. In this framework, which focuses on mitigation strategies within the reach of people's intervention, it is interesting to consider two levels of intervention: the collective level, exemplified by the city, and the individual level, that of the citizen-activist. Cities host more than half of the world's population and are consequently responsible for three-quarters of global energy consumption and greenhouse gas emissions (Gouldson 2016). Many urban climate policies have been adopted to address climate change; the most important are improving energy efficiency, reducing fossil energy consumption and finding appropriate low-carbon development routes for sustainable development. All public organisations, especially local and regional authorities, need to clearly communicate their plans for environmental protection to improve citizens' commitment to collective and synergistic action. From our point of view, it is interesting to understand the extent to which young people are informed about the policies and actions of local authorities and whether they feel committed as part of the community or as individuals. Given the aims of our study, this difference is not trivial, as respondents could belong to at least two main profiles of young citizens.
The first profile considers the community as a kind of "shield" in which other citizens work to achieve environmental goals; the second firmly believes that one's own actions have a strong positive impact and are a necessary seed for the spread of ecological behaviours for a more sustainable future of the planet. The issue of young people's climate activism has been increasingly discussed by scholars in recent years. De Moor et al. (2021) defined two recent movements, FFF and Extinction Rebellion (XR), as "new" forms of climate activism because they had the power to inject new energy into global climate politics. To study the phenomenon in depth, they compared these movements with previous climate campaigns and found that the participants had some elements in common, while the main difference was the use of a more politically "neutral" framing of climate change. Recent studies have focused on the FFF movement and have made interesting contributions to knowledge about the phenomenon, the social base and strategic choices of European youth (della Porta and Portos 2021), the communicative power of youth activism (Eide and Kunelius 2021) and crossing crises (Bowman and Pickard 2021; Martiskainen 2020). In particular, della Porta and Portos (2021) addressed the background of protesters as a possible trigger for an active role, noting that their social composition is heterogeneous, as a cross-class coalition forms the collective mobilisation against climate change. In particular, social background may influence their opinions, as demonstrators from the upper class are more likely than activists from the working and middle classes to believe that governments and corporations are capable of solving environmental problems. Eide and Kunelius (2021) focused on the ability of FFF activists to build an identity based on scientific evidence that strengthens their authority among climate policy actors. They are defined as new ambassadors for climate action who use networked communication tools to link personal experiences and add value to climate science. We believe that this could help encourage young people to engage in genuinely pro-environmental behaviour, and it is in line with de Moor et al. (2021), who argued that many FFF demonstrators, who turned out to be students (Eide and Kunelius 2021; de Moor et al. 2020), embodied the belief that the climate crisis can be solved by individuals taking responsibility, and called on policymakers to address global warming on the basis of some kind of intergenerational justice. Taking into account the recent literature on the subject, and with the aim of further contributing to the knowledge of the phenomenon, the basic idea of this study is to understand to what extent citizens, together with businesses and policy makers, are willing to adopt environmentally friendly practises, the only desirable scenario for a sustainable future, even if they are more costly from a purely economic point of view or more demanding in terms of social behaviour. In particular, we aim to understand whether young people are willing to participate in strikes, represented here by the FFF movement, to defend the planet and to adopt green practices as a way of life. In this way we can identify their claimed principles and compare them with actual daily engagement. In such a globalised society, young people are most concerned about climate change (Calculli et al.
2021) and know its consequences, regardless of where on the planet they will occur, because they are its future inhabitants.
A survey of young people's perceptions of environmental risks and their climate activism
To discover how young people perceive the main risks of climate change and whether they are taking action to address these impacts on their future, a survey was conducted in February 2020 among students and teachers in Puglia, a region in Southern Italy. Student participation consisted of completing an anonymous online questionnaire for which formal privacy consent was obtained. The respondents belonged to Apulian high schools participating in the National Project for a Scientific Degree in Statistics (PLS), sponsored year after year by the Ministry of Education and Research. The schools participating in the PLS programme signed an agreement with the Italian universities concerned to carry out activities that promote the acquisition of the scientific and statistical skills most in demand in the labour market. In particular, this work draws on the experience of the University of Bari, one of the 14 Italian universities currently participating in the PLS, which has been doing so since the 2010-2011 academic year (Ribecco et al. 2019). The PLS targeted a large number of high school students participating in the 2019-2020 PLS Project for Statistics. The respondents were surveyed using an online questionnaire containing 32 questions divided into four sections: 1. Sociodemographic information; 2. Knowledge and awareness of environmental issues; 3. Perception of environmental risks due to climate change; 4. Environmental awareness and agreement with the principles of the FFF movement. The respondents were asked to express their responses by selecting among a few options (yes; no; I do not know) or, more frequently, by rating their responses on a five-point Likert scale (Likert 1932), with 1 being the lowest, 5 the highest and 3 neutral. A total of 1,793 questionnaires were collected, but 395 records were deleted during the data cleaning and preparation phase because they were incomplete, proved unreliable or had a missing response variable; 260 questionnaires completed by adults (teachers and relatives) were not considered for the analysis because this group did not fit the aims of the study. The final data frame contained 1,138 records, and a subset of variables of interest was selected from the larger pool, considering the responses that most closely matched the exposure of the region of residence to climate change impacts. As shown in Table A1 (see the Appendix), the students surveyed were almost evenly distributed by gender (50.2% female and 49.8% male). Their mean age was 15.9 years (± 1.4 standard deviation). In the year in which the survey was conducted, the sample was representative in terms of age and gender, as females accounted for 48.3% of Southern Italian students and males for 51.6%. Overall, female and male Southern Italian students represented 26.1% and 26.2% of Italian high school students, respectively (Ministry of University and Research-MIUR, 2022). Table A1 also shows the percentage distribution of the variables analysed in the decision tree.
From the data collected, 89.6% of the respondents believe that the FFF can be effective in combating the destruction of the planet and has achieved important results, such as a slight reduction in environmental problems (23.9%), global resonance for environmental issues (28.2%) and a call to policy makers around the world to take concrete action for greater sustainability (32.8%). Although the respondents showed a high level of commitment to and support for the principles of the FFF movement, only five out of 100 played an active role in environmental associations. In addition, the respondents believe they are well informed about environmental issues and are concerned about the various risks posed by climate change (extreme temperatures, flooding, fire, drought, storm surges and coastal erosion, tornadoes, etc.). Although they do not believe they live in a geographic area exposed to the most extreme phenomena, they see their country as moderately to highly vulnerable to climate change and its associated risks, especially extreme temperatures. Strong environmental awareness is demonstrated by the adoption of ecological behaviours such as recycling, reducing waste and plastic consumption, using organic products and using public transport, bicycles and electric scooters. Students participating in FFF strikes often show an even higher rate of adoption of these ecological best practices.
Methodology
In line with the aims of the study, a preliminary exploratory analysis allowed us to identify the distributions of the variables of interest. We hypothesised that young people who participate in the strikes of the FFF movement because they see their future in danger consistently adopt ecological practices as a lifestyle. To understand the factors that influence participation in FFF movement strikes, regression-like techniques could be used, but parametric methods do not always produce the expected results. By contrast, data mining techniques based on recursion, averaging and randomisation are able to discover hidden paths and build better predictive models. We have a set of 22 predictors that are ordinal and nominal variables, and the response variable is measured on an ordinal scale. To examine the factors that affect climate activism, we used a tree-based method based on the classification and regression tree (CART) approach proposed by Breiman et al. (1984), modified for ordinal response variables.
Classification tree in brief
A classification tree identifies the relationships between a response variable, Y, and a set of predictor variables (X1, X2, ..., Xp). In particular, a classification tree is a binary segmentation procedure that recursively partitions the data matrix into increasingly informative subpartitions. In our opinion, the use of a tree-based method instead of a (generalised) linear model is more effective because Ordinary Least Squares-based regressions (with no interaction terms) return one type of best fit to the data, namely a straight-line combination of the independent variables in a higher-dimensional space.
Moreover, the classification tree approach was chosen among the most commonly used supervised machine learning algorithms apt to cope with a categorical target: the flexibility and robustness it offers for analysing such data, its strong tolerance of missing responses, the absence of strict distributional assumptions about the data, its intrinsic capability of addressing interactions, nonlinear effects and causal priorities in an easy way, and the high degree of interpretability of its classification rules make it a very good candidate for an explorative approach to our data (Fasanelli et al. 2017; Iorio et al. 2015; Piscitelli and D'Uggento 2022). Tree-based methods are often used in data mining contexts with large datasets, such as social science surveys. For an extensive introduction to tree-based methods, we refer to Breiman et al. (1984) and Hastie et al. (2013).
Tree-based methods for ordinal response variables
To take into account the ordinal nature of the response variable, Piccarreta (2008) suggested adopting a new impurity measure based on the dispersion for ordinal data defined by Gini (1955),
$I(t) = \sum_{i=1}^{k-1} F_Y(i)\,[1 - F_Y(i)]$,   (1)
where $F_Y(i) = P(Y \le i)$. The proposal was originally implemented in R (R Core Team 2022) by Archer (2010) and was then revised and technically corrected by Galimberti et al. (2012) in a new, freely available R package named rpartScore. Nonetheless, the formulation in Eq. 1 does not fit the case of an ordinal variable such as, for example, satisfaction, because we believe that the scale used to measure it has an implicit crisp cut point, which separates the ratings or scores of people judging themselves satisfied from those of people who are unsatisfied. In our specific case, the response variable spans 5 ordered categories (coded from 1, Never, to 5, Every time), and in our opinion an interviewee can be considered an active participant in FFF movement strikes if their score is higher than 3 (Occasionally or Sometimes). Therefore, the threshold of 3 is taken as the cut-point that identifies an active participant in FFF movement strikes. To explain with an example how to choose a suitable impurity measure for the case at hand, let us consider Table 1, summarising five hypothetical frequency distributions of this response variable. It is worth noting that neither the normalised Gini diversity index nor the measure in Eq. 1, computed for the five artificial distributions in Table 1, results in a good choice: both formulas fail to capture the substantial difference that we would expect the impurity function to reflect in a peculiar case like D4 with respect to D2 or D3. Therefore, some of the authors proposed an ad hoc formulation of the impurity function (Morrone et al. 2019), based on weighting the differences among the ratings/scores on the two sides of the crisp threshold between an active participant in FFF movement strikes and a non-active one, so that the impurity measure leads to nodes where the separation between active and non-active participants is maximised when pursuing splits. In this modified Gini impurity, the weights $w_i$ are defined as $-1$ if $y_i \le y^*$ and $1$ otherwise, and the sum is taken over the scores that have a non-zero frequency only (as can be easily verified). A small numerical sketch of the baseline measure in Eq. 1 is given below.
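To make the baseline measure concrete, here is a minimal sketch in base R, the language used for the analyses in this paper. It implements only the unweighted ordinal impurity of Eq. 1, not the authors' weighted variant from Morrone et al. (2019), and the frequency vector is purely hypothetical.

```r
# Piccarreta-style ordinal impurity: sum over i = 1..k-1 of F(i) * (1 - F(i)),
# where F is the cumulative distribution of the ordered scores in a node.
ordinal_gini <- function(freq) {
  p  <- freq / sum(freq)           # relative frequencies of the k scores
  Fi <- cumsum(p)[-length(freq)]   # F(i) = P(Y <= i) for i = 1..k-1
  sum(Fi * (1 - Fi))
}

# Hypothetical node: respondents clustered just around the cut-off y* = 3
ordinal_gini(c(10, 25, 40, 20, 5))
```

As the discussion of Table 1 suggests, a node like this, with mass concentrated near the threshold, is exactly the case where the unweighted measure can look deceptively pure, which motivates the weighted variant.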
Returning to the weighted measure, this choice enables discriminating relevant cases such as D4, as the following values show: setting $y^* = 3$, our index, applied to the distributions shown in Table 1, returns the values 3.750, 0.625, 0.625, 4.375 and 10.000 for D1, D2, D3, D4 and D5, respectively. This aptly targets the objective of penalising potential splits where the frequencies peak around the threshold $y^*$, since such splits would produce children nodes in which active and non-active participants are mixed together. For more details about the properties of the corrected Gini diversity index for ordinal categorical variables, we refer to Morrone et al. (2019).
The Random Forest method
The Random Forest (RF) method is a widely used approach for classification and regression (Breiman 2001). In brief, RF is an iterative process that builds a set of classification or regression trees (Breiman et al. 1984) using bootstrap samples iteratively drawn from the original learning data set. Observations not used to construct a tree are termed out-of-bag observations for that tree. To reduce the correlation between the trees in the forest, each split in each tree is identified using the best among a subset of predictors randomly chosen at that node. RF was used to further harness the informative value in our data by strengthening the identification of influential variables via resampling. Instead of resorting to the ensemble method for prediction (something we are not interested in at this stage), we exploit RF as a tool to rank variables based on their ability to predict the response, which is assessed by variable importance measures (VIMs). Given an error measure M (e.g., error rate or mean squared error), the VIM of the j-th predictor is defined as
$\mathrm{VIM}_j = \frac{1}{ntree} \sum_{t=1}^{ntree} \left( MP_{tj} - M_{tj} \right)$,
where ntree is the total number of trees in the forest, $MP_{tj}$ denotes the error of tree t when predicting all observations that are out-of-bag for tree t after randomly permuting the values of the j-th predictor variable, and $M_{tj}$ indicates the same error of tree t before permuting the values of the j-th predictor variable. The RF method shares the same advantages: it is nonparametric, since no specific distribution of the response variable is assumed, and it does not require any specification of the type of relationship (linear or nonlinear) between the response variable and the predictors. Moreover, it provides a more robust assessment of variable importance than classical tree-based methods. We used RF only to conduct a variable importance study and not to obtain the minimum prediction error. For a review of the RF methodology, we refer to Breiman (2001) and Boulesteix et al. (2012).
Data analysis and results
The ordinal classification tree was used to examine the relationship between the decision to participate in the FFF protest movement, driven by the perception of being at risk from climate change, and several variables, such as: the perception of being exposed to the main consequences of climate change (extreme temperatures, floods, wildfires, storm surges, tornadoes, drought, etc.), the individual level of information, the ecological practises carried out on a daily basis in response to environmental degradation and, finally, the commitment to the principles and achievements of the green FFF movement. Table 2 shows the 22 predictors that we believe influence young people's decision to take action to improve the environmental situation at risk. Before turning to the tree results, a minimal sketch of how the permutation-based VIM defined above could be computed is given below.
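This is a hedged illustration using the randomForest CRAN package (Liaw and Wiener's implementation of Breiman 2001), not the authors' own code; the data frame survey and the factor response FFFPRT are hypothetical stand-ins for the study's learning set and its 22 predictors.

```r
library(randomForest)
set.seed(1)
# importance = TRUE requests the permutation-based measure alongside the fit;
# ntree = 5000 mirrors the forest size reported later in the paper.
fit <- randomForest(FFFPRT ~ ., data = survey, ntree = 5000, importance = TRUE)
# type = 1 extracts the mean decrease in accuracy: the average over trees of
# the out-of-bag error after permuting predictor j minus the error before,
# i.e. exactly the VIM defined above.
vim <- importance(fit, type = 1)[, 1]
head(sort(vim, decreasing = TRUE), 10)   # ten most influential predictors
varImpPlot(fit, type = 1)                # ranked plot, analogous to Fig. 5
```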
Data analysis was conducted with our own software written in the R language (R Core Team 2022) on a computer with an Intel Core i7 quad-core processor (3.1 GHz). The dataset was randomly split into a "learning" set and a "test" set, with 800 and 338 statistical units, respectively. The decision tree was selected via cross-validation, and the test set was used to estimate the prediction error in the tree pruning procedure (cf. Fig. 1). The minimum cross-validation error does not correspond to the minimum test set error, which would suggest a decision tree with 12 leaves. We chose the most conservative decision tree, which, in any case, confirms the interpretation of the phenomenon. The selected decision tree has L = 9 terminal nodes (Fig. 2). The cross-validation prediction error is 0.5930 and the corresponding prediction error computed on the test set is 0.5917. The tree graph allowed us to partition the sample of individuals into groups following the interactions between the predictors and the dependent variable (FFFPRT). Key information about each node is summarised in Table 3. The perception of being exposed to extreme temperatures as one of the most frightening results of climate change defines the first split (see Fig. 2) and separates the left side of the tree, with 51.1% of students who are from "not at all" to "somewhat" concerned about it, from the right one, where we find the students who considered themselves at very high risk of experiencing extreme temperatures (from "moderately" to "extremely" dangerous, 48.9%). The tree in Fig. 2 shows the role of the predictors in the decision to protest by participating in the FFF movement (response variable) and the composition of the corresponding subgroups of student respondents. The tree has 9 terminal nodes and 8 splits corresponding to the following variables: TMPEXT, FFFCCC, TORN, FFFRES, LEVINF, EXTPH and STOSU. The information on the terminal nodes can be better assessed by examining the distribution of the variable FFF protest participation in each of them (see Table 3 and Fig. 3). Moreover, to obtain detailed information about the specific factors that trigger the FFFPRT response, it is interesting to analyse the pathways to the terminal nodes. The first split is defined by the level of concern about experiencing extreme temperatures, so that on the left side are students with medium to low concern and on the right those who are highly scared. Following the path in this branch, we find respondents who believe that participation in the movement is important (the modal values of the terminal nodes range from 3 to 5) and that the positive contribution of the FFF movement is to fight environmental degradation by raising people's awareness. A detailed analysis of terminal nodes 4 and 5 allows us to understand the combined effect of not being too scared about exposure to extreme temperatures and believing that the FFF movement is able to produce effective results against climate change. On the right side of the tree are respondents who are generally very concerned about the risk of being exposed not only to extreme temperatures but also to other dangerous effects of climate change related to the physical and orographic characteristics of the area in which they live. This concern is closely related to an equally strong commitment to the results achieved by the green movement protest.
In fact, the modal values of the response variable in the final nodes range between 4 and 5, indicating a high commitment to protest events in defence of the planet, as shown by all the terminal nodes created by the split at internal node number 3. In particular, the split at internal node number 6 separates students who perceive extreme temperatures, tornadoes and storm surges as serious threats (terminal node 13), and who view the FFF's main outcome as a call for policymakers to take concrete action on sustainability, from those who focus on the remaining options. Returning to the main path leading to the top of the tree, other triggering factors seemed to additionally characterise the students interviewed. They strongly believe in the effectiveness of protest actions (split at internal node number 12) and have a medium level of information, but the split at node 24 highlights the respondents who consider themselves very well informed about environmental issues and see storm surges as the most likely feared impact of climate change occurring in their neighbourhood, which is not considered to be affected by extreme phenomena anyway. The pathway leading to nodes 48, 99, 196 and 197 describes students who believe that the FFF is capable of drawing public attention to environmental problems and is thus an effective tool to reduce the destruction of the planet by raising people's awareness. This is consistent with the literature, as recent research reports that emotions such as fear and anger can trigger positive action on climate change (Kleres and Wettergren 2017; Martiskainen et al. 2020; Wang et al. 2018). It is worth noting that the impact of potential risks emanating from the sea is felt more strongly than the others suggested (river floods, forest fires and drought), which is not surprising given that the students live in close proximity to the Adriatic Sea. Finally, an overview of the distribution of the level of engagement in the FFF protests in the 9 terminal nodes might help to better understand the paths just drawn (see Fig. 3). The paths leading to these distributions are shown in detail in the last column of Table 3. The methodology of decision trees for ordinal variables provides terminal nodes that have the most powerful interrelations with the selected predictors. The paths to the terminal nodes are determined by the splitting criterion based on the impurity reduction approach of the CART algorithm. It allows us to look at the variable importance measures that led to the final solution. Figure 4 shows the normalised importance plot for all predictor variables used in the tree (predictors ordered according to their importance in the tree). It confirms that extreme weather events, specifically storm surges, tornadoes, extreme temperatures, droughts and floods, have the same importance as the positive outcomes that can be achieved by people protesting to focus policy makers' attention on sustainability. It is clear that these predictors are more effective than the others, which are at the bottom of the list and have decreasing importance, probably because the latter are mandatory behaviours or are seen as incapable of stopping environmental degradation at this time. It is known that variable selection bias can occur with the CART method, resulting in predictors with many cut-off points being selected without taking into account the information they provide (Hothorn et al. 2006; Loh and Shih 1997).
To check for the presence of bias in our analysis, the RF technique was used (Breiman 2001). This technique was used to examine the importance of the predictors in determining the levers that led the student respondents to protest and make their demands for a greener future in the interest of a global community. A forest of 5,000 trees was created to provide a robust ranking of predictors by importance. As can be seen in Fig. 5, the most important variable reducing overall uncertainty coincides with the first split, which relates to perceived levels of vulnerability to extreme temperatures due to climate change. Moreover, the most important predictors are the same as those shown in Fig. 4 in relation to the tree, albeit in different positions in the ranking, and this result confirms that the tree is robust. Overall, it is interesting to confirm that, among the ten main predictors in both analyses, six of them, namely STOSU, TORN, TMPEXT, DREXP, FLOEXP and FIREXP, relate to the perception of vulnerability to natural disasters, the frequency of which has increased in recent years due to climate change. The results that the FFF movement has achieved in raising awareness among public opinion, and especially among policymakers, of the dangerous consequences of the increasing destruction of the planet, as well as the belief in the effectiveness of protest actions, complete the picture.
Final remarks
The strain on natural resources and on the entire environmental system from increasing anthropogenic pressures on the Earth (Rockström et al. 2009) is expected to have catastrophic consequences for humans if not stopped. Among the various approaches proposed by researchers to address the problem and find the best solution, Bergquist et al. (2019) focused on individuals changing their behaviour towards environmentally friendly behaviour to achieve a more sustainable future. A pro-environmental behaviour can be defined as one which "harms the environment as little as possible or even benefits the environment" (Steg and Vlek 2009, 309). Relatedly, a growing body of literature provides evidence that current mitigation efforts and future emissions commitments are unable to meet the temperature targets set out in the Paris Agreement; therefore, new ways of reducing emissions must be adopted. We share the theory of these scientists and hypothesise that, in theory, all measures taken by governments to protect the environment can benefit our planet in the long run, but many recent climate disasters have shown us that we have little time and have to stop climate change by taking decisive action. There is an urgent need for citizens and businesses to incorporate environmentally friendly practices into their lifestyles for the sake of a sustainable future. Many people are aware of the seriousness of the environmental problem, but some probably still believe that it is happening in such a remote place that the negative consequences are not perceived as urgent (Rathzel and Uzzell 2009). Since 2018, a global movement of students called FFF has drawn the attention of politicians and public opinion to the seriousness of the environmental situation, turning their fear of an uncertain future into a call for activism (Nairn 2019). FFF was able to mobilise many students who had experienced activism for the first time and felt the need to put pressure on politicians to listen to science (de Moor et al. 2021).
The effectiveness of the FFF is most likely due to the fact that it is a global mobilisation wave composed of young people who are well informed about disasters and ecological risks through social media and traditional media, which contribute to increased concern. Many scholars (Wood 2020) believe that studies of young people's engagement with climate change are "a matter for closer investigation" for climate activism (de Moor et al. 2021) and provide a better understanding of the younger generation (Bowman and Pickard 2021). This study aims to contribute in this direction by examining students' perceptions of environmental issues and their subsequent actions. We assume that young people who are so concerned about their future should take action, either by participating in school strikes or by acting ecologically in their everyday lives, to reduce or even stop the causes of climate change. To this end, approximately 1,100 high school students in a large city in Southern Italy were surveyed. CART trees for ordinal data (Breiman et al. 1984; Galimberti et al. 2012; Morrone et al. 2019; Piccarreta 2008) were used to analyse the relationship between participation in the FFF movement and the willingness to adopt ecological practices as a lifestyle. Specifically, we used the CART method modified by introducing a new impurity measure for a distribution-free, tree-based supervised classification method for ordinal response variables (Morrone et al. 2019). The proposed methodology is based on the assumption that the impurity measure must account for both the diversity and the order of the categories of the response variable. The novelty of the ordinal tree methodology proposed in this paper is its greater ability to distinguish groups based on the semantic value of the response categories. The classification rewards individuals who give answers with the same semantics and distinguishes individuals with semantically opposite answers by using the central neutral answer as a cut-off. The results obtained in previous research (Morrone et al. 2019) confirm the suitability of the proposed approach (see, e.g., Diener et al. 1995 and the comments therein). In this study, both the response variable and the predictors are ordinal, and their categories exhibit a diversity that must be handled correctly, because they correspond to ranks that have the same absolute distance from each other but opposite semantic meaning on the left (negative) and right (positive) sides of the cut-off. Through the responses collected, we were able to understand the levers that lead young people to act within the framework of climate activism. It has been shown that the levers that drive students to protest and make their demands for a greener future in the interest of a global community are initially based on their concern about the negative impacts of climate change (natural disasters) and the awareness that the worsening situation will spiral out of control if we do not take immediate action. The respondents are well informed about what is happening to the planet and view the environment as a whole system whose events can negatively affect their future. This awareness feeds into their environmentally friendly behaviours and lifestyles as they adopt key ecological practices, namely recycling plastic, glass, paper and organic waste; using organic products and biodegradable materials; reducing waste (water, energy, food, etc.) and reducing plastic consumption.
The students interviewed believed that global green movements such as FFF, and climate activism in general, could help effectively combat climate change by alerting policy makers to the urgency of environmental degradation.
Comparative-effectiveness research of COVID-19 treatment: a rapid scoping review Objectives The COVID-19 pandemic has stimulated growing research on treatment options. We aim to provide an overview of the characteristics of studies evaluating COVID-19 treatment. Design Rapid scoping review. Data sources Medline, Embase and biorxiv/medrxiv from inception to 15 May 2021. Setting Hospital and community care. Participants COVID-19 patients of all ages. Interventions COVID-19 treatment. Results The literature search identified 616 relevant primary studies, of which 188 were randomised controlled trials, and 299 relevant evidence syntheses. The studies and evidence syntheses were conducted in 51 and 39 countries, respectively. Most studies enrolled patients admitted to acute care hospitals (84%) and included on average 169 participants, with an average age of 60 years, a study duration of 28 days, four effect outcomes and one harm outcome. The most common primary outcome was death (32%). The included studies evaluated 214 treatment options. The most common treatments were tocilizumab (11%), hydroxychloroquine (9%) and convalescent plasma (7%). The most common therapeutic categories were non-steroidal immunosuppressants (18%), steroids (15%) and antivirals (14%). The most common therapeutic categories involving multiple drugs were antimalarials/antibiotics (16%), steroids/non-steroidal immunosuppressants (9%) and antimalarials/antivirals/antivirals (7%). The most common treatments evaluated in systematic reviews were hydroxychloroquine (11%), remdesivir (8%), tocilizumab (7%) and steroids (7%). The evaluated treatment was favoured in 50% and 36% of the evaluations, according to the conclusions of the authors of primary studies and evidence syntheses, respectively. Conclusions This rapid scoping review characterised a growing body of comparative-effectiveness primary studies and evidence syntheses. The results suggest future studies should focus on children, the elderly ≥65 years of age, patients with mild symptoms, outpatient treatment, multimechanism therapies, harms and active comparators. The results also suggest that future living evidence synthesis and network meta-analysis would provide additional information for decision-makers on managing COVID-19.
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ A broad literature search and study selection yielded 915 study reports, including 616 relevant studies (188 randomised controlled trials) and 299 evidence syntheses.
⇒ Detailed charting of the study populations, interventions and outcomes of the included studies and reviews was conducted to analyse characteristics and trends in the included literature and to elucidate lessons for future research.
⇒ Practical implications for future research with respect to study design, populations, interventions, comparators, outcomes and methodological approaches were identified.
⇒ A semiautomated approach to study selection allowed for a very broad literature search, with approximately 290 000 titles/abstracts screened in about 40 person-hours over 2.3 weeks.
⇒ This is a scoping review and, as such, we did not assess the risk of bias of the included studies and evidence syntheses.
INTRODUCTION
The current global pandemic of COVID-19 has resulted in a high burden of disease and mortality worldwide. 1 2 The lack of effective treatments for COVID-19 has resulted in the almost constant production of studies and evidence syntheses evaluating potential treatment options, as illustrated by thousands of study protocols in clinical trial registries and hundreds of review protocols in systematic review registries. 3 4 Attempts to synthesise this evidence thus far have resulted in various scoping reviews focusing on single drugs or isolated drug classes. [5][6][7][8][9] A better understanding of the characteristics of the study populations, treatments and outcomes of this research is a prerequisite to the design and conduct of future comparative-effectiveness research. The objective of this rapid scoping review was to provide an overview of the characteristics of studies examining COVID-19 treatment.
METHODS
The conduct of the rapid scoping review was guided by the JBI (formerly Joanna Briggs Institute) guide for scoping reviews, alongside the World Health Organization (WHO) guide to rapid reviews. 10 11 Compared with a full scoping review, we used streamlined methods in this rapid scoping review (eg, single reviewers conducted study selection). An integrated knowledge translation approach was used to engage the knowledge users from Health Canada (MK) and the Public Health Agency of Canada (MP) throughout the conduct of the rapid scoping review, including during research question development, literature search, study inclusion, interpretation of results and drafting of the report. The protocol for the review was registered using the Open Science Framework (https://osf.io/ypz7x). The discussion section includes minor amendments to the conduct of the review from the original protocol. Reporting of results was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension to Scoping Reviews statement. 12 Our research question was 'What evidence exists on the treatments for COVID-19 in primary studies and reviews', which is appropriate for the scoping review methodology. 13
Patient and public involvement
Since this work was carried out as part of a rapid response to the COVID-19 pandemic, timelines did not allow for the participation of patients or members of the public in this rapid scoping review.
Literature search
Comprehensive literature searches and citation screening were used in combination to gather relevant evidence from MEDLINE, EMBASE and preprint servers (biorxiv/medrxiv). 14 The literature was initially searched from inception to 21 May 2020 and subsequently updated to 15 May 2021. Titles/abstracts were identified for screening using the Continuous Active Learning (CAL) tool, which uses supervised machine learning (see online supplemental appendix 1 for the description and performance of the tool). 14 For archives that could be retrieved in their entirety (eg, MEDLINE, preprint servers), the CAL tool applied broad relevant search terms (online supplemental appendix 1). This search was supplemented by a literature search conducted by an experienced librarian in EMBASE (online supplemental appendix 2). The literature search was not restricted by language or publication status.
Eligibility criteria
The eligibility criteria followed the PICOS framework and consisted of:
► Population: Individuals of any age who were clinically and/or laboratory diagnosed with COVID-19.
► Intervention: Any compounds under investigation in human clinical trials as potential COVID-19 therapies (online supplemental appendix 3). Chinese medicine and complementary and alternative medicine, either alone or in combination with these medications, were excluded.
► Comparator: Any of the interventions listed above, no intervention or placebo.
► Outcomes: Any reported outcome.
► Study designs: Primary studies of any design with a comparator group. Evidence syntheses of such studies were included, encompassing systematic reviews, scoping reviews, rapid reviews, meta-analyses and overviews of reviews.
Study selection
A streamlined approach to study selection was used for the rapid scoping review. In combination with manual screening by reviewers, the CAL tool was used to identify and rank the titles and abstracts most likely to meet the inclusion criteria. This process continued iteratively until none of the identified articles met the inclusion criteria. For manual screening, a screening form based on the eligibility criteria was prepared for reviewers to aid in making consistent judgements on article relevance. A pilot-test was conducted using a random sample of 10 titles/abstracts until reviewers reached at least 75% agreement. Subsequently, screening was completed by single reviewers.
Data charting and coding
A charting form was developed and calibrated among the entire review team using two randomly selected full-text articles to ensure a standard approach to data collection. Following successful completion of the pilot-test, included studies were charted by single reviewers and verified by a second reviewer to ensure accuracy. Methodological quality or risk of bias appraisal of the included studies was not conducted since this is a scoping review. 10 The items collected included study characteristics (eg, study duration, study design, country of conduct), patient characteristics (eg, type of diagnosis, mean age), intervention and comparator details (eg, type of intervention, dose, frequency, duration) and outcome measure details (eg, mortality, viral clearance and hospital admission). Pharmacological agents were grouped by their therapeutic category. 15 Study primary outcomes were grouped together to reflect the clinical, virological, respiratory, inflammatory, cardiological and olfactory status and measures of COVID-19. 16 17 The numbers of effect and harm measures were derived by counting the outcomes from the description of study outcomes. Authors' conclusions were coded into the following categories: favour treatment, favour control, indeterminate and other. 18 Pairs of reviewers conducted the data coding independently, with discrepancies reviewed and resolved through discussion.
Synthesis
The charted and coded data were summarised descriptively for all patient populations, interventions, comparators, outcomes and conclusion statements. The data were stratified by study design (randomised controlled trials vs non-RCTs) and review type (reviews conducted according to a review protocol or otherwise).
Data repository
All material related to this review, including EndNote databases, extracted data in MS Excel, coding categories and analysis procedures written in the statistical software R, is available at https://knowledgetranslation.net/comparative-effectiveness-research-of-covid-19-treatment-a-rapid-scoping-review-data-repository/.
RESULTS
Figure 2 displays when the studies became available online; on average, 48 primary studies per month were published from July 2020 to April 2021. Table 1 displays the characteristics of the 616 included studies of varying design, including randomised controlled trials (188 studies (31%)), retrospective cohort studies (304 (49%)) and prospective cohort studies (70 (11%)), among others.
The median study duration was 28 days and the median sample size was 169 participants. Public sources provided funding for about one-third of the studies; RCTs were more often funded by private sources (27% vs 3% for non-RCTs). The primary studies were conducted in 51 countries, including the USA (26%), China (17%), Italy (8%), Spain (7%), France (6%), India (4%), Iran (3%), the UK (3%) and Brazil (3%), among others (online supplemental table A1, online supplemental appendix 6).
Characteristics of included evidence syntheses
The evidence syntheses evaluated 518 treatment arms against 299 control arms (table 4). The treatment arms consisted of 115 unique treatment options (online supplemental table A6, online supplemental appendix 6). The most common treatment options were hydroxychloroquine (11%), remdesivir (8%), tocilizumab (7%), steroids (7%), convalescent plasma (6%) and lopinavir/ritonavir (5%), among others (table 4 and online supplemental table A6, online supplemental appendix 6). Table 5 displays the results of the treatment evaluation according to the authors' conclusions. Among the included primary studies and evidence syntheses, the conclusion was in favour of treatment in 50% and 36% of the evaluated treatment arms, respectively.
DISCUSSION
With respect to study population, existing studies put much emphasis on adult patients admitted to acute care hospitals. Future studies need to focus on children, older adults aged ≥65 years and patients with mild symptoms in community settings. Future study populations will need to reflect a broader range of age groups as the current pandemic evolves to affect younger age groups. 19 20 With respect to treatment, many studies and reviews evaluated antimalarial agents. Existing studies emphasised preventing and treating cytokine surge with steroids and non-steroidal immunosuppressants, including interleukin-6 inhibitors (eg, tocilizumab, sarilumab), an interleukin-1 antagonist (eg, anakinra), an anti-IL-1β monoclonal antibody (eg, canakinumab), a TNF-alpha inhibitor (eg, adalimumab) and Janus kinase inhibitors (eg, baricitinib, ruxolitinib). Future studies may need to explore treatment for patients not responding to these agents, such as immunomodulators (eg, thymosin-α1). Existing studies put much emphasis on monotherapy; future studies need to evaluate combination therapy that addresses the multiple aspects of COVID-19, such as the virological, respiratory, inflammatory and cardiological ones. Future studies may also need to explore outpatient treatment for patients with mild symptoms, and treatment options not frequently evaluated in existing studies, such as therapeutic anticoagulants. With respect to comparators, most existing randomised controlled trials used placebo comparators while most observational studies used standard of care as the comparator; future studies may consider active treatments as comparators, especially when evaluating treatments aiming to produce incremental improvement over effective treatments. Methodological issues related to the selection and delineation of comparators in studies evaluating combination therapies deserve attention.
For example, a study that evaluated a multimechanism approach, with medications targeting early immunomodulation, anticoagulation and viral suppression to prevent catastrophic cytokine release syndrome, encountered large variation in the clinical characteristics of study participants and in the standard-of-care comparators across the five participating hospitals in two countries, including differences in disease severity and different doses of colchicine and types of steroids used across the comparison groups. 17 With respect to outcomes, about one-third of the included studies used mortality as the primary outcome. Tracking this outcome may require a sufficiently long study duration, perhaps longer than the median duration of less than a month observed among existing studies, especially in patients with prolonged respiratory problems, suggesting longer follow-up durations for future studies. Of note, few existing studies used composite endpoints involving death, such as endpoints combining death with intubation and intensive care admission. Such endpoints seem particularly suitable for capturing the respiratory, immunological and cardiovascular aspects of COVID-19, as well as mortality. Few existing studies focused on harms due to treatment, and among those that evaluated benefits and harms, the median number of reported harms was only one; future studies need to put more emphasis on harm evaluation. Existing RCTs put much emphasis on the use of clinical status/measures as primary outcome measures. Future trials may consider other primary outcomes that are relevant to patients, such as pneumonia, acute respiratory distress syndrome, multiorgan failure and septic shock, among others. With respect to study design, our results showed a breakdown of 30% and 70% for RCTs and observational studies, respectively. Future trials are needed for evaluating combination therapies. Observational studies will remain pertinent in the evaluation of combination therapies, especially as rich data become available with their use in practice. Our review excluded qualitative studies, but we wish to emphasise the importance of these studies in elucidating the experience of COVID-19 patients. With respect to evidence synthesis, we identified a small number of meta-analyses conducted without an associated systematic review and review protocol (n=13). This practice needs to be scrutinised because of the associated high risk of bias in the results, which could be wrong yet appear convincingly precise. 21 Existing evidence syntheses mostly evaluated monotherapy; future evidence syntheses will need to include data from the evaluation of combination therapy. The number of existing network meta-analyses was low (n=4); future network meta-analyses are needed to identify effective treatments given a plethora of treatment options, as well as to identify effective component treatment options addressing multiple aspects of COVID-19. 22 Given the growing literature, there is a definitive need for living evidence synthesis, in which the synthesis is updated regularly as new studies become available. 23 The results suggest that monthly updates may become necessary. With respect to the growing literature, the use of automation tools like CAL for study selection will become essential to ensure a highly sensitive yield of relevant studies, responsive timelines for decision-making and a reduced workload for reviewers. In this rapid scoping review, we used a continuous active learning approach that integrates machine learning with feedback from reviewers.
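As a rough illustration only (not the authors' actual CAL tool, whose internals are not described here), the following base-R sketch shows the shape of such a ranking-and-feedback loop; the feature matrix, labels, seed indices and batch size are all hypothetical.

```r
# Toy continuous-active-learning loop: fit a classifier on the titles/abstracts
# screened so far, rank the unscreened pool by predicted relevance, send the
# top batch to reviewers, and repeat until a top batch yields no includes.
cal_screen <- function(features, labels, seed_idx, batch = 50) {
  colnames(features) <- paste0("f", seq_len(ncol(features)))
  labelled <- seed_idx
  repeat {
    pool <- setdiff(seq_len(nrow(features)), labelled)
    if (length(pool) == 0) break
    train <- data.frame(y = labels[labelled], features[labelled, , drop = FALSE])
    fit <- glm(y ~ ., data = train, family = binomial)
    scores <- predict(fit, newdata = data.frame(features[pool, , drop = FALSE]),
                      type = "response")
    top <- pool[order(scores, decreasing = TRUE)][seq_len(min(batch, length(pool)))]
    labelled <- c(labelled, top)       # reviewers screen this batch next
    if (all(labels[top] == 0)) break   # stop: best-ranked batch all irrelevant
  }
  labelled                             # indices seen by human reviewers
}
```

In a real tool, `labels[top]` would come from reviewer decisions rather than a pre-existing vector, and the classifier would be retrained on text features of the records; the loop structure, not the classifier choice, is the point of the sketch.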
The CAL approach allowed the screening of approximately 290 000 titles/abstracts in about 40 person-hours over 2.3 weeks. We believe this approach is indispensable for future reviews involving a large body of literature. It called for slight changes in our review conduct and reporting, notably the reporting of the number of titles/abstracts excluded by the automation tool in the flow chart (see figure 1). There are several limitations of this review. This is a scoping review, and as such, we did not assess the risk of bias in the included studies and reviews. Initially, the review protocol called for a borrowing-strength-of-evidence approach, including studies evaluating treatment for SARS and MERS. The initial literature search in May 2020 included electronic databases, trial registries, the Cochrane Library and other grey literature sources. Given the growing literature on COVID-19 by May 2021, the current review was focused only on COVID-19 treatment, with relevant studies identified from MEDLINE, EMBASE and preprint servers. In this scoping review, the evaluated treatment options appeared to attain a reasonable chance of being more effective than their comparators, approximately 50% and 30% according to the authors' conclusions from the included studies and reviews, respectively. However, we did not extract outcome data or combine them to verify the authors' conclusions. To provide a broad overview of the comparative effectiveness research on COVID-19 treatment, we included reports from preprint servers, but these reports had not gone through peer review. Despite these limitations, the methods used in this review were carefully selected to address the needs of our knowledge users from Health Canada and the Public Health Agency of Canada. In addition, we made the material from this rapid scoping review available in an online data repository, as the data may be useful for conducting systematic reviews of specific therapies or for updating the current review. 24 CONCLUSIONS This rapid scoping review characterised a growing body of comparative-effectiveness studies and evidence syntheses evaluating hundreds of monotherapy and combination therapy options addressing the multiple sequelae of COVID-19. The results suggest future studies in children, the elderly (eg, ≥65 years of age) and patients with mild symptoms, with additional data on outpatient treatment, multimechanism therapy, harms and active comparators. The results also suggest that future living evidence synthesis and network meta-analysis would provide additional information for decision-makers on managing COVID-19.
Kiwi fruit (Actinidia chinensis) quality determination based on surface acoustic wave resonator combined with electronic nose In this study, an electronic nose (EN) combined with a 433 MHz surface acoustic wave resonator (SAWR) was used to determine Kiwi fruit quality over 12 days of storage. EN responses to Kiwi samples were measured and analyzed by principal component analysis (PCA) and stochastic resonance (SR) methods. SAWR frequency eigenvalues were also measured to predict freshness. The Kiwi fruit samples' weight loss index and human sensory evaluation were examined to characterize their quality and freshness. Kiwi fruit quality predictive models based on EN, SAWR, and EN combined with SAWR were developed, respectively. Weight loss and human sensory evaluation results demonstrated that Kiwi fruit quality and overall acceptance decline during storage. Experimental results indicated that the PCA method could qualitatively discriminate all Kiwi fruit samples with different storage times. Both the SR and SAWR frequency analysis methods could successfully discriminate samples with high regression coefficients (R = 0.98093 and R = 0.99014, respectively). The validation experiment results showed that the mixed predictive model developed using EN combined with SAWR presents higher quality prediction accuracy than the models developed either by EN or by SAWR alone. This method exhibits advantages including high accuracy, non-destructiveness and low cost, and it provides an effective way for rapid fruit quality analysis. Introduction Kiwi fruit (Actinidia chinensis) is a valuable source of vitamins, 1,2 fats, proteins, amino acids, dietary fibers and rich minerals (such as calcium, iron, pectin, etc). It is widely cultivated in South Asia and Southeast Asia, and China is its main producing area. Apart from its edible and medicinal values, the emodin extracted from its root has broad applications in medicine, health care and other fields. 3 The leaching solution extracted from its branches is a good source of adhesive colloid. 2,[4][5][6] Moreover, owing to its ability to help regulate emotion and enhance appetite, Kiwi fruit is well suited to patients suffering from gastric disorders, hypertension and other diseases. Ripe Kiwi and other fruits are vulnerable to many factors arising from both the environment and the fruit itself. As a result, fruit quality declines or even rot occurs during storage. 7,8 Human sensory evaluation can directly discriminate Kiwi fruits of different qualities. However, its results are often affected by various factors, such as individual preference, health situation and physiological age. Physical/chemical examination methods, such as firmness and microbial measurements, effectively reveal the fruit's quality condition. Nevertheless, these measurements have drawbacks including imprecise operation, low repeatability, high cost and excessive time consumption, which make them unsuitable for rapid quality analysis. [9][10][11] Instrumental analysis methods, such as gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS), characterize food quality with precise analysis capabilities. However, these methods also have disadvantages, such as high cost and long analysis times, and skilled operators are required to perform the instrumental analytical experiments. 12 There is therefore an urgent demand for a rapid quality analysis method with quick response, high accuracy and low cost in the fruit sector.
Surface acoustic wave (SAW) devices were first proposed in the 1970s and provide the basis for making highly integrated devices with small size and high sensitivity. [13][14][15][16][17][18] So far, there have been many reports of SAW-based detection applications across many fields, especially in food analysis, such as monitoring the growth of bacteria, detection of pancreatic lipase, and biomedical analysis. [19][20][21][22] Currently, more and more research focuses on the chemical/biological modification of surface acoustic wave resonators (SAWRs). If a specific reaction (such as an antigen-antibody or receptor-ligand interaction) occurs during the process, some property of the wave (velocity, for example) changes accordingly as the SAW passes through the piezoelectric substrate resonator, which enables the characterization of the test sample's species and concentration. However, this technique also has some defects; for example, SAW devices can often be used only once after modification, which results in high testing costs. [23][24][25] The EN technique, which simulates the human olfactory system, is a method for odor fingerprint detection. 26 A traditional EN system consists of 3 main functional parts: a gas sensor array, signal preprocessing and pattern recognition. It takes gas as the analysis object and obtains a characteristic signal. Because it can capture and examine aromatic substances from specific positions in real time, it is figuratively called an electronic nose. Owing to these particular functions, it has been applied to food, cosmetics, petrochemicals, packaging materials, environmental inspection, clinical analysis, chemistry, etc. [27][28][29] In this study, an EN combined with a 433 MHz SAWR was utilized to examine the response signals of Kiwi fruit with different storage times. Human sensory evaluation and weight loss examinations were performed to characterize Kiwi fruit freshness. The PCA method could qualitatively discriminate Kiwi samples with different storage days. Both the SR and SAWR frequency analysis methods could discriminate all samples with high regression coefficients. The validation experiment demonstrated that the predictive model built on EN combined with SAWR presents higher prediction accuracy than the models built on EN or SAWR alone. The proposed method has some unique advantages, including fast response, non-destructiveness and high accuracy, and is promising for fruit quality analysis. Results and Discussion Human sensory evaluation The Kiwi fruit human sensory evaluation result is shown in Figure 1a. The Kiwi fruit samples' initial score is set at 5, and a score of 3 is regarded as the limit of overall acceptance. There is no obvious change observed in the Kiwi fruit samples within the first 4 days. After that, a significant quality decline trend appears in the following days. On day 8, the preference score is 2.98 ± 0.13, which indicates that the Kiwi fruit is severely spoiled and has lost its commercial and edible value. Owing to the influences of microbial infection and the fruit's own physiological metabolism, Kiwi fruit quality changes significantly with increasing storage time, including increasing loss of moisture, color and glossiness, increased roughness, softer touch, more severe rot and many more cracks. Therefore, human sensory evaluation can classify Kiwi fruits of different qualities. Weight loss The Kiwi fruit weight loss result is shown in Figure 1b. No significant change can be seen within the first 2 days.
After that, it shows a continuous increasing trend and reaches approximately 5% on day 12. During storage, living cells in the Kiwi fruit still carry on strong respiration along with some internal physical/chemical reactions. As a result, considerable moisture is lost from the Kiwi fruit, so that the weight loss increases with the number of storage days. Similar results have been reported for other fruits. [30][31][32] Freshness predictive model based on SAWR measurement result The SAWR frequency measurement is performed as follows: first, connect the Kiwi sample to the SAWR system; then use a frequency meter to collect its frequency value; next, transfer the data to a computer through an RS-232 communication interface. The data can be read in real time by in-house PC software. The SAWR frequency detection result is shown in Figure 2a. With increasing storage time, the Kiwi fruit's SAWR frequency increases continuously. The initial frequency value is about 260 MHz, while it reaches about 440 MHz on day 9. Different frequency values are obtained, corresponding to different storage days. The result shows that the SAWR detection system has high sensitivity toward Kiwi fruit samples with different storage times. Because Kiwi fruits of different qualities have different dielectric characteristics, a sample significantly influences the SAWR's operating frequency when it is in series with the SAWR circuit. According to the SAWR frequency formula (equation (9) in Materials and Methods), the SAWR's frequency value eventually rises owing to the significant increase of R_e and the decrease of the conductivity (G_e = 1/R_e). Although the dynamic capacitance parameter (C_e) also changes during the whole process, it has a weaker impact on the SAWR frequency response than R_e, so it can be neglected. According to the result shown in Figure 2a, the relationship between output frequency (Freq) and storage time (Time) is obtained by linear fitting and expressed as equation (1). After inverting equation (1), the Kiwi fruit freshness predictive model based on SAWR is obtained and expressed as equation (2). With the help of equation (2), the storage time of Kiwi fruit can be predicted from the SAWR measurement. To validate the robustness of the predictive model, a batch of Kiwi fruit samples with unknown storage time was examined using the SAWR system, and the known storage times were taken as true values. The predicted values were obtained by inputting the detected frequency values into equation (2). The linear fitting result between predicted value and true value is shown in Figure 2b with regression coefficient R² = 0.865, which shows that SAWR alone cannot efficiently discriminate the Kiwi fruit samples and that some samples produce major errors. The PCA result, freshness predictive model, and the validation experiment results based on EN The EN's original responses to the Kiwi fruit samples are shown in Figure 3a. The volatile gases in the headspace of the samples are drawn into the EN gas chambers and sensed by the functional materials in the gas sensors. The specific absorption of the functional materials for specific gas species induces changes in their electrical characteristics, and the responses rise accordingly with increasing gas concentration, so the signals induced by these electrical changes can be used to characterize gas concentrations. Moreover, the 8 gas sensors give different responses because of their different sensing abilities for specific gas species, so the EN sensor array forms a different response pattern for Kiwi fruits with different storage days.
The EN system's unique capabilities have been confirmed in applications such as the analysis of aroma compounds of commercial cider vinegars, 33 the composition of commercial truffle-flavored oils, 34 and the detection of adulteration in cherry tomato juices, 35 etc. All sensors' initial responses to the Kiwi fruit are close to zero. All sensors' response values increase gradually and finally reach their individual stable values. Sensor S4 presents the maximal stable value (about 0.095 V). The final stable values of S1, S5, S7 and S6 are about 0.058, 0.028, 0.020 and 0.010 V, respectively, while the remaining 3 sensors (S3, S8 and S2) present weak responses to all samples. The Kiwi fruit PCA result is shown in Figure 3b. The first principal component (PC1) and the second principal component (PC2) capture 91.06% of the data variance. Five sensors' response values (S1, S4, S5, S6 and S7) to the Kiwi fruit are chosen as the samples' overall freshness eigenvalues. Kiwi fruit samples with different storage days can be well distinguished by the PCA method. However, this method is not suitable for quantitative discrimination. 36,37 The Kiwi fruit SNR spectrum calculated by SR, as a function of external noise intensity, is shown in Figure 3c. Derivative values arise before the formation of the eigen peaks for Kiwi fruit samples with different storage days. After that, the SNR value increases gradually with increasing stimulating noise intensity. Each eigen peak appears at a noise intensity of about 208. Each sample's SNR maximum (SNR-Max) was extracted as its freshness eigenvalue. Furthermore, a batch of Kiwi samples with unknown storage time was examined using the EN system. The Kiwi fruit SNR spectrum calculated by SR was input into equation (4) and the predicted values were calculated. The result is shown in Figure 3e with regression coefficient R² = 0.939, which indicates that the SNR spectrum can achieve the goal of freshness prediction for Kiwi fruits, but some samples show low prediction accuracy. Freshness predictive model and the validation experiment results based on EN combined with SAWR Based on the properties of the above two models, a new predictive model combining the EN and SAWR systems was proposed to predict Kiwi fruit storage time. Two confidence coefficients (P₁ and P₂) were preset. By inputting the Freq, SNR and Time values into equation (5), the result is P₁ = 0.5153 and P₂ = 0.4726. By inputting the P₁ and P₂ values into equation (5), a mixed predictive model based on EN combined with SAWR is built; the result is shown in equation (6). Next, a batch of Kiwi fruit samples with unknown storage time was examined using the EN and SAWR systems. A series of SNR and SAWR frequency eigenvalues were recorded and input into equation (6). The linear fitting relationship between predicted value and true value is shown in Figure 4. The regression coefficient R = 0.998 suggests that the EN in combination with the SAWR system can better predict Kiwi fruit quality and freshness. In addition, the combination of EN and SAWR techniques has also exhibited significant benefits in many other areas. 38,39 From the perspective of non-destructive detection, 3 predictive models were built to predict Kiwi fruit quality. The SAWR detection result reflects the Kiwi fruit's internal information, while the EN analysis result reveals its external message; thus, this combination can better capture the sample's overall change during storage. This technique could be used to guide the choice of a fruit's optimal harvest time, which would help to reduce the economic losses caused by the decline of fruit nutrition.
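Because equations (1)-(6) themselves did not survive in this text, the following minimal sketch reconstructs the pipeline they describe: fit a linear Freq-Time model and invert it, do the same for the EN SNR-Max eigenvalue, then combine the two predictors with confidence weights. The data values and the least-squares choice of P₁ and P₂ are illustrative assumptions, not the paper's actual equations.

```python
# Sketch of the SAWR + EN mixed freshness model. All data and the exact
# functional forms are illustrative assumptions.
import numpy as np

# Hypothetical calibration data (storage day, SAWR frequency in MHz, EN SNR-Max).
days = np.array([0., 2., 4., 6., 8., 10., 12.])
freq = np.array([260., 290., 320., 350., 380., 410., 440.])   # illustrative
snrm = np.array([10.2, 9.8, 9.1, 8.5, 7.9, 7.2, 6.6])         # illustrative

# SAWR model: fit Freq = a*Time + b, invert to Time_SAWR = (Freq - b)/a.
a, b = np.polyfit(days, freq, 1)
def time_sawr(f):
    return (f - b) / a

# EN model: fit SNR-Max = c*Time + d, invert to Time_EN = (SNR - d)/c.
c, d = np.polyfit(days, snrm, 1)
def time_en(s):
    return (s - d) / c

# Mixed model: Time = P1*Time_SAWR + P2*Time_EN, with the confidence
# coefficients chosen here by least squares against the known times.
A = np.column_stack([time_sawr(freq), time_en(snrm)])
(P1, P2), *_ = np.linalg.lstsq(A, days, rcond=None)

def time_mixed(f, s):
    return P1 * time_sawr(f) + P2 * time_en(s)

print(f"P1 = {P1:.4f}, P2 = {P2:.4f}")
print(f"predicted day for an unseen sample: {time_mixed(335.0, 8.8):.2f}")
```

Weighting two imperfect predictors this way lets each compensate for the other's errors, which is consistent with the mixed model's higher validation accuracy reported above.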
Moreover, we plan to conduct further research in the near future and apply this technique to judging a fruit's best picking period. Materials and Methods Kiwi fruit samples Kiwi fruit samples were purchased from Gouzhuang wholesale fruit market (Hangzhou, China). The samples were nearly at the same level of quality (in terms of size and weight). Human sensory evaluation Following the method previously described, 40 Kiwi fruit sensory evaluation was carried out by 6 experienced panelists in our lab. The number of voters is set at k, k ∈ (1,10). Kiwi fruit quality is divided into m levels, and the score of a specific level is set at h_j, j ∈ (1, m). Kiwi fruit attributes are divided into n elements, and a specific element is set at u_i, i ∈ (1, n). The contributory weight of each attribute, determined by pairwise comparison, is set at x_i (Σ x_i = 1). If there is a specific relationship between the two objects h and u, a relation matrix f is constructed, and the overall acceptability of the Kiwi fruit is then calculated from it by the weighted grade method. The Kiwi fruit sensory evaluation scheme is shown in Table 1. SAWR testing device The SAWR system and its load circuit were developed in-house in our lab. As shown in Figure 5a, ST-cut quartz was used as the piezoelectric base material, and a precision photolithographic process was conducted to make a 433 MHz high-frequency, single-ended SAWR. Its outer size was 4.5 mm × 11 mm, and it was vacuum packaged. Figure 5b is a schematic diagram of the detection system, which consists of the SAWR and its load circuit, a stabilized power supply (DF1741SB3A, Ningbo CSI Electronics Co., Ltd) and a universal counter (EE 3386, Jiangsu New Union Technology Co., Ltd). The experiment was performed as follows: connect the Kiwi sample to the SAWR circuit, put them into a metal shield box, then capture the load frequency of the SAWR with the universal counter and transfer it to the computer via the RS-232 communication interface for subsequent analysis. The equivalent circuit model of the SAWR in series with a Kiwi fruit sample is shown in Figure 5c, where C_o is the static capacitance, and L_s, C_s and R_s represent the dynamic inductance, capacitance and resistance of the SAWR, respectively. When a Kiwi fruit sample is connected to the SAWR, C_e is the equivalent dynamic capacitance and R_e is the equivalent dynamic resistance of the sample. The frequency of the SAWR loaded with a Kiwi fruit sample can be calculated by equation (9). In equation (9), the SAWR's unloaded frequency is F₀ = 1/(2π√(L_s C_s)), Y is the amplification circuit's phase parameter, the analyte's conductivity is G_e = 1/R_e, and the electrode capacitance is C_e = kε + C_p, where ε is the permittivity and C_p is the parasitic capacitance between the wires. These parameters are highly stable, so R_e, C_e and ε become the decisive factors for the oscillation frequency. Thus, if Kiwi fruit samples of different quality are connected to the circuit, the changing parameters (R_e, C_e and ε) directly lead to differences in the SAWR's working condition and frequency. With the rapid development of the SAW technique, it has been widely employed, for example in the detection of ammonia, 41 high-speed gap measurement 42 and hydrogen detection. 43 EN detection system In recent years, the EN detection technique has aroused increasing research interest, especially in food, such as vinegars, 44,45 truffle-flavored oils, 34 cherry tomato juices 46 and Chinese green tea. 47
As shown in Figure 6, the EN system consists of 3 main components: signal control and collection (U1), sensor arrays (U2) and the gas supply device (U3). The experiment was performed as follows: first, open the clean pump and valve 2 to draw in clean air and wash all sensors. Next, close the clean pump and valve 2 once all sensors' responses have stabilized at the baseline. Then put the Kiwi fruit sample into a clean vial and seal it with parafilm. After standing for 30 min, the sampling probe and pneumatic balancer were inserted simultaneously. The EN system was then turned on, and the sampling time lasted 45 s. The pneumatic balancer blocked impurity gases and admitted clean gas into the vial through active carbon, which maintained the pressure balance. Eight metal oxide semiconductor (MOS) gas sensors were adopted to constitute the array units; their detailed parameters are shown in Table 2. Weight measurement A Mettler Toledo AL104 electronic balance was used to measure the Kiwi fruit samples' weight during the 12-day experiment, and each sample's weight loss percentage (%) was reported with respect to its initial weight. PCA Principal component analysis (PCA), as a pattern recognition technique, has proved to be effective for discriminating between the responses of an e-nose to complex gases, [48][49][50][51] so the PCA method was used to analyze the EN data. SR Stochastic resonance (SR) is a non-intuitive phenomenon that provides signal enhancement in a nonlinear noisy system, and it attracts more and more attention in the field of signal processing. 52-54 The SNR of the output signal is usually used to characterize SR. Three elements are present in an SR system: a bistable system, an input signal and an external noise source. Typically, an overdamped Brownian particle driven by a periodic force in a bistable potential well is used to represent the dynamics of the system, as in equation (10), where a and b are real parameters and the bistable potential is V(x) = 0.25 a x⁴ − 0.5 b x² (11). Thus, equation (10) can be rewritten as equation (12). What most commonly reflects SR's characteristics is the SNR, and here we define SNR = S(ν)/S_N(ν), where S(ν) and S_N(ν) represent the signal spectral density and the noise intensity within the range of the signal frequency, respectively. Conclusions A rapid freshness predictive model for forecasting Kiwi fruit storage time is proposed in this study. The Kiwi fruit weight loss percentage increases with storage time, which indicates that the moisture loss in the samples is significant. Human sensory evaluation also demonstrates that Kiwi fruit's overall acceptance declines significantly during the whole experiment. Three freshness predictive models for Kiwi fruit, based on SAWR, EN, and EN combined with SAWR, were built; the mixed model achieved R = 0.998. Comparing the three models' prediction accuracy, it is clear that the mixed predictive model presents higher accuracy than the models developed from EN or SAWR alone, and the validation experiments confirm this fact. Furthermore, the proposed technique lowers the detection cost of the SAWR. The SAWR detection method proposed in this study has the following advantage: the test sample works as the SAWR load, while the SAWR device works as a stable frequency supply, which avoids single-use waste. From another aspect, variations of the working frequency exist among most SAWR devices, even those produced in the same batch. Therefore, this method eliminates some systematic errors due to the replacement of the SAWR, which contributes to improving experimental accuracy.
The SAWR detection result reflects the Kiwi fruit's internal information, while the EN analysis result reveals the sample's external message, so this combination can monitor and capture the real changes of Kiwi fruit in real time during storage. With its rapid response, good repeatability and low cost, this method is promising for judging a fruit's best harvest time. Disclosure of Potential Conflicts of Interest No potential conflicts of interest were disclosed.
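As a closing numerical illustration of the stochastic-resonance machinery described in Materials and Methods, the sketch below integrates the overdamped bistable Langevin equation and estimates the output SNR at the driving frequency. All parameter values are illustrative assumptions, not those used in the paper.

```python
# Overdamped bistable Langevin dynamics: dx/dt = b*x - a*x^3 + A*sin(w*t) + noise.
# Standard SR toy model; all parameter values below are illustrative.
import numpy as np

def sr_snr(D, a=1.0, b=1.0, A=0.3, w=0.1, dt=0.01, n=2**17, seed=0):
    """Integrate the Langevin equation at noise intensity D (Euler-Maruyama)
    and return an SNR estimate at the driving frequency w."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    kicks = rng.normal(0.0, np.sqrt(2.0 * D * dt), n)
    x = np.empty(n)
    x[0] = 1.0
    for i in range(1, n):
        drift = b * x[i-1] - a * x[i-1]**3 + A * np.sin(w * t[i-1])
        x[i] = x[i-1] + drift * dt + kicks[i]
    # Power spectrum; SNR = signal-bin power over the local noise background.
    spec = np.abs(np.fft.rfft(x - x.mean()))**2
    omega = np.fft.rfftfreq(n, dt) * 2.0 * np.pi
    k = np.argmin(np.abs(omega - w))              # bin of the driving frequency
    background = np.median(spec[max(1, k - 20):k + 20])
    return spec[k] / background

# SNR typically peaks at an intermediate noise level -- the SR signature.
for D in (0.05, 0.2, 0.5, 1.0):
    print(f"D = {D:.2f}  SNR ~ {sr_snr(D):.1f}")
```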
An unusual pulmonary complication of cytomegalovirus infection in a renal transplant recipient Introduction Bronchiolitis obliterans organizing pneumonia (BOOP) is a clinicopathological entity occurring in the clinical setting of interstitial pneumonia [1]. The occurrence of BOOP in the post-transplant period has been well described in lung [2] or bone marrow transplantation [3] but remains rare in renal transplantation [4]. We report here, to the best of our knowledge, the first case of BOOP secondary to CMV infection in a renal transplant recipient. Intravenous (IV) ganciclovir was effective in eradicating the virus but ineffective in improving the pulmonary status of the patient. After prednisone therapy, the patient's pulmonary symptoms and radiographic findings rapidly improved. Case report A 59-year-old woman with end-stage renal failure due to systemic lupus erythematosus was admitted to our unit in October 2003 for her first renal transplantation. Her past history was uneventful. She was a non-smoker. The donor was a 31-year-old woman and there were five HLA donor-recipient mismatches. Initial immunosuppression consisted of sequential quadruple therapy using induction by antithymocyte globulins followed by steroids, mycophenolate mofetil and tacrolimus (FK). The cytomegalovirus (CMV) status was donor positive and recipient negative. Prophylactic treatment with valacyclovir was then given for the first 3 months post-transplant. Apart from an acute episode of functional renal insufficiency, the post-transplant period was uncomplicated. The baseline creatinine level was 0.95 mg/dl (84 µmol/l). Seven days after the introduction of IV ganciclovir, the patient developed dyspnoea, a severe dry cough and fever (39 °C). Hypoxaemia was noted (SaO₂ 85%). Clinical examination of the patient showed acute respiratory insufficiency with diffuse crackles. Chest X-rays showed diffuse interstitial patchy and nodular opacities involving all the pulmonary lobes. Chest computed tomography revealed that these opacities were bronchocentric and predominantly located at the peripheral part of the lung, and showed small nodules and thickening of the walls of multiple bronchi (Figure 1). Laboratory tests revealed an inflammatory syndrome (C reactive protein: 179 mg/l). The lymphocyte count was 3100/mm³. The plasma creatinine level remained normal. The results of the latex test, antinuclear antibodies and antineutrophil cytoplasmic antibodies were all negative. CMV antigenaemia was still decreasing at 152/200 000 cells. Repeated sputum and blood cultures, and the urinary test for Legionella antigen, were negative. Empirical antibiotic therapy using ceftriaxone and ofloxacin was then started. This treatment was ineffective. Thus, flexible bronchoscopy and bronchial washing were performed.
Cellularity analysis of the bronchoalveolar fluid showed 650 000 cells/ml with 72% lymphocytes, 8% neutrophils, 13% macrophages and 7% unidentified cells. Special staining showed no acid-fast bacilli, Pneumocystis carinii or other microorganisms. However, the polymerase chain reaction (PCR) for CMV was positive in the bronchial fluid. A diagnosis of BOOP secondary to CMV pneumonitis was suspected. To confirm this diagnosis, transbronchial biopsies were performed. Unfortunately, the biopsy specimens were not contributive because of their small size, and an open-lung biopsy was not considered because of the respiratory state of the patient. Antibiotics were stopped and corticosteroid therapy was immediately started at 1 mg/kg per day. The clinical status of the patient then dramatically improved. Apyrexia was obtained in 24 h and normal respiratory function was recovered within 10 days. The inflammatory syndrome disappeared in 10 days and CMV antigenaemia was negative after 22 days of ganciclovir therapy. Steroids were progressively tapered off and stopped after 1 month of treatment. Four years later, no other pulmonary problems have occurred, and the patient has done well. Discussion CMV infection is a relatively frequent complication after renal transplantation, with an incidence of 10-25% [5]. Clinical manifestations of the disease are numerous, including fever, flu-like syndrome, leucothrombopaenia, hepatitis and colitis. CMV pneumonitis is a rare entity in renal transplant recipients [6], with some fatal cases described. BOOP is a major reparative response of the pulmonary tissue to injury, consisting of incomplete resolution of inflammation in the alveoli and the distal terminal bronchioles. BOOP may be idiopathic or have varied aetiologies, as summarized in Table 1. Histologically, BOOP appears in the peri-bronchiolar areas with alveolar filling by loose fibroblastic tissue. Progressively, the lesions may become more organized and diffuse, but with preserved adjacent areas and preserved lung architecture. Thus, a lung biopsy is necessary to confirm the diagnosis of BOOP. A transbronchial biopsy is frequently inadequate and an open-lung biopsy is needed. The occurrence of BOOP in the post-transplant period has been well described in lung [1,2] or bone marrow transplantation [3] but remains rare in renal transplantation [6,7], with only a few cases described, most of them following the use of proliferation signal inhibitors [8]. As transbronchial pulmonary biopsies were not contributive in our patient, we cannot confirm the diagnosis. However, the clinical setting, the radiological findings, the cytology of the bronchoalveolar fluid, the absence of improvement of pulmonary symptoms despite effective therapy for CMV infection (eliminating simple CMV pneumonitis) and the dramatic improvement of the patient's pulmonary status after steroid therapy were clearly in favour of this diagnosis. Moreover, the search for a differential diagnosis of interstitial lung injury was twice negative in this case. An open-lung biopsy was not performed owing to the respiratory distress syndrome and the probable need for mechanical ventilation after biopsy. The physiopathology of BOOP after solid-organ transplantation remains to be clarified. Although the implication of CMV has been suspected in the physiopathology of BOOP after lung transplantation [9], nothing has been published so far for kidney transplantation.
We report here, to the best of our knowledge, the first case of BOOP secondary to CMV pneumonitis in a renal transplant recipient. In our observation, BOOP occurred in the days following CMV infection, and the PCR for CMV in the bronchoalveolar fluid was positive. The simultaneity between CMV infection and the occurrence of lung injury in this particular case argues in favour of post-viral BOOP. Moreover, no other cause of BOOP was found at the time of the investigation. The dramatic improvement of pulmonary symptoms and chest CT after steroid therapy is consistent with previously published results. In a recent publication concerning 57 patients with BOOP, Krishnamohan et al. found complete resolution in 59% and partial resolution in 30% of cases after steroid therapy [10]. In conclusion, BOOP is a rare entity after renal transplantation, and its exact incidence and prevalence are not known. Diagnosis must be prompt because of the need for specific treatment and the potential severity of the condition. However, BOOP may be overlooked by physicians because of unfamiliarity, its nonspecific presentation and the need for a biopsy to diagnose the condition. Therefore, clinicians should be aware that unexplained and atypical pulmonary manifestations in a renal transplant patient could be due to BOOP.
Study on the space-time structure of the Higgs jet with the HBT correlation method in e+e- collisions at $\sqrt{s}$ = 250 GeV The space-time structure of Higgs boson decay is carefully studied with the HBT correlation method using e$^+$e$^-$ collision events produced with the Monte Carlo generator PYTHIA 8.2 at $\sqrt{s}$ = 250 GeV. The Higgs boson jets (Higgs-jets) are identified by H-tag tracing. Upper bounds on the Higgs boson radius and decay lifetime are derived from the HBT correlation of its decay final-state pions inside Higgs-jets in the e$^+$e$^-$ collision events: $R_H \le 1.03\pm 0.05$ fm and $\tau_H \le (1.29\pm0.15)\times 10^{-7}$ fs. This result is consistent with CMS data. I. INTRODUCTION The Standard Model (SM) of particle physics [1][2][3][4] has been tested by many experiments over the last four decades and has successfully described high energy particle interactions. However, the mechanism that breaks electro-weak symmetry in the SM had long not been verified experimentally. In 1964, a new mechanism was proposed by several research groups to explain the origin of the mass of elementary particles. This mechanism [5][6][7][8][9][10] implies the existence of a scalar particle, the SM Higgs boson. The search for this particle has been a dominant part of the history of collider experiments over the last few decades, spanning the Large Electron Positron (LEP) Collider at CERN, the Tevatron at Fermilab, and the Large Hadron Collider (LHC). In the summer of 2012, the ATLAS Collaboration and the CMS Collaboration at CERN announced a new particle [11,12], a Higgs-like boson. The discovery of the Higgs boson has ushered in a new era of high energy physics. The Standard Model has been proved to be essentially correct, at least as a low-energy effective field theory, in its description of electroweak symmetry breaking as due to a light, weakly coupled scalar boson. However, the physics giving rise to the Higgs potential remains completely unclear. We expect that the three possible future colliders, the ILC [13], FCC-ee (formerly known as TLEP) [14], and CEPC (http://cepc.ihep.ac.cn), can offer clues to electroweak physics, including the Higgs boson [15]. The CEPC e + e − collider will bring a major leap in the precision measurement of the Higgs boson, and will enable electroweak measurements with the Z and W bosons. In this paper, we study the properties of the Higgs boson in e + e − collisions at √s = 250 GeV using the PYTHIA 8.2 generator and determine the size of the Higgs boson by HBT correlation. Using HBT correlations, as proposed by Hanbury Brown and Twiss [16,17], the (angular) diameter of stars and radio sources in the universe was successfully determined by measuring the intensity correlations between separated telescopes. Likewise, in particle physics, one can in principle use Bose-Einstein correlations between identical particles to assess the spatial scale of the emitting source in a high-energy collision. Bose-Einstein enhancement of identical-pion pairs at low relative momentum was first observed in proton-antiproton collisions by Goldhaber, Goldhaber, Lee, and Pais 50 years ago [18]. A. HBT correlation As is well known, Hanbury Brown-Twiss (HBT) analysis has been successfully applied in e + e − [19], hadron-hadron and lepton-hadron [20], and heavy-ion [21] collisions. The HBT correlation, also called the Bose-Einstein correlation, is the main method for measuring the emission source size of final-state particles in high energy collisions.
Two-pion Hanbury Brown-Twiss (HBT) interferometry is a powerful tool to study the space-time structure of particle-emitting sources produced in high energy collisions [22,23,24,25]. Most of the final-state particles produced in e + e − collisions are π mesons, so we choose π mesons (π + , π − or π 0 ) as the identical particles to study. The two-particle Bose-Einstein correlation function C₂(k₁, k₂) is defined as the ratio of the two-particle momentum distribution P(k₁, k₂) to the product of the single-particle momentum distributions P(k₁)P(k₂). For an expanding source, P(k) and P(k₁, k₂) can be expressed in terms of A(k, x), the amplitude for producing a pion with momentum k at point x, and ρ(x), the pion-source density; here k₁, k₂ and x′₁, x′₂ are the momenta and detection points of the two identical pions. If we introduce an effective density function ρ_eff, then P(k₁, k₂) can be expressed through ρ̃_eff(q), the Fourier transform of ρ_eff, where q = k₁ − k₂ is the four-dimensional momentum difference. The correlation function C₂(k₁, k₂) can then be written in terms of |ρ̃_eff(q)|². If the effective density function of the source is parameterized in Gaussian form, and the distribution of the source is assumed to be isotropic, i.e. spherically symmetric, the correlation function can be simplified to the standard Gaussian form [26,27] C₂(Q, q₀) = 1 + λ exp(−R²Q² − τ²q₀²). (9) Here λ is the incoherence parameter, in the range 0 ≤ λ ≤ 1, denoting the correlation strength; R is the source radius and τ the source lifetime. In this paper, we study the average radius R and decay lifetime of the Higgs boson through the correlation function of the pion source produced in Higgs decay in e + e − collisions, which is taken as spherically symmetric. In this way, information about the average size and the decay lifetime of the emitting source of the final-state π mesons can be obtained. The two-particle correlation function in the statistical method is defined as the ratio C₂(Q, q₀) = A(Q, q₀)/B(Q, q₀), (10) where the two particles with momentum difference Q and energy difference q₀ are from the same event, A(Q, q₀) = ∆N/∆q is the four-dimensional distribution function of the identical-particle pairs with HBT correlations, and B(Q, q₀) = ∆N/∆q is the four-dimensional distribution function of the identical-particle pairs without HBT correlations. The momentum differences of the π meson pairs are then calculated. The correlation among identical particles with large momentum difference is quite weak, so at large momentum difference the distribution with the HBT correlation should be the same as the distribution without the HBT correlation. B. Particle trace Of particular importance in the HBT correlation method is identifying identical particles from the same emitting source. Various methods have been proposed, among which the Higgs-tag, which originated from the b-tag method [28], is a good choice. We use the Monte Carlo generator PYTHIA 8.2 to simulate e + e − collision events both with and without the Bose-Einstein effect, and then select suitable events for study. We trace the Higgs decay process and all daughters of the Higgs boson, and select all pions among the daughters as identical particles from the same emitting source. Identical π mesons selected from the final-state particles are then grouped with each other to form pion pairs. C. Monte Carlo simulation We produce e + e − collision events at √s = 250 GeV using the Monte Carlo generator PYTHIA 8.2.
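To make the construction of C₂ concrete, here is a minimal sketch that builds the correlation function from same-event pairs and a reference sample, then fits the Gaussian form. Note one named substitution: the reference here is built by event mixing, a common alternative to the paper's approach of generating events without the Bose-Einstein effect; the binning matches the 50-bin, Q = 0-2.5 GeV choice quoted below, while the starting values and data source are illustrative assumptions (and the lifetime term is dropped, fitting in Q only).

```python
# Sketch: build C2(Q) from same-event pairs (A) and mixed-event pairs (B),
# then fit C2 = 1 + lam*exp(-(R*Q)^2). Illustrative only.
import numpy as np
from itertools import combinations
from scipy.optimize import curve_fit

HBARC = 0.1973  # GeV*fm, converts the fitted R from GeV^-1 to fm

def qinv(p1, p2):
    """Invariant momentum difference Q = sqrt(-(p1-p2)^2) for (E,px,py,pz)."""
    d = p1 - p2
    return np.sqrt(max(d[1]**2 + d[2]**2 + d[3]**2 - d[0]**2, 0.0))

def build_c2(events, bins):
    """events: list of arrays of pion four-momenta, one array per event."""
    same = [qinv(a, b) for ev in events for a, b in combinations(ev, 2)]
    mixed = [qinv(a, b) for e1, e2 in zip(events[:-1], events[1:])
             for a in e1 for b in e2]            # simple adjacent-event mixing
    A, edges = np.histogram(same, bins=bins, density=True)
    B, _ = np.histogram(mixed, bins=bins, density=True)
    Q = 0.5 * (edges[:-1] + edges[1:])
    mask = B > 0
    return Q[mask], A[mask] / B[mask]

def gauss_c2(Q, lam, R):
    return 1.0 + lam * np.exp(-(R * Q)**2)       # R in GeV^-1 here

# Usage (events parsed from e.g. a PYTHIA run into four-momenta):
# Q, C2 = build_c2(events, bins=np.linspace(0.0, 2.5, 51))
# (lam, R_inv), _ = curve_fit(gauss_c2, Q, C2, p0=(0.5, 5.0))
# print(f"lambda = {lam:.2f}, R = {R_inv * HBARC:.2f} fm")
```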
All production and decay channels of the SM Higgs boson are taken into account, with the branching ratio parameters set as described in the SM. The Higgs width is set to 0.00403 GeV, based on LHC data [29]. The other parameters are fixed at the default values given in PYTHIA. Specifically, for hadron production we set PartonLevel:MPI = off, PartonLevel:ISR = off and PartonLevel:FSR = off. A. Higgs jet properties Based on the MC simulation sample of one million e + e − collision events at √s = 250 GeV generated using PYTHIA 8.219, we analyze the various SM Higgs boson production channels. We select different decay processes from the total of 1,000,000 e + e − collision events and then analyze the properties of the jets in each Higgs boson decay process. All observable final-state particles, i.e. excluding neutrinos and other particles with no strong or electromagnetic interactions, are considered for analysis from the Monte Carlo events. We use cluster algorithms to determine the properties of jets from the Higgs boson in the c.m. frame of the e + e − collision events. The use of cluster algorithms for e + e − applications started in the late 1970s, and a number of different approaches have been proposed; we choose the Lund distance measure [30]. If the distance between the two nearest clusters is smaller than some cut-off value, the two clusters are joined into one. In this approach, each single particle belongs to exactly one cluster. Note also that the resulting jet picture explicitly depends on the cut-off value used. Based on the distance calculation, we set the jet cut-off parameters in rapidity and transverse momentum to 0.01 and 10.0, respectively. The leading jet is the one with the highest transverse momentum in each event, and the second leading jet is the one with the second-highest transverse momentum in each event. The dijet mass is M_jj = √(m₁² + m₂² + 2(E₁E₂ − p₁·p₂)), where m₁ and m₂ are the masses of the leading and second leading jets, E₁ and p₁ are the leading jet's energy and momentum, and E₂ and p₂ are the second leading jet's energy and momentum. We calculate the dijet mass for the different Higgs decay channels. Figure 1 shows the distribution of the dijet invariant mass for different Higgs boson decay processes, including H → bb, H → cc, H → τ + τ − , H → W + W − , H → gg and H → ZZ. We extract the signal data published by the D0 Collaboration [31]. There is an obvious peak around 125 GeV, which is the SM Higgs boson mass we chose. The distribution of the dijet mass from our Monte Carlo e + e − event sample has a peak at 125 GeV, consistent with the D0 experiment. Thus, we are confident that our further study of the Higgs boson is based on a viable PYTHIA simulation sample. For each Higgs boson decay channel we generate 2,000,000 e + e − collision events and divide the two million events of each decay process equally into ten groups. We select and construct Higgs-jets using the Higgs-tag method from e + e − collision events containing a Higgs boson decay, and collect the identical pions (π + , π − and π 0 ) inside the Higgs-jets. Then, according to Eq. (10), we calculate the correlation functions of identical pions (π + , π − and π 0 ) from the Higgs-jets of the different decay channels, together with their standard deviations, as shown in Fig. 2. Here we choose the momentum and energy interval Q = 0-2.5 GeV, q₀ ≤ 15 MeV, and divide this region equally into 50 bins, since the correlation among identical particles with large momentum difference is quite weak.
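A minimal sketch of the dijet-mass formula just given, with each jet represented as (m, E, px, py, pz); the jet four-momenta below are illustrative numbers chosen to reproduce a ~125 GeV pair, not values from the simulation.

```python
# Dijet invariant mass from the two leading jets:
# M_jj^2 = m1^2 + m2^2 + 2*(E1*E2 - p1.p2). Illustrative values only.
import math

def dijet_mass(jet1, jet2):
    """Each jet is a tuple (m, E, px, py, pz)."""
    m1, e1, *p1 = jet1
    m2, e2, *p2 = jet2
    pdot = sum(a * b for a, b in zip(p1, p2))
    return math.sqrt(m1**2 + m2**2 + 2.0 * (e1 * e2 - pdot))

# Hypothetical back-to-back jets from H -> bb at rest (numbers illustrative):
j1 = (10.0, 62.5, 0.0, 0.0,  61.69)
j2 = (10.0, 62.5, 0.0, 0.0, -61.69)
print(f"M_jj = {dijet_mass(j1, j2):.1f} GeV")   # ~125 GeV
```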
The average radius R_Hj and decay lifetime τ_Hj of the Higgs boson (emitting source) can be obtained by fitting the correlation functions; the radius and decay lifetime values are listed in Table 1. The fit curves from Eq. (9) are shown in Figure 2. Figure 2 shows that the distributions of the pion correlation functions from the different decay channels inside Higgs-jets of e + e − collision events are similar. The average radius R_Hj and decay lifetime τ_Hj of the emitting source of pions from the different Higgs boson decay modes can be obtained by fitting the correlation-function distributions in Fig. 2 with formula (9). Here the radius value R_Hj represents the size of the emission source of the jets decaying from the Higgs boson, which includes both the size of the Higgs boson and the scale of the parton cascade following the Higgs decay before hadronization. Hence the radius of the Higgs boson satisfies R_H ≤ R_Hj. Similarly, the decay lifetime τ_Hj also contains, in addition to the Higgs decay itself, the time of the secondary decays, so the real decay lifetime of the Higgs satisfies τ_H ≤ τ_Hj. For convenience of comparison, the radius values R_Hj and decay lifetimes τ_Hj measured in the different decay channels, together with their average values, are plotted in Figure 3. Figure 3 shows that the space-time structure characteristics, i.e. the decay radius values R_Hj and the lifetime values τ_Hj of Higgs-jets measured in different decay channels, are the same within the error range. The average radius of the Higgs-jet source is 1.03 ± 0.05 fm and the average decay lifetime is (1.29 ± 0.15) × 10⁻⁷ fs. This falls within the range of the CMS experimental results [32]. IV. SUMMARY AND DISCUSSION We use the Monte Carlo generator PYTHIA 8.219 to produce e + e − collision events at √s = 250 GeV for the different Higgs boson decay channels, including H → all, H → bb, H → cc and H → gg. The Higgs-jets are selected using the H-tag method. The space-time structure of the decay and evolution of the Higgs boson is studied in detail with the HBT correlation method. First, we calculated the invariant mass of dijets from the Higgs boson in e + e − collisions, and the results are consistent with the experimental data. Then the average radii and decay lifetimes of the Higgs-jet source were measured by the HBT correlation method. We found that the average radii and decay lifetimes of the Higgs-jet source are the same within the error range for the different decay channels, including H → all, H → bb, H → cc and H → gg. The mean radius and lifetime of the Higgs-jet source over the different decay channels in e + e − collisions at √s = 250 GeV are R_Hj = 1.03 ± 0.05 fm and τ_Hj = (1.29 ± 0.15) × 10⁻⁷ fs. In addition, the fact that the source radii and decay lifetimes of Higgs-jets measured in different decay channels agree within errors suggests that the average source radii of Higgs-jets reflect some intrinsic properties of the Higgs boson. It is worth noting that the radius R_Hj and decay lifetime τ_Hj of the Higgs-jet source obtained here are not equal to the radius R_H and decay lifetime τ_H of the Higgs boson itself. Because forming a Higgs-jet from a Higgs decay involves the Higgs decay and subsequent hadronic secondary decays, the radius R_Hj and decay lifetime τ_Hj of the Higgs-jet source are greater than the radius R_H and decay lifetime τ_H of the Higgs boson.
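For reference, the conversion of fit parameters from natural units (GeV⁻¹) to fm and fs goes through ħc and ħ. A minimal sketch, with input values chosen to be consistent with the quoted averages (they are illustrative, not the paper's raw fit outputs):

```python
# Convert HBT fit parameters from natural units (GeV^-1) to fm and fs.
# Input values are illustrative, chosen to match the quoted averages.
HBARC = 0.19733            # hbar*c in GeV*fm
HBAR  = 6.5821e-25         # hbar in GeV*s

R_fit_gev   = 5.22         # fitted radius in GeV^-1 (illustrative)
tau_fit_gev = 196.0        # fitted lifetime in GeV^-1 (illustrative)

R_fm   = R_fit_gev * HBARC               # -> ~1.03 fm
tau_fs = tau_fit_gev * HBAR * 1e15       # s -> fs, -> ~1.29e-7 fs
print(f"R = {R_fm:.2f} fm, tau = {tau_fs:.2e} fs")
```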
Accordingly, we present upper bounds on the Higgs boson radius, R_H ≤ 1.03 ± 0.05 fm, and decay lifetime, τ_H ≤ (1.29 ± 0.15) × 10⁻⁷ fs, obtained from the HBT correlation of its decay final-state pions inside Higgs-jets in e + e − collision events. This result is consistent with CMS data [32]. We also expect that these results will be tested in CEPC experiments in the future. V. ACKNOWLEDGMENT We acknowledge financial support from NSFC (11475149).
CAN YALE ENDOWMENT MODEL BE APPLIED FOR ISLAMIC PENSION FUND? 1. Affiliation: Durham University; correspondence email: yuni.karina@gmail.com This paper examines the Yale Endowment model and proposes a modified investment model that achieves the investment objectives of mainstream investors while complying with Sharia principles. The proposed model utilizes an Islamic CAPM to formulate the optimal asset allocation for an Islamic pension fund's portfolio. It offers a strong investment strategy that could be adopted by governments to manage Islamic pension funds and raise society's awareness of the great potential of Islamic pension funds in the future. Promoting efficient and productive investment of pension-fund assets acts as a catalyst for achieving the Sustainable Development Goals (SDGs) by providing important sources of long-term finance for development, supporting financial inclusion and ensuring that poverty among the elderly is alleviated by strong growth and resilience of retirement income through pension systems with broad coverage. In the field of finance, Ebrahim et al. (2016) assert that these stagnancies are caused by the strong emphasis on the purely legal perspective (the so-called 'illah) of legal judgment rather than on the substance of Islamic jurisprudence (Maqasid al-Sharī'ah) driven by logical reasoning (wisdom). Islamic rulings should go beyond the explicit meaning of the scriptural texts by exploring the universal intent of the divine message. Taking the economic rationale into consideration, we can strongly argue that an Islamic pension fund needs to identify the most suitable investment model for better payoffs, as a means to achieve the objective of Islamic law, which lies in safeguarding diin (religion), nafs (self), aql (mind), nasb (offspring) and maal (wealth) (Al Ghazali, 1937). As shown in Kenya, pension funds can contribute an estimated 68% of the total income of retirees (Allena and Gorton, 1996) and control wealth estimated at Kshs 397 billion, the equivalent of 30% of the country's GDP (Coase, 2000). Given the importance of investment in the pension fund, this study examines the Yale Endowment model as a leading fund-management approach that has generated impressive results over the last 30 years. This study aims to examine whether or not the Yale Endowment model is in line with Islamic finance principles and to construct an Islamic investment model that can produce performance as impressive as the Yale Endowment's, for the greater mashlahah (goodness) of the Muslim ummah. The study is conducted by integrating the Islamic CAPM developed by Ebrahim et al. (1999) with the mathematical analysis of The Stochastic Programming Approach to Asset, Liability, and Wealth Management (Ton, 2001). This allows us to propose an optimal investment model for long-term investors (i.e. retirement planners) who wish to achieve certain investment objectives under Islamic principles and at the same time meet future obligations under an Asset Liability Management framework. Promoting efficient and productive investment of pension-fund assets not only provides important sources of long-term finance for development but also supports financial inclusion and ensures that poverty among the elderly is alleviated by strong growth and resilience of retirement income through pension systems with broad coverage.
To the researcher's knowledge, there has been no attempt at financial modeling of an Islamic pension fund portfolio, although the foreseeable future is promising. This work attempts to fill this gap. The above introductory section provides a broader perspective of the study as well as the overall aims and motivation behind this paper. Section 2 sets the scene by pointing out the economic malaise faced by the Muslim world and how it connects with financial under-development, which may become even worse with an aging population if not seriously addressed. Section 3 introduces the methodology used in this paper, develops the ideal asset allocation, and provides numerical illustrations through a CAPM simulation. Section 4 concludes the study by discussing the limitations that constrain the adoption of the proposed model in practice. Finally, it ends with recommendations for further development and research in this field. The Role of Islamic Pension Fund Pension funds are gaining popularity as the world's populations age rapidly. In Indonesia, the most populous Muslim country in the world, the population is predicted to expand to 322 million by 2050, with the elderly population tripling to 62 million (United Nations, 2016). In Europe, the share of people aged over 64 is forecast to almost double (from 20% to 40%) during the period 1990-2030 (Bos, 1994; Rosevaere et al., 1996). If not managed properly, the growing elderly population will significantly add to the number of people just above the line of poverty and social exclusion. As seen in Table 1, by 2030 two productive citizens will be financially responsible for each pensioner, compared with four in 1990. Several factors have contributed to this aging phenomenon, which has spread across the world: better living conditions, declining fertility rates, etc. (Bos, 1994; Rosevaere et al., 1996). Viewed from many different angles of life, pension plans have become an essential part of economic and social life, and their impact is set to grow in the future. Pension funds have a significant impact on macroeconomic as well as microeconomic factors such as the size of national wealth, GDP, consumption spending, the level of interest rates, stock yields, etc. The level of pension saving affects the rate of capital formation and economic growth as well as the levels of production and employment. In general, pension systems contribute significantly to the development of national financial systems (King and Levine, 1993; Bekaert, Harvey, & Lundblad, 2001). Pension fund activities may also foster capital and financial market development through their role in substituting for and complementing other financial institutions, in particular commercial and investment banks. Acting as intermediary competitors for either household savings or firm financing (Impavido, Musalem, and Tressel, 2002), pension funds nurture competition and improve the efficiency of the loan and primary securities markets. This results in a lower spread between lending rates and deposit rates, and a lower cost of access to capital markets. In addition, Davis (1995) argues that pension funds can complement the role of banks through either long-term financing of debt securities or long-term investment in bank deposits.
Other potential benefits that can be obtained from the growth of pension funds are the inducement of financial innovation, the modernization of securities-market infrastructure, and improvements in financial regulation, corporate governance, financial market efficiency and transparency (Davis, 1995). These impacts are expected to stimulate higher economic growth over long periods of time. Pension systems are crucial for most firms, as they represent a significant part of employee compensation and a major current expense. According to data obtained from the U.S. Bureau of Labor Statistics in 2004, businesses paid employee benefits of $7.40 for each hour their employees worked, with private pensions making up 14% of these benefit costs (Popkin, 2005). Many traditional companies with Defined Benefit (DB) pension plans have faced bankruptcy due to their unfilled pension obligations. Munnell et al. (2006) provide a number of possible explanations for why employers abandon their DB plans. They argue that companies cut DB pensions to reduce workers' total compensation, driven by global competition, growing health costs, or the several inherent risks (market risk, longevity risk, and regulatory risk) in DB pension plans. A large number of DB pension plans have therefore been replaced by Defined Contribution (DC) plans that carry no deferred pension obligations; nevertheless, pension contributions continue to be an essential part of employee benefits and a significant expense. On the other side, pensions, with their sheer size, also hold an important role in financial intermediation in a country (Franzoni and Marin, 2006; Jin et al. 2006). Take the example of US state pensions, whose liabilities in June 2009 amounted to $2.3 trillion, with annual pension liability costs of $23 billion (about 1%). From another perspective, according to the 2010 Fund Management report of TheCityUK, in 2009 global pension assets under management reached $28 trillion, which is higher than all mutual fund assets under management ($23 trillion out of $105 trillion in total funds under management globally). In line with the 2010 Global Pension Asset Study by Towers Watson, among developed countries such as the UK, the USA and Japan, the percentage of public pension assets over total funds under management varied from 11% to 30% and 70%, respectively. Clearly, national pensions are a very large part of the financial intermediation industry. The Yale University Endowment Model The Yale Endowment model integrates asset allocation and active management using mean-variance analysis, in which the portfolio variance is formulated as σ_p² = Σ_i Σ_j w_i w_j σ_i σ_j ρ_ij, (1) where σ² is the variance, σ the standard deviation, ρ the correlation of two assets and w_i the portfolio weights. Yale University has also moved its endowment fund away from fixed income toward equity instruments because of the former's vulnerability to inflation. Hence, the bulk of its investment (more than 90%) is targeted at generating equity-like returns by purchasing domestic and international equities, real estate, natural resources, absolute return instruments, leveraged buyouts, and venture capital. One should note, though, that in its current investment strategy Yale Endowment lowers its dependence on the common factor of US corporate profitability by reducing its heavy reliance on marketable domestic securities. As a consequence, the portfolio is exposed to a range of less efficiently priced investment alternatives due to the asset allocation changes, which creates a rich set of active management opportunities.
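To make equation (1) concrete, the following is a minimal two-asset sketch of the portfolio variance and the resulting mean-variance trade-off; the expected returns, volatilities and correlation are illustrative assumptions, not Yale's figures.

```python
# Two-asset mean-variance sketch for equation (1):
# sigma_p^2 = sum_i sum_j w_i w_j sigma_i sigma_j rho_ij. Numbers illustrative.
import numpy as np

mu    = np.array([0.08, 0.04])        # expected returns (equity, bonds)
sigma = np.array([0.18, 0.06])        # standard deviations
rho   = np.array([[1.0, 0.2],
                  [0.2, 1.0]])        # correlation matrix
cov = np.outer(sigma, sigma) * rho    # covariance matrix

def portfolio(w):
    w = np.asarray(w)
    ret = w @ mu
    var = w @ cov @ w                 # equation (1)
    return ret, np.sqrt(var)

# Sweep the equity weight to trace the mean-variance trade-off.
for w_eq in (0.0, 0.25, 0.5, 0.75, 1.0):
    r, s = portfolio([w_eq, 1.0 - w_eq])
    print(f"w_equity={w_eq:.2f}  E[r]={r:.3f}  sigma_p={s:.3f}")
```

Adding lowly correlated asset classes lowers σ_p for a given expected return, which is the diversification logic behind Yale's allocation described next.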
In a nutshell, Yale's strong investment results stem from the disciplined implementation of equity-oriented asset allocation policies, combined with the successful exploitation of attractive asset management opportunities. In the field of portfolio management, the importance of asset allocation policy relative to active portfolio management has been extensively discussed. Work on this subject was pioneered by Brinson, Hood, & Beebower (1995) and followed by many other studies (Ibbotson and Kaplan, 2000; Bekaert, Harvey, & Lundblad, 2001; Vardharaj and Fabozzi, 2007). They note that policy returns equal more than 90% of the return of most mutual and pension funds. Nevertheless, these studies did not consider that a significant part of the two returns (fund and policy return) is driven mostly by movements of the market. Xiong, Ibbotson, Idzorek, & Chen (2010) stated that the total return (net of all expenses and fees) of a portfolio has three components: the market return, the asset allocation policy return in excess of the market, and the return from active portfolio management (in terms of fees, security selection, and market timing). Looking more closely, Xiong, Ibbotson, Idzorek, & Chen (2010) stressed the importance of market movements, which turn out to be a crucial factor influencing around 80% of the fund return. Meanwhile, after stripping out the dynamic movement of the market, asset allocation policy and active asset management account for almost the same weight in the overall return. This leads to the study of performance attribution, which is conceptually straightforward but less simple to implement. Theoretically, the market portfolio should be the alternative portfolio that "would be held by an investor who is devoid of investment judgment" (Hensel et al., 1991). Previous literature (Brinson, Singer, & Beebower, 1991; Ibbotson, 2010) highlighted that policy allocation accounts for the bulk of funds' performance, leaving little room for active management. Nevertheless, their results show that explicitly considering market movements can change the picture remarkably. Although the role of active management in global asset allocation is not significant, its role is much greater in explaining returns to individual asset classes, whether traditional (equities, bonds, and cash) or alternative (real estate, private equity, hedge funds, and other strategic asset allocations). On the other hand, several studies found that active management accounts for a substantial portion of performance, more than policy allocation. The research by Xiong, Ibbotson, Idzorek, & Chen (2010), who studied a question similar to Brinson's, produces a different result for two main reasons. First, the previous work did not consider the contribution of market movements to funds' returns, whereas the recent work by Ibbotson (2010) shows that market movements can change the picture notably. Second, the earlier research did not examine the detailed performance of individual asset classes, where the potential benefits offered by active management can be distinctive. Over the past couple of decades, the pension fund industry has gradually been diversifying its claims on the real economy, away from the real claims represented by publicly traded equities toward these alternative claims.
As they have gradually taken on the idea of diversification, one investment fund has frequently been held up by advisors as an example of the potential benefits of diversifying investment portfolios away from traditional asset classes: the Yale University endowment fund. Over the ten years to June 2009, the Yale endowment fund generated a net return of 11.8% p.a., while over the twenty years to June 2009 it generated a net return of 13.4% p.a. To put this into perspective, a passive investment in a 'traditional' sterling equity/bond portfolio would have produced a gross annual return of 1.6% and 6.7% over the ten and twenty years to June 2009, respectively.

How Could Yale University Experience an Outstanding Performance?

Yale University has adopted a diversified approach to investing its endowment fund for more than twenty years (asset allocation source: Yale Endowment Report, 2016). The overall goal of these diversified assets is to reach "the highest expected return for a given level of risk". Yale's asset allocation clearly differs from any mainstream portfolio. It allocates 7.5% to the most developed market in the world, which is at the same time its own domestic equity market (the US). Meanwhile, 9.8% of the total allocation goes to foreign equity, of which a major chunk is allocated to emerging capital markets such as Indonesia. Its allocation to fixed income, which accounts for only 4.0% of the total, marks another significant difference compared with the allocation of other educational institutions (shown by the brown bars in Figure 1). Overall, the top three asset classes, which may account for nearly 100% of traditional institutional allocations, are responsible for only about one-fifth of Yale's. Yale's endowment thus pursues asset allocation strategies dramatically different from those of other educational institutions. Comparing the average asset allocation of a broad universe of educational institutions with the Yale endowment reveals striking patterns. Instead of relying on traditional asset classes, the Yale fund prefers to invest more in alternative ones. Almost 25% of its portfolio is invested in absolute return instruments, in other words hedge funds. This investment preference was first applied in 1990 and continues to this day, because "absolute return investments have historically provided returns largely independent of overall market moves." Consistent with its preference for hedge funds, nearly a quarter of Yale's portfolio is allocated to venture capital and private equity through leveraged buyouts. Its investment in private equity therefore dwarfs its investment in publicly traded equity and is evidence of the Yale investment committee's belief that private equity fund managers can "exploit market inefficiencies." Finally, the fund has 32% of its assets dedicated to real assets. This portion of the fund comprises a variety of investments, including real estate, oil and gas, and timberland. The main attraction of these investments for the Yale fund is their "inflation hedging properties." However, Yale also argues that the illiquid nature of these investments means they can earn an additional return, effectively an illiquidity premium. Conceivably illiquid assets such as real assets, private equity, and absolute return instruments account for more than three-quarters of the total Yale endowment portfolio.
Yale attempts to capture the illiquidity premium by constructing complicated investments that accommodate the nature of illiquid assets, which are difficult for other investors to analyze. As shown in Figure 1, for more than two decades illiquid assets have generated a premium well above that of liquid assets such as public equity. For instance, returns on private equity investments have reached as much as 30.4% p.a. Yale considers itself a long-term investor whose funds are expected to carry a certain amount of illiquidity, which in turn helps it add value by capturing the illiquidity premium when it appears.

The Evolution of Asset Liability Management

The literature discussing ALM models is enormous and goes back several decades, starting with the seminal work of Markowitz (1952) on mean-variance analysis, which was later elaborated by more recent extensions to include liabilities (Sharpe and Tint, 1990). Throughout the years, the idea pioneered by Markowitz has been extensively discussed in many studies across the world (see Figure 2). Harry Markowitz, the Nobel Memorial Prize winner in 1990, is widely regarded as the founder of Modern Portfolio Theory, which promotes the well-known approach to portfolio allocation: "don't put all your eggs in one basket". Markowitz's portfolio theory showed how investors are able to pick an optimal portfolio of assets, minimizing risk for a given expected return, or maximizing expected return for a given level of risk. Today, this theory underpins multi-asset investment strategies. In the early 1950s, Markowitz introduced modern portfolio and diversification theory, which can result in a lower level of risk. Nonetheless, it took more than 25 years to be applied effectively; it was first applied by Yale and Harvard University for their endowment funds. Asset allocation is a key factor of a successful investment strategy. It captures the consideration of how to diversify funds across several distinct asset classes and how much to hold in each. To construct a portfolio that meets a specific objective, it is critical to select a combination of assets offering the best chance of fulfilling that objective, subject to the investor's preferences, including investment horizon, risk appetite, etc. The combination of those assets helps to determine both the range of returns and the variability of returns for the portfolio. Built on the philosophical principles of equity orientation and diversification, asset allocation decisions provide the framework that supports the creation of effective investment portfolios. It was not until the 1980s that the use of formal mathematical models to enhance financial decision making rose to widespread prominence in practice (Zenios, 1993). Globalization and innovation in the financial markets are the driving forces behind this development, which continues unabated to this date, aided by advances in computing technology and the availability of software. Four alternative modeling approaches have emerged as suitable frameworks for representing ALM problems (Ziemba and Mulvey, 1998): mean-variance models and downside risk, discrete-time multi-period models, continuous-time models, and stochastic programming.
Investment Policy and Islamic Pension Fund

"Investment in children's education was regarded as highly important, as there was an expectation across all age groups that children would, if needed, support their parents during retirement. Investment in property emerged as the preferred and traditional way to provide income in the future and was seen as a concrete, material investment that could be passed down the generations." (Adele Atkinson et al., 2013) We could further note that, "Pensions, by contrast, were rarely considered or discussed, and knowledge of how they operate was low. Retirement was not at the forefront of most participants' minds; many were living month by month in terms of their spending and had not given much thought to putting money aside for their future, or even the age at which they might stop work. Few people were aware that pension contributions are invested, or that they could receive possible employer contributions. Most said they would opt out were they automatically enrolled in a pension scheme, primarily because their current financial circumstances would make contributions unaffordable."

Figure 2. Evolution of "ALM"

Based on these results, not many respondents spontaneously expressed concern about where the funds would be allocated, even though they said it would be a consideration if one were promoted to them. One reason for their unawareness was that they dealt only with the pension provider and had no involvement in what the provider does with the funds. Nowadays, however, Sharia compliance has become a popular concern and is changing people's minds about where and how the money should be invested. A Sharia-compliant pension fund is a pension fund managed under the principles of Sharia law on mu'amalah (commercial as well as financial activities). One important principle is ensuring interest-free transactions, such that all parties bear the risk. To assure these Islamic rulings, Sharia scholars and the institutions regulating pension funds are expected to work together in harmony to build and continuously examine the investment policy. The policy ought to set clear investment goals to be achieved, in accordance with the objectives as well as the liability characteristics of the superannuation fund, while at the same time matching the risk tolerance of the plan sponsor, members, and beneficiaries. Considering appropriate diversification and risk management, liquidity needs, the maturity of obligations, and other legal constraints on portfolio allocation, we need to figure out the best strategy for fulfilling these goals while meeting the standard of prudent superannuation. In addition, we need to establish an investment policy which at a minimum develops the tactical strategy by combining long-term assets in primary classifications, overall performance goals, and evaluation techniques. Furthermore, if required, one could modify allocations and performance objectives subject to the dynamic changes of market conditions and liabilities. The investment policy should broadly consider several related factors including trade execution, tactical asset allocation, and security selection. A prudent risk management process which measures and controls portfolio risk appropriately should be established to manage the balance sheet (assets and liabilities) coherently.
The investment policy for superannuation programs has to make sure that a proper investment choice is presented to members, who must be given access to the information necessary for investment decision-making. In particular, it has to categorize the investment choices in accordance with the risk to be borne by members (Ashcroft & Stewart, 2010).

Is the Yale Approach Suitable for Islamic Pension Fund?

If we want to replicate the asset allocation of the Yale endowment, we have to examine each asset class as follows:

a. Absolute Return. According to the London-based Institute of Islamic Banking and Insurance (2008), the annual return of the Islamic equity fund market has been able to reach 12-15%. With this return, the global Islamic equity fund market is estimated to have assets under management of $5 billion. By comparison, $28.9 billion of funds have been allocated by Middle-East-based institutional investors to hedge funds, and these flows were expected to grow exponentially, by 14% ($140 billion out of $1 trillion of institutional investor flows) into hedge funds by 2010. Although the future of this market looks bright, the Sharia permissibility of this instrument is still controversial. It therefore does not rank high on the priority list of asset classes for an Islamic pension fund portfolio.

b. Domestic Equity. Equity markets in most Muslim countries are still underdeveloped. In developed countries such as the USA, by contrast, the equity market has shown outstanding performance over the last 10 years, as seen in Figure 3 below. This performance has prompted us to include this instrument in the portfolio of Islamic pension funds.

c. Sukuk. With sukuk issuances in the pipeline, new sukuk will most likely be introduced to refinance maturing sukuk. According to the Chairman of IIFM (2016), sukuk hold a significant role in fulfilling the varied financing needs of issuers, including project and aircraft financing, monetary and budgetary management, and banks' capital base enhancement. He further explains that as the sukuk market evolves, there will be a growing need to solve the various challenges that come with the growth of any financial instrument. These should be tackled through greater transparency and harmonization in the structure and documentation of any Sharia-compliant product with respect to Shari'ah, legal, and market requirements. He further argues that "The growing confidence in Sukuk market can be seen in some of the reliable Sukuk issuing hubs, proof of which is longer dated Sukuk ranging from 30 years to perpetual issuances coming to the market". Although the future of the sukuk market, the second largest in Islamic finance, looks clearly bright, there is still a need to construct an efficient structural and financial model of sukuk in Muslim countries. For this reason, the life-cycle sukuk will be designed and further developed by the prominent Professor in Islamic Finance, Professor Muhammad Shahid Ebrahim.

d. Foreign Equity. As domestic equity is not a prospective source of investment for Muslim countries, they need to invest more in highly developed foreign equity markets, such as the MSCI EAFE market index. Especially with regard to the relative performance of US equities in the bull market, this reinforced the apparent inevitability of achieving superior results through investing in US stocks.
Throughout much of the period, diversifying the portfolio by investing in marketable securities other than US stocks created a drag on returns.

e. Leveraged Buyouts. A leveraged buyout can be defined as a corporate acquisition financed through the issuance of leveraged bonds or notes payable to cover the acquisition cost. Under Islamic principles, the most appropriate instrument to replicate this is Islamic private equity. Private equity investments tend to overcome the problems associated with the divergence between the aspirations of shareholders and management evident in many of today's publicly traded companies. The likelihood of failure is significantly associated with higher leverage for all firms, but clearly has to be analysed in relation to interest coverage, the capacity to service debt. "Adequate leverage places pressures on managers to perform in order to service debt (Jensen, 1986) and can mitigate the problem of overinvestment in firms with limited growth opportunities (Dang, 2011). However, very high leverage may create debt servicing problems, particularly if cash flow projections are not met, predicted asset sales are not completed, or monetary conditions change. Higher leverage, therefore, has been associated with a high probability of failure. Favourable credit conditions are a major driver of leverage in private equity deals (Axelson et al., 2012) and, in the initial stages, optimal leverage may be high (Kortweg, 2010). The maximum amount of debt that can be sold against the firm's assets is greater in a boom due to lower default risks (Hackbarth et al., 2006). This implies that leverage increases insolvency risk for firms unable to adjust capital structure prior to/during the downturn or in the face of changing monetary conditions". The above explanation reflects the rationale behind the injunction against riba, which Ebrahim et al. (2014) explain as follows: the ribawi contract is inefficient for three main reasons, namely expropriation of wealth, financial fragility, and financial exclusion. Therefore, there is a need to develop alternative instruments, including hybrid securities and life-cycle sukuk, which will be further discussed by the well-respected Professor in Islamic Finance, Prof. Muhammad Shahid Ebrahim.

f. Natural Resources. Several research studies show that the world economy depends on the Muslim world's natural resource exports, in particular those of the Persian Gulf, which holds two-thirds of the planet's discovered crude oil reserves (Abdulai & Siwar, 2011; Jason, 2013). Equity investments in natural resources (oil and gas, timberland, metals and mining, agriculture) share common risk and return characteristics, such as protection against unanticipated inflation, high and visible current cash flow, and opportunities to exploit inefficiencies. Using project finance, British Petroleum was able to develop an oil and gas platform in the North Sea by raising $945 million, while Freeport Minerals was able to develop the Ertsberg copper mine in Indonesia by raising $120 million (Jason, 2013). Project finance is popular in the natural resources sector because projects are easily structured as entities legally separate from their sponsors.

g. Real Estate. Real estate holdings play a special role in institutional portfolios, providing protection against unanticipated increases in inflation. In the perspective of Islamic finance, real estate is widely considered an investable as well as tangible asset class on which to base financial structures.
As depicted in Figure 4, real estate takes the biggest chunk of most pension fund portfolios: 87% of all public and 73% of all private sector pension funds currently have an investment in this asset class (Preqin Real Estate Report, 2016). Of the 1,005 private sector pension funds and 772 public pension funds observed to be investing in real estate, these represent 19% and 15%, respectively, of all institutional investors, making them more active in this asset class than any other investor type (Figure 5). In addition, real estate investments span the continuum from pure debt to pure equity, with assets combining debt and equity attributes providing the diversifying characteristics necessary to justify identifying real estate as a distinct asset class (Swensen, 2009). Real Estate Investment Trusts (REITs) are listed on the stock market, which not only increases their liquidity and lowers investment costs but also reduces agency conflicts. Ebrahim et al. (2011) explain the critical role played by PMs (Participating Mortgages) in reconciling the conflicting interests of financiers and investors, especially in the case of construction loans. Real estate investment should rely heavily on equity rather than debt, as debt is ribawi and not permitted under Islamic finance principles. Real estate cash flows may vary depending on the underlying characteristics of the property and market (Gitman and Joehnk, 1996).

h. Venture Capital. Studies prove that venture capital has the potential to generate high returns relative to other equity alternatives. They further explain that "The superior private equity returns come at the price of higher risk levels, as investors expose assets to greater financial leverage and more substantial operating uncertainty. From an Islamic point of view, venture capital is based on equity financing conforming with the principles of Islamic finance, as long as it invests in permissible sectors and in companies with a zero conventional debt capital structure. It therefore integrates economic viability with Islamic preference, making it a promising option for Islamic financial institutions". The empirical results of Yang et al. (2016) show that the participation of venture capital has a significant and positive impact on enterprise performance, leading to a great improvement in the performance of the investee firm. On the contrary, financial leverage and corporate performance show a significant negative correlation, which implies that debt financing may, to some extent, hinder the improvement of a firm's performance. Furthermore, the study also finds that the negative impact of financial leverage on corporate performance in companies backed by venture capital is more significant and greater. This indicates that the existence of venture capital increases the negative effect of financial leverage on corporate performance. This result was explained earlier by Myers & Turnbull (1977): when companies have more growth opportunities, they adopt a more conservative financial leverage policy, since debt capital lacks financial flexibility and is a fixed financial burden for the firm. Therefore, growth opportunities and debt ratios show an inverse relationship.

i. Cash. The Yale endowment holds only a limited amount of cash, enough liquidity to meet miscellaneous expenses, which is in line with the economic rationale of Islamic finance principles.
According to the above explanation, we need to carefully consider the appropriate asset classes to be included in the portfolio of an Islamic endowment fund. "Greater allocation to the risky asset leads to higher volatility of the assets of pension funds; however, it may also introduce a diversification effect at the total financial plus real asset value A_T + V_T level, at least when the stock index is negatively correlated. When the pension fund is sufficiently funded, pensioners have no interest in the pension fund taking risks, given the collateralised nature of their claims. On the other hand, the diversification effect disappears when the correlation is non-negative, which explains why L_0 then becomes a monotonically decreasing function of ω".

Constructing a Modified Yale Endowment Model for Islamic Pension Fund

Since the Yale Endowment model cannot be adopted as-is, we need to develop our own model which would achieve an impressive result while suiting Islamic finance principles. Private equity in the form of leveraged buyouts and venture capital is mostly driven by ribawi transactions, since the outstanding return of private equity results from higher risk, which encourages investors to expose assets to higher levels of leverage and uncertainty (Swensen, 2000). We therefore need to shift the heavy reliance on private equity, which is inevitably linked to ribawi instruments, toward more Islamic instruments such as real estate investment trusts (REITs). An Islamic pension fund should conduct a risk/return optimization in real terms, linked to its liability structure. The overall impact of investment decisions on firm and pension fund value is L_0 + E_0 + D_0. With a high level of debt to pensioners, the pension fund is likely to enjoy surpluses that could eventually help the sponsor repay its debt. We assume that a firm issuing a single class of debt promises a fixed payment D at time T. The flow of funds then obeys two basic balance equations (written here in the standard flow-balance form of stochastic-programming ALM models):

- For the j-th asset category, at time t under scenario s:

x_{j,t}^s = (1 + r_{j,t}^s) x_{j,t-1}^s + q_{j,t}^s - p_{j,t}^s   (2)

where x_{j,t}^s = investment in asset j, r_{j,t}^s = return on asset j, p_{j,t}^s = sales of asset j, q_{j,t}^s = purchases of asset j, and t_j = transaction costs for asset j, all for time t and scenario s.

- For the cash flows (cash being asset category l):

x_{l,t}^s = (1 + r_{l,t}^s) x_{l,t-1}^s + Σ_j (1 - t_j) p_{j,t}^s - Σ_j (1 + t_j) q_{j,t}^s + w_t^s - Σ_k y_{k,t}^s - Σ_l u_{l,t}^s   (3)

where u_{l,t}^s = goal payment l, y_{k,t}^s = liability decision k, and w_t^s = a cash inflow at time t under scenario s.

The investment objectives of a pension fund, including a Sharia-compliant one, should be linked to its liability structure. It needs to conduct a risk/return optimization in real terms. In other words, it should try to maximize the real return on the portfolio, adjusted for wage inflation, consistent with a level of risk judged acceptable in the short run. The acceptable risk level should be set in line with the liability structure of the pension fund. Intuitively, one expects that a firm without a pension fund will optimally take on more debt than an identical firm sponsoring a pension plan, since the latter has already issued a form of debt by promising a payment to retired employees. Martellini & Milhau (2011) found that a sponsor with a low level of outstanding debt will be preferred by pensioners over a heavily indebted sponsor. The pensioners consider that a more financially constrained firm will be less able to make additional contributions if and when needed.
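To make the flow-balance accounting concrete, the following sketch steps a single scenario of equations (2)-(3) forward in time. All numbers (returns, transaction costs, the liability payments) and the buy/sell policy are hypothetical placeholders, and the one-scenario loop stands in for the full scenario tree of a stochastic program.

```python
import numpy as np

# One-scenario sketch of the ALM flow-balance equations (2)-(3).
# Asset 0 is cash; assets 1..n are risky. All inputs are hypothetical.
returns = np.array([[0.02, 0.08, 0.05],    # r_{j,t}: per-period returns
                    [0.02, -0.03, 0.04]])
tc      = np.array([0.0, 0.01, 0.005])     # t_j: proportional transaction costs
x       = np.array([10.0, 50.0, 40.0])     # x_{j,0}: initial holdings
liability = np.array([4.0, 4.0])           # y_t: pension payments due each period

for t, r in enumerate(returns):
    x = (1.0 + r) * x                      # grow every holding by its return
    sell = np.array([0.0, 5.0, 0.0])       # p_{j,t}: sales decisions (placeholder policy)
    buy  = np.array([0.0, 0.0, 3.0])       # q_{j,t}: purchase decisions
    x += buy - sell                        # asset balance, equation (2)
    # Cash balance, equation (3): sale proceeds net of costs, purchases
    # grossed up by costs, minus the liability payment due this period.
    x[0] += ((1 - tc) * sell).sum() - ((1 + tc) * buy).sum() - liability[t]
    assert x[0] >= 0, f"cash shortfall at t={t}: sponsor contribution needed"
    print(f"t={t+1}: holdings={np.round(x, 2)}")
```

In a full stochastic program, the buy/sell decisions become optimization variables chosen to maximize a risk-adjusted objective subject to these balance constraints holding in every scenario.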
Andonov, Bauer, & Cremers (2011) document that investors which invest more in alternative asset classes than in traditional ones are less likely to reject investment in real estate. This study uses the Ziemba (2003) base-case model, which describes the relationship between the parties involved in a pension plan: shareholders of the sponsor company, bondholders, and beneficiaries of the pension fund (workers and pensioners). The model assumes separate balance sheets for the sponsor and the pension plan. The flow of this model can be summarized in several steps. First, debt with face value D is issued by the sponsor. At the same time, the sponsor also issues pension claims, treated as a collateralised form of debt with face value L, which are held by workers and pensioners. The initial capital of the firm is allocated to funding investment projects (company asset value denoted by V) and to funding the pension plan (pension asset value denoted by A). The pension fund allocates a fraction ω of the initial endowment to a performance-seeking portfolio (PSP) and a fraction 1−ω to a liability-hedging portfolio (LHP). In a poor state of the economy, when the assets of the pension fund A are inadequate to fulfil the promised pension payment L, the sponsor makes a contribution equal to the deficit L − A. If the sponsor fails to make the contribution, default is triggered. If the pension fund enjoys a surplus, equity-holders receive a fraction of this surplus, which can be used to pay back bondholders. If the debt cannot be fully repaid, bankruptcy is triggered and equity holders receive nothing. Other factors incorporated are tax effects, bankruptcy costs, and contributions triggered by the presence of regulatory funding-ratio constraints (Edhec Risk Institute, 2012). In the other scenario, without any default trigger, the remaining assets of the pension fund and the sponsor, plus their access to surpluses, accrue to the equity holders.

ALM Framework for Islamic Pension Fund

The Asset Liability Management IAIS (International Association of Insurance Supervisors) Standard No. 13 (2006) defines asset/liability management as "the practice of managing a business so that decisions and actions taken with respect to assets and liabilities are coordinated". In other words, it is the process of maximizing the potential benefit of assets and cash flows to fulfill company obligations, thereby reducing the firm's risk of loss from failing to pay a liability on time. In short, ALM can be described as the process that deals with managing interest rate risk. In the case of a pension plan, the essence of proper ALM should be an orchestrated effort to enhance the pension plan's funded ratio (assets/liabilities). With regard to the liability structure of a pension fund, equity holds an essential role, as first highlighted by Peskin (1997), who argues that there is a need for an asset that would behave identically to the pension liability it funds. He further elaborates on equity exposure and plan-sponsor costs, arguing that the equity exposure of a pension fund greatly influences its future contribution costs. Equity-holders are allowed to acquire the full surplus of the superannuation investment and are expected to benefit from riskier strategies, especially for well-funded plans.

Numerical Illustrations
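The default and surplus logic of the base-case model can be sketched as a simple payoff calculation at the horizon. This is a minimal illustration of the waterfall described above, not Ziemba's full model (which adds taxes, bankruptcy costs, and regulatory funding constraints); the numerical inputs and the surplus-sharing fraction are hypothetical.

```python
def pension_payoffs(A, V, L, D, surplus_share=0.5):
    """Horizon payoff waterfall for the base-case sponsor/pension model.

    A: pension fund assets, V: sponsor (company) assets,
    L: promised pension payment, D: sponsor debt face value.
    surplus_share: fraction of any pension surplus accruing to equity
    (hypothetical parameter). Returns payments to pensioners,
    bondholders, and equity holders.
    """
    deficit = max(L - A, 0.0)          # sponsor contribution if underfunded
    surplus = max(A - L, 0.0)          # pension surplus if overfunded
    if deficit > V:                    # sponsor cannot cover the deficit
        return A + V, 0.0, 0.0         # default: pensioners take what is left
    V_after = V - deficit + surplus_share * surplus
    pensioners = L
    bondholders = min(D, V_after)      # bankruptcy if V_after < D
    equity = max(V_after - D, 0.0)
    return pensioners, bondholders, equity

# Well-funded plan vs. underfunded plan (illustrative numbers)
print(pension_payoffs(A=120, V=200, L=100, D=80))   # surplus case
print(pension_payoffs(A=60, V=30, L=100, D=80))     # default case
```

The collateralised nature of the pension claim shows up in the ordering: pensioners are paid before bondholders, and equity holders only receive the residual.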
Data Assumptions

To estimate asset return distributions, this study used annual historical returns from 1987 to 2016, taken from the DataStream database. Equities were divided into US large stocks (S&P 500 Index), US small stocks (S&P Small Cap 600® Index), and international stocks (represented by the S&P Global 100 Index). Investment in natural resources is represented by the S&P Global Natural Resources index, while REITs are represented by the US Equity REIT Index. The correlation matrix was calculated directly from the historical data for every time series from 1987 to 2016.

An Investment Example

After collecting and calculating the average return as well as the standard deviation of the riskless and risky assets, inspired by the asset classes of the Yale Endowment model while at the same time suiting Sharia principles, this study utilizes the zero-beta CAPM extensively discussed by Black (1972) and Lintner (1965):

E(r̃_s) = E(r̃_z) + β_s [E(r̃_M) − E(r̃_z)]

where r̃_s, r̃_z and r̃_M denote the stochastic returns on the stock, the zero-beta asset, and the market, respectively. From a Sharia law point of view, the prohibition of riba means we cannot use the real return on riskless assets captured by the asset R_f, which represents the lowest return in the investment pecking order and is usually proxied by the payoff of 3-month Treasury notes. For this reason, Ebrahim (1999) proposed an Islamic version of the CAPM in which the zero-beta asset is defined as a Qardh Hassan asset. The Qardh Hassan facility would not run parallel to the X-axis, as imposed by the 'Riba al-Nasi'ah' facility; rather, it would run along the axis. 'Riba al-Nasi'ah' is perceived to generate a real return (i.e., an excess return over inflation); in contrast, 'Qardh Hassan' is perceived to generate a zero real return (i.e., a nominal return equal to anticipated inflation). Under this scenario, the expected return on equity in an Islamic economy can be calculated by substituting E(r̃_z) = π:

E(r̃_s) = π + β_s [E(r̃_M) − π]

where π is the expected rate of inflation. Finally, this yields the optimal combinations of the five (5) asset classes.

Contribution of The Study

This study is an attempt to address the issue of the underdevelopment of the Muslim economy by proposing a highly efficient investment model in line with the objectives of Islamic law. From an Islamic point of view, we are encouraged to be economically developed; however, the lack of ijtihad development has slowed the economic progress of the Muslim ummah. Most contemporary Islamic financial instruments do not have a strong and healthy financial strategy to impress and attract potential investors. To move forward, we develop a modified Yale Endowment model, which pioneered successful portfolio management with outstanding performance over the decades, while at the same time complying with the principles of Islamic finance. We propose a shift from heavy reliance on private equity (i.e., leveraged buyouts and venture capital) to real estate investment (i.e., direct real estate investment and REITs). We hope that our result will help maximize the potential benefit of the huge amount of funds from the Muslim ummah by allocating them to an efficient investment model. It will thereby create new hope for emerging Muslim economies to achieve an impressive return on their investments, rejuvenating the growth and resilience of these economies and bridging them to a financially inclusive world. Hence, Islam with its financial system will stand powerfully as a blessing for the whole world (rahmatan lil 'alamin).
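The substitution of the Qardh Hassan asset for the riskless rate can be illustrated with a short calculation. The betas, market return, and inflation figure below are hypothetical placeholders used only to show how expected returns follow once E(r̃_z) = π replaces R_f.

```python
def islamic_capm(beta, market_return, expected_inflation):
    """Expected return under the Islamic (zero-beta) CAPM of Ebrahim (1999):
    the Qardh Hassan asset earns a zero real return, so E(r_z) = pi."""
    return expected_inflation + beta * (market_return - expected_inflation)

# Hypothetical inputs: 8% expected market return, 3% expected inflation;
# the betas below are illustrative, not estimated from the study's data.
assets = {"US large stocks": 1.00, "US small stocks": 1.20,
          "International stocks": 0.90, "Natural resources": 0.70,
          "US equity REITs": 0.60}
for name, beta in assets.items():
    er = islamic_capm(beta, market_return=0.08, expected_inflation=0.03)
    print(f"{name:22s} beta={beta:.2f}  E[r]={er:.2%}")
```

These expected returns, together with the historical correlation matrix, are the inputs to the mean-variance optimization that produces the optimal asset-class combinations.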
Limitation of The Study

Several factors might hinder the application of this investment model, as summarized below: (i) lack of a strong legal structure and a flexible regulatory system in the predominantly Muslim world, which could introduce exogenous risk factors into Sharia-compliant products; (ii) under-developed secondary markets and low protection of trading and investors; (iii) an indistinct local institutional investor base (Jobst, 2007); and (iv) lack of portfolio management expertise in predominantly Muslim countries. In addition, a more advanced mathematical analysis is needed which considers the complex risks inherent in Islamic financial products, particularly Islamic pension funds. The InnoALM model designed by Ziemba could be adopted to arrive at various forecasting models.

Summary of The Study

This paper commenced with a discussion of the gap between the growing Muslim population and the underdevelopment of its economy. Section 2.1 presents the serious problem faced by the world, including Muslim countries, due to the aging population, which brings up the role of the Islamic pension fund. Section 2.2 discussed the investment model employed by Yale University, the so-called Yale Endowment Model. Following that, Section 2.3 discusses the reasons behind the impressive performance achieved by the Yale Endowment fund. Section 2.4 discusses the evolution of Asset Liability Management (ALM) as a widely used framework for pension funds, which initially developed from the mean-variance analysis deployed by the Yale Endowment model. Section 2.5 discusses the importance of investment policy for the Islamic pension fund. Section 3.1 discusses the suitability of the Yale Endowment model for an Islamic pension fund. Section 3.2 draws up a modified Yale Endowment Model for the Islamic pension fund. Section 3.3 presents an ALM framework for the Islamic pension fund. Section 3.4 provides numerical illustrations by conducting a mathematical simulation of the modified Yale Endowment model under the Islamic CAPM. Finally, Section 4 concludes the paper by presenting limitations which might hinder the application of this model, combined with recommendations for extending the model in future research in order to cope with the more complex risks inherent in Islamic pension funds.

Recommendation for Future Research

The modified investment model in this study is developed under a risk-neutral measure without considering the additional risks inherent in Islamic pension funds. A more advanced stochastic programming model could be introduced in future research to fulfill the need for a state-of-the-art investment model in Muslim countries.
Metabolic networks and bioenergetics of Aurantiochytrium sp. B-072 during storage lipid formation

Baffled shake flask cultivation of Aurantiochytrium sp. B-072 was carried out in a glucose-monosodium glutamate mineral medium at different C/N-ratios (30-165), with glucose fixed at 90 g/L. With increasing C/N-ratio, a modest increase in lipid content (60 to 73% w/w) was observed, whereas fat-free biomass decreased but overall biomass showed little variation. FA-profiles were not affected to a large extent by C/N-ratio, and absolute docosahexaenoic acid (DHA) levels fell in a narrow range (5-6 g/L). However, at C/N > 64 a rapid decrease in lipid synthetic rate and/or incomplete glucose utilization occurred. Glucose and FA-fluxes based on fat-free biomass peaked at a C/N-ratio of 56. This condition was chosen for calculation of the redox balance (NAD(P)H) and energy (ATP) requirement and to estimate the in vivo P/O-ratio during the main period of fatty acid biosynthesis. Several models with different routes for NADPH and acetyl-CoA formation and for re-oxidation of the OAA formed via ATP-citrate lyase were considered, as these influence the redox and energy balance. As an example, using a commonly shown scheme whereby NADPH is supplied by a cytosolic "transhydrogenase cycle" (pyruvate-OAA-malate-pyruvate) and the OAA formed by ATP-citrate lyase is recycled via import into the mitochondria as malate, the calculated NADPH-requirement amounted to 5.5, with an ATP-demand of 10.5 mmol/(g fat-free biomass × h) and an in vivo P/O-ratio (not including non-growth associated maintenance) of 1.6. The lowest ATP requirement is found when acetyl-CoA is transported directly from the mitochondria to the cytosol by carnitine acetyltransferase. Assay of some enzymes critical for NADPH supply indicates that the activity of glucose-6-phosphate dehydrogenase, the first enzyme of the HMP pathway, is far insufficient for the required NADPH-flux, and that malic enzyme must be a major source. The activity of the latter (ca. 300 mU/mg protein) far exceeds that in oleaginous fungi and yeasts.

INTRODUCTION

Docosahexaenoic acid (DHA, 22:6 n-3) is required for maintenance of normal brain function and photoreceptor function in humans (11, 21). DHA is largely derived from fish oils, but declining fish stocks, environmental pollution and season-dependent variation in fatty acid (FA) composition have led to a search for alternative sources, such as various marine microbes, notably Crypthecodinium cohnii and the unrelated thraustochytrids, which include the genus Aurantiochytrium (29). Physiological knowledge of these organisms remains limited, however. One unusual characteristic of the thraustochytrids, and possibly of C. cohnii, is the possession of a polyketide synthase (PKS) system for synthesis of PUFAs. In this system, the introduction of double bonds occurs by isomerization/dehydration rather than by oxygen-dependent desaturases (20). Surprisingly, especially in view of the current interest in specialty FAs as well as biodiesel derived from microbial lipids, flux analysis of lipid metabolism has been largely ignored. As basically no growth occurs in the main lipogenic (N-limited) phase, in silico flux analysis can be significantly simplified, as carbon is consumed only in lipid formation and in ATP generation (dissimilation) for the latter process as well as maintenance. Hence, by assaying the concentrations of glucose as well as cellular FAs in time, sufficient information is obtained to calculate the redox and energy balance.
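As a concrete illustration of this bookkeeping, the sketch below estimates specific glucose and FA fluxes from measured time courses during the (non-growing) lipogenic phase. The sample numbers are hypothetical; the approach simply divides the slope of each concentration time series by the fat-free biomass concentration, which is taken as constant once nitrogen is exhausted.

```python
import numpy as np

# Hypothetical time-course data from the N-limited lipogenic phase
t_h     = np.array([24, 36, 48, 60, 72])          # time (h)
glucose = np.array([70, 58, 46, 35, 24])          # residual glucose (g/L)
tfa     = np.array([4.0, 9.5, 14.8, 20.0, 25.1])  # total fatty acids (g/L)
ffb     = 12.0                                    # fat-free biomass (g/L), ~constant

# Linear fits give the volumetric rates (g/(L x h)) during the linear phase
r_glc = -np.polyfit(t_h, glucose, 1)[0]
r_fa  =  np.polyfit(t_h, tfa, 1)[0]

# Specific fluxes per gram fat-free biomass (g/(g FFB x h))
q_glc, q_fa = r_glc / ffb, r_fa / ffb
print(f"q_glucose = {q_glc:.3f} g/(g FFB x h)")
print(f"q_FA      = {q_fa:.3f} g/(g FFB x h)")
print(f"FA yield on glucose: {r_fa / r_glc:.2f} g/g")
```

With these two fluxes in hand, the difference between glucose carbon consumed and FA carbon formed is what must be accounted for by dissimilation, which is the basis of the redox and energy balance below.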
The aim of this work was, firstly, to study the effect of C/N-ratio in a mineral medium, by varying nitrogen levels at a fixed concentration of glucose, on the specific fluxes of glucose and fatty acids. Secondly, for one cultivation condition, the redox balance and bioenergetics during lipogenesis were calculated for various metabolic networks. Assay of a limited number of enzymes was used to confirm the feasibility of the various networks. Finally, the results are discussed in view of more efficient DHA production, both in terms of absolute DHA-level and productivity.

Strain and maintenance

Aurantiochytrium sp. B-072 was kindly supplied by dr.

Identification of strain

The strain of Aurantiochytrium sp. B-072 was cultured for 3 days on an agar plate. A single colony grown on the agar plate was inoculated into a 50 mL tube with 10 mL of liquid medium prepared with artificial seawater containing 4 g/L glucose, 2 g/L yeast extract and 1 g/L peptone. Cells were grown in this medium at 25 ˚C for 4 days with continuous shaking (150 rpm). The cells were collected by centrifugation (9,000×g, 10 min), washed twice with an equal volume of sterile deionized water and dried by lyophilization before DNA extraction and sequencing. Dried biomass was ground with glass beads by vortexing (23). Bootstrap values were computed from 1,000 replications (32).

Shake-flask cultivation to study effect of C/N-ratio

Preparation of a standardized inoculum was as described by Unagul et al. (25). A 5% v/v inoculum (adjusted to an OD660 of 2.0) was transferred to 500 mL baffled flasks (Bellco, USA) containing 100 mL of the test media and shaken at 25 ˚C and 200 rpm until glucose was exhausted. The mineral medium consisted of glucose (fixed at 90 g/L unless otherwise noted); monosodium glutamate monohydrate (MSG) at 3, 6, 8, 9, 12, 15 or 18 g/L; KH2PO4, 0.6 g/L; artificial sea salts (Sigma, USA; refer to the manufacturer's product sheet for detailed composition), 15 g/L; and trace elements and vitamins (28), both 1 mL/L. The media were sterilized at 120 ˚C for 15 min in 80% of the final volume, followed by addition of glucose (sterilized at 110 ˚C as a 50% w/v stock solution) and vitamins (sterilized by 0.2 micron filter). They were then made up to the final volume (100 mL) with sterilized distilled water. Experiments were performed in duplicate.

Analysis of culture supernatant

Glucose was assayed with a commercial enzymatic kit (Glucose liquicolor, Human, Germany). Glutamate was assayed via an amino nitrogen assay by the TNBS (trinitrobenzene sulfonic acid) procedure (1).

Assay of biomass dry weight

Culture samples (2 mL) were harvested by centrifugation (9,000×g, 10 min, 4 ˚C), and the biomass was washed twice with distilled water. After freeze-drying, the pellet was placed in a desiccator before gravimetric determination.

Analysis of fatty acids

Briefly, 15-20 mg of freeze-dried biomass was accurately weighed and esterified with 4% sulphuric acid in methanol plus the antioxidant butylated hydroxytoluene for 1 h at 90 ˚C. Heptadecanoic acid (Sigma, St. Louis, USA) was used as internal standard. Samples were subjected to gas chromatography on a GC-17A instrument (Shimadzu, Japan) equipped with a Supelco OmegawaxTM 250 fused silica capillary column. The procedure and instrumentation have been described in detail by Unagul et al. (25).

Preparation of cell-free extract (cfe) and enzyme assays

Biomass (ca. 300 mg dry weight) was harvested at the late lipogenic phase. Enzyme assays were performed with a Jasco B-530 spectrophotometer at 25 ˚C.
All assays were carried out with two different amounts of cfe and corrected for endogenous activity. ATP:citrate lyase (ACL) was assayed according to Srere (22). Carnitine acetyltransferase (CAT) was assayed according to Kohlhaw and Tan. The measured specific activities of enzymes were converted into maximal cellular fluxes, assuming the enzymes operate at Vmax in vivo, according to Postma et al. (19).

Definitions and calculations

As MSG contains a significant amount of carbon, the C/N-ratio was calculated from (C_glucose + C_MSG)/N_MSG, or in g/L (using the carbon mass fractions of glucose and MSG monohydrate, 0.40 and 0.32, and the nitrogen mass fraction of MSG monohydrate, 0.075):

C/N = (0.40 × [glucose] + 0.32 × [MSG·H2O]) / (0.075 × [MSG·H2O])

Statistics

Each data point represents an average of duplicate experiments and assays. The data were subjected to statistical analysis using SPSS (SPSS Inc., 1998, Chicago, IL, USA) version 15 for Windows (Duncan's test). Significant differences were reported for P < 0.05.

Identification of strain

According to 18S rRNA analysis (sequence deposited in the GenBank database under accession number JF266572), strain B-072 is classified as an Aurantiochytrium sp. (formerly Schizochytrium). The highest similarity to other strains in the database was found with Aurantiochytrium sp. LK4 (98% identity with 100% coverage), which was isolated in a Hong Kong mangrove forest.

Growth and fatty acid content at different C/N-ratios

A typical substrate consumption and product formation profile for a cultivation at a C/N-ratio of 56 (90 g/L glucose and 9.4 g/L MSG·H2O) is depicted in Fig. 1a. Following exponential growth under non-N-limited conditions with low lipid formation, a linear phase of lipid formation was observed in the N-limited region until glucose was exhausted. Cell size of Aurantiochytrium sp. B-072 was 8-15 micron at a C/N-ratio of 56. However, at a C/N-ratio of 84 a clear decrease in the rate of glucose uptake, with a concomitant reduction in lipid synthesis rate, was observed approximately halfway through the lipogenic phase, but eventually all glucose was consumed and a lipid content of 73% w/w was reached (Fig. 1b). At a C/N-ratio of 165, this decrease in glucose uptake/lipid formation rates became even more pronounced, and glucose consumption terminated completely at a residual concentration of ca. 30 g/L. At this point, the biomass level and TFA (Total Fatty Acid) content were only 29 g/L and 65.7% w/w, respectively (data not shown). Data pertaining to maximal as well as fat-free biomass and TFA for all C/N-ratios tested are summarized in Fig. 2. Fat-free biomass decreased, and TFA increased, with decreasing amounts of N in the medium, but total biomass fell in a narrow range (30-38 g/L), with a statistically higher value for the lowest C/N-ratio tested (Fig. 2). In a further experiment, the effect of fixing the C/N-ratio while increasing the absolute DHA-level was examined. As in the experiment described above, a C/N-ratio of 56 resulted in the highest FA synthetic rate (see section on fluxes below); this ratio was therefore selected, and glucose and MSG levels were increased to 150 and 15 g/L, respectively. This condition resulted in 51±2 g/L biomass with a TFA content of 71±2% w/w, statistically similar to the value of 69±2% w/w observed at the same C/N-ratio but with 90 g/L glucose (Table 1). From Table 1 it can be calculated that a maximal level of ca. 6 g DHA/L was obtained and that DHA constituted almost 20% w/w of total biomass. At this higher initial glucose level, volumetric and specific TFA-fluxes amounted to 0.47 g/(L × h) and 0.66 g/(L × h), respectively (Fig. 3a, b).
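The conversion from measured specific activity to a potential cellular flux (q_pot) can be sketched as follows. The protein content of fat-free biomass used here is an assumed placeholder, since the actual value and the exact conversion of Postma et al. (19) are not reproduced in this excerpt.

```python
def q_pot(sa_mU_per_mg_protein, protein_fraction=0.3):
    """Convert enzyme specific activity to a maximal cellular flux.

    sa_mU_per_mg_protein: specific activity in mU/mg protein,
        i.e. nmol/(mg protein x min).
    protein_fraction: assumed g protein per g fat-free biomass
        (hypothetical placeholder).
    Returns q_pot in mmol/(g fat-free biomass x h), assuming the
    enzyme operates at Vmax in vivo.
    """
    # nmol/(mg protein x min) = umol/(g protein x min);
    # multiply by 60 min/h, by 1e-3 mmol/umol, and by the protein fraction
    return sa_mU_per_mg_protein * 60e-3 * protein_fraction

# Malic enzyme at ~300 mU/mg protein (the order reported in Table 5)
print(f"q_pot(ME) ~ {q_pot(300):.1f} mmol NADPH/(g FFB x h)")
```

The same conversion applied to the low G6PDH activity shows immediately why the HMP pathway alone cannot carry the required NADPH flux.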
Metabolic networks and energetics during the lipogenic phase

In order to potentially improve product yield or specific flux, for instance by metabolic engineering, an understanding of the metabolic network(s) and energetics involved in lipogenesis is important. A presumed key factor for lipogenesis in oleaginous microorganisms is the presence of a cytosolic ATP:citrate lyase (ACL), which splits citrate into acetyl-CoA and oxaloacetate (OAA) (13). Whereas the former is then used for FA synthesis, the latter has to be "re-cycled". Based on a literature survey, three models were considered for the latter process. In model I, which was originally developed for oleaginous yeasts (8), NADPH is supplied by a cytosolic "transhydrogenase cycle" (pyruvate-OAA-malate-pyruvate) and the OAA formed by ACL is recycled via import into the mitochondria as malate (Fig. 4). In model II, the OAA formed by ACL is itself converted via malate to pyruvate by malic enzyme, thereby supplying NADPH, with the pyruvate transported back into the mitochondria (Fig. 4). Model III is based on the observation that some non-oleaginous yeasts, i.e. Saccharomyces cerevisiae (9), and plants (14) can directly transport OAA from the cytosol to the mitochondria. The exact mechanism has not been resolved, though it is conceivable that in the case of B-072 a citrate/OAA exchange might occur (Fig. 4b). Finally, model IV lacks ACL but uses transport of acetyl-CoA from the mitochondria to the cytosol by carnitine acetyltransferase (CAT). In this case, no OAA is formed (Fig. 4a). As an example, the calculation of the redox and energy balance employing model I is presented in Table 2. Data for the other three models have been summarized in Table 3, with model I included for easier comparison. Model II has a lower requirement for NADPH provision than the other models (2.1 vs 5.5 mmol/(g FFB × h)) due to the involvement of malic enzyme in OAA recycling, as shown in Fig. 4. However, it results in the highest calculated ATP requirement, due to transport costs for citrate and pyruvate (cf. Fig. 4b; Table 3). To check the feasibility of a major contribution of the HMP pathway, the specific activity (s.a.) of glucose-6-P dehydrogenase (G6PDH) and 6-P-gluconate dehydrogenase (6PGDH) was assayed. In contrast to yeasts (3), a low s.a. of G6PDH was found, which would limit the flux through the HMP pathway to only 1.4 mmol G6P/(g FFB × h), equivalent to 2.8 mmol NADPH/(g FFB × h) (Table 5). Except for model II, this falls far short of the calculated flux; hence other sources of NADPH must be involved. The activity of malic enzyme was sufficient to account solely for the NADPH requirement (Table 5). This confirms various reports suggesting that, apart from ACL, the presence of NADP-malic enzyme is another critical factor for lipogenesis (13, 30).

Table 5. Specific activity (nmol/(mg protein × min)) of NADPH- and acetyl-CoA-producing enzymes for selected C/N-ratios during growth of Aurantiochytrium sp. B-072 on a glucose-MSG mineral medium. Samples were taken at the late lipogenic phase. The potential NADPH-flux (q_pot, mmol/(g fat-free biomass × h)) through these enzymes, assuming they operate at Vmax, is also shown, as is the calculated flux.

Implications for optimization of DHA production by Aurantiochytrium sp.

In some oleaginous fungi, lipid accumulation ceases at a relatively low lipid content even when glucose is still in excess. This is thought to be due to a rapid decrease in the specific activity of, particularly, malic enzyme, thereby limiting NADPH supply for lipid formation. Indeed, overexpression of ME (from ca. 30 to 70 mU/mg protein at the start of lipogenesis) in recombinant strains of such a fungus increased lipid content to 30% w/w (33). However, when malic and other NADPH-producing enzymes were assayed in the late lipogenic phase in B-072 at several C/N-ratios (i.e. 37, 56 and 84), the only major difference at the high C/N-ratio was a large decrease in NADP+-isocitrate dehydrogenase(s) (Table 5).
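To see how such a balance is assembled, the sketch below does simplified per-acetyl-CoA bookkeeping for a model-I-like scheme. It is a rough illustration under stated assumptions (glycolysis to pyruvate, mitochondrial PDH, citrate export with ACL, 2 NADPH and 1 ATP per C2 unit added by FAS, and 1 ATP plus one cytosolic NADH consumed per NADPH made by the transhydrogenase cycle); it is not the paper's full stoichiometric model, which also accounts for transport steps and the PKS route to DHA.

```python
# Simplified per-C2 (acetyl-CoA) bookkeeping for a model-I-like network.
# Positive = produced, negative = consumed. Highly simplified: transport
# costs, the PKS pathway to DHA, and maintenance ATP are ignored.

NADPH_PER_C2 = 2.0   # reductions per FAS elongation cycle
ATP_ACC      = 1.0   # acetyl-CoA carboxylase (malonyl-CoA) per C2
ATP_ACL      = 1.0   # ATP:citrate lyase per cytosolic acetyl-CoA

atp   = +1.0                         # glycolysis: 0.5 glucose -> pyruvate
nadh  = +1.0                         # glycolytic NADH (cytosolic)
nadh += +1.0                         # pyruvate dehydrogenase (mitochondrial)
atp  -= ATP_ACL + ATP_ACC            # citrate cleavage + carboxylation
# Transhydrogenase cycle: each NADPH costs 1 ATP (pyruvate carboxylase)
# and converts one cytosolic NADH into NADPH.
atp  -= NADPH_PER_C2
nadh -= NADPH_PER_C2

print(f"per C2 unit: ATP balance = {atp:+.1f}, NADH balance = {nadh:+.1f}, "
      f"NADPH supplied = {NADPH_PER_C2:.1f}")
# The ATP deficit must be met by respiring surplus glucose; dividing the
# required ATP flux by the available NADH (+ FADH2) flux then yields the
# apparent in vivo P/O-ratio, as done in the full model of the paper.
```

The qualitative conclusion survives the simplifications: lipogenesis is ATP-limited rather than redox-limited, so schemes that cut transport or cleavage costs (such as the CAT route of model IV) improve the energetics.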
Furthermore, it is noteworthy that the activity of ME in B-072 (246-330 mU/mg protein, Table 5) exceeds that reported for various oleaginous microbes by a factor of three to five (for data see (4) and references therein). The high conversion factor of glucose into TFA, coupled with the high apparent in vivo P/O-ratios irrespective of the model used (Table 4), suggests that the overall yield of lipid in B-072 is close to its theoretical maximum. Hence, medium development will be of limited use, although ammonium might be a cheaper alternative to glutamate. In theory, provision of cytosolic acetyl-CoA by removal of ACL and re-routing through CAT (as in model IV) should give a small increase in lipid yield due to the more favorable energetics (Table 3). The main aim, however, would be a relative increase in the %DHA/TFA, which is not greatly influenced by the C/N-ratio (Table 1). From literature data, it appears that higher DHA-fractions in the TFA can be achieved by lowering the process temperature (24) and/or by oxygen limitation (5). Unfortunately, these factors will also reduce the specific growth and lipid synthetic rates. Overexpression of the PKS system by random mutation would be useful in this respect and might avoid GMO issues. It would be of great interest to attempt further analysis of metabolic pathways in Aurantiochytrium, for instance by 13C-labeling studies.