A general method for central potentials in quantum mechanics
We focus on a recently developed generalized pseudospectral method for the accurate, efficient treatment of certain central potentials of interest in various branches of quantum mechanics, usually possessing a singularity. Essentially this allows an optimal, nonuniform spatial discretization of the pertinent single-particle Schrödinger equation satisfying the Dirichlet boundary condition, leading to a standard diagonalization of symmetric matrices. Its validity and feasibility have been demonstrated for a wide range of important potentials such as the Hulthén, Yukawa, generalized spiked harmonic oscillator, Hellmann, and Coulomb potentials without/with various perturbations (for instance, linear and quadratic), etc. Although initially designed for singular potentials, the method has also been remarkably successful for various other cases such as power-law, logarithmic and harmonic potentials containing higher-order perturbations, 3D rational potentials, as well as confinement studies. Furthermore, a large number of low-, moderately high- and high-lying multiply excited Rydberg states, such as singly and doubly excited He as well as triply excited hollow $2l2l'2l'' (n \ge 2)$ and doubly-hollow $3l3l'3l''$ resonances in many-electron atoms, have been treated by this approach within a Kohn-Sham (KS) density functional theory (DFT) framework with great success. This offers very high-quality results for both ground and higher-lying states for arbitrary values of the potential parameters (covering both weak and strong coupling) with equal ease and efficacy. In all cases, excellent agreement with literature results is observed; in many cases this surpasses the accuracy of all other existing results, while on other occasions our results are comparable to the best ones available in the literature.
I. BACKGROUND AND MOTIVATION
Study of singular potentials in quantum mechanics is almost as old as quantum mechanics itself. Many important areas in physics and chemistry, such as atomic, molecular, solid-state, nuclear and particle physics, field theory, astrophysics, etc., frequently demand quantum mechanical solutions where the governing Hamiltonian contains certain central potential (typically having a singularity). Often this also includes an extra external perturbation term characterizing the physical system under investigation. Exact analytical solution of the respective Schrödinger equation could be obtained only for a handful of idealized, model situations, such as the harmonic oscillator or Coulomb potential, which are unfortunately quite inadequate for majority of our realistic problems. Thus, leaving aside a very few privileged cases, for almost all practical purposes, recourse must be taken to approximation methods. Consequently, an impressive amount of approximate analytical as well as numerical methods have been proposed over the decades, employing a variety of attractive elegant techniques for their studies. In general, singular potentials pose more spectacular difficulties and challenges than the regular ones. Therefore, development of a general method which can offer accurate reliable results on such potentials has constituted one of the most fruitful and active areas of research for long time, and still this continues to grow in time.
Because of the difficulties concerning the physical interpretation of attractive singular potentials, the scientific community long concluded that no significance could be attached to the singularity of such potentials at the center of force. The mathematical difficulties in tackling these potentials also made them more formidable. Historically, probably the first important observation was made for the well-known Coulomb potential in a relativistic setting.
Plesset [1], while investigating the Dirac equation for an electron in the Coulomb field of a fictitious nucleus of charge Z > 137 (i.e., Zα > 1), surprisingly found that the essential distinction between attractive and repulsive potentials was lost; more precisely, all potentials tend to display the characteristic features of an attractive potential near the origin. Both produce large and small components of the wave function behaving in that region like a power of r times a factor $\exp\left[\pm i \int^r V(r')\,dr'\right]$, which is in sharp contrast with the non-relativistic case, where $\sqrt{V(r)}$ rather than V(r) appears in the exponential. The Klein-Gordon equation shows the same behavior. Non-uniqueness of the solution was also noted by Case [2], who resolved this dilemma by specifying one bound-state energy and determining the rest of the bound-state spectrum by imposing orthogonality on the wave functions.
In another stimulating paper [3] in this direction, the authors argued that physical interactions in real-world problems were more likely to be highly singular than regular (non-singular), and that their study was therefore more relevant than that of their regular counterparts. They also showed that singular potentials display Regge behavior in simpler terms than the regular potentials. Finding effective potentials for field-theoretical interactions by means of Bethe-Salpeter or quasi-potential equations has given further impetus to the subject. At the same time, in a pioneering work [4], a correlation between the renormalizability attributes of a field theory and the nature of the effective potential was established. The effective potentials for super-renormalizable, renormalizable and non-renormalizable theories were found to be regular, transitional and singular, respectively. Further work on the peratization approximation in the context of field-theoretical studies of weak interactions [5,6] has generated much interest in the study of singular potentials. In elementary-particle scattering, short-range interactions between such particles are described by repulsive singular potentials. In molecular physics as well, the long-range and short-range parts of the inter-atomic and inter-molecular forces are represented by various singular potentials. The long-range part includes purely electrostatic forces between polar molecules, induction forces between a polar and a non-polar molecule, and dispersion forces between two non-polar molecules; the corresponding potential is attractive singular when extended to the origin. The short-range force, on the other hand, develops due to an overlap of the electron clouds as atoms or molecules approach each other at shorter distances. This force is typically represented phenomenologically by a repulsive singular (or sometimes non-singular) potential, such as a Lennard-Jones potential.
The preceding examples illustrate some of the broad application areas where singular potentials serve as mathematical models for certain concepts (underlying interaction forces).
Time is ripe now to mention a few words about some specific potentials. The literature is vast and here we restrict ourselves to only a few selective ones. The deceptively simple one-dimensional potential V(x) = −e²/|x| has been studied by many workers [7][8][9][10][11][12][13][14][15][16][17][18], mainly for the following reasons: (a) exact solvability; (b) its unfortunate formal resemblance to its three-dimensional counterpart, the H atom, which has brought it the name "one-dimensional H atom problem"; (c) diverse physical applications, such as excitons in high-temperature superconductors, semiconductors and polymers, the 1D electron gas at the helium surface, the Wigner crystal, etc. Being a function of |x|, this potential is not analytic (e.g., x = 0 is not just a pole), and note that the independent variable spans the whole x axis including the origin. Acceptable solutions must satisfy the wave equation over the entire range of x.
Its exact solution first appeared in 1959 [7], where two unusual features were observed for this 1D system. First, the discrete bound-state spectrum was found to be degenerate, and the ground state corresponded to infinite binding energy. A later work [11] employing the momentum representation ascribed these rather peculiar features to a hidden O(2) symmetry, which was criticized later. Thereafter, this system has been studied by a wide range of mathematical methods, such as the generalized Laplace transform, Fourier transform, quantum phase-space representation, momentum-space representation, etc. For the past several decades, this apparently "simple" system, initially considered a pedagogical problem, has generated intriguing controversies among researchers, such as the degeneracy in ground and excited states, the hidden-symmetry connection, etc., some of which still remain unresolved.
In contrast to the 1D H atom potential, the other 1D Coulomb potential V (x) = −Ze 2 /x has been explored relatively less [19,20].
Quantum mechanical solution of the celebrated 3D Coulomb problem, the H atom, was published as early as 1926 by Schrödinger in a series of papers, by solving an eigenvalue equation for the energy of this system. The quantum mechanical description of the H atom via a central Coulomb potential holds the unique distinction of being one of the very few realistic physical systems to offer separable and exactly solvable solutions within both the non-relativistic and relativistic pictures [21]. Innumerable works have been done in the following years to obtain valuable insights into this simplest atomic system in 2, 3 and N dimensions, from both mathematical and physical perspectives [22][23][24][25][26][27][28]. Although many textbooks on elementary quantum mechanics present the H atom as a closed case, this prototypical system continues to offer many new and interesting features, such as its dynamic nature, its evolution with time under the influence of a strong electromagnetic field, or its behavior in a reference frame of arbitrary dimensions. It is now well known that most of the 3D results actually have an N-dimensional counterpart, including the Runge-Lenz vector, symmetry properties, Clebsch-Gordan coefficients, etc. Symmetry in these higher-dimensional systems manifests itself in the separation of the corresponding time-independent Schrödinger equation into a radial and an angular part in hyperspherical coordinates, in distinct analogy to the spherical coordinates in 3D and polar coordinates in 2D. In other words, the angular solutions emerge as eigenfunctions of a generalized angular momentum operator, usually referred to as hyperspherical harmonics.

Another interesting feature of the N-dimensional Coulomb problem is that its solution is connected to that of the D (= 2N − 2) dimensional harmonic oscillator potential [28]. In other words, there exists a transformation which turns the radial equation for a generalized N-dimensional Coulomb potential into that of a D-dimensional harmonic oscillator. This link consists of a map r² = ρ, together with a constant Λ that arises because the respective eigenfunctions are normalized in different dimensions. The correspondence holds only for certain values of N and D, and certain relations between the respective quantum numbers must also be fulfilled.

The singular potentials 1/rⁿ with n ≥ 2 are of significant current interest. The n = 2 potential has relevance in the three-body problem in nuclear physics, as well as in point-dipole interactions in molecular physics [29,30]. Historically, one of the first important difficult cases in dealing with highly singular potentials was encountered in the quantum mechanical study of a strongly attractive 1/r² term in the Hamiltonian [2]. This potential shows numerous fascinating features rich in physics and mathematics. For example, being uniquely and interestingly placed on the borderline of the so-called regular and singular potentials, it defines a transition point in non-relativistic quantum mechanics [31]. The 1/r² interaction (in addition to the two-dimensional delta function) is also shown to exhibit the phenomenon of anomalous symmetry breaking, wherein a symmetry present in the system at the classical level is broken by introducing quantization into the picture [32]. The case n = 3 is used to describe the tensor force between nucleons in nuclear physics.
Also in the perturbation theory of nuclear interactions [33], proper renormalization of this potential constitutes an important step. Interaction of an atom with a flat wall, at short distances, is governed by an attractive van der Waals potential proportional to −1/r³, while at larger distances it is governed by the highly retarded Casimir-Polder potential proportional to −1/r⁴ [34]. The n = 4 potential also describes the interaction between a charge and an induced dipole [35].
Strictly speaking, the inverse fourth-power potential is the only true singular potential (in the context of non-relativistic quantum mechanics, these are potentials having a singularity at least as strong as the inverse square at the center of force [31]) to offer exact analytical solutions (in terms of Mathieu functions), and it is one of the very few potentials overall, besides the aforementioned Coulomb (r⁻¹) and harmonic (r²) as well as the Morse and Pöschl-Teller potentials, etc., to do so [31,36]. The case n = 5 corresponds to a perturbation correction to the tensor force in the nuclear potential [33]. Both n = 6 and 7 are connected to the London and Casimir-Polder type van der Waals forces [37]. Scattering of atoms by a conducting sphere is represented by a −1/r⁶ potential at small distances and a −1/r⁷ potential at large distances [38]. Inter-atomic and inter-molecular forces at short distances (strongly repulsive due to the overlap of electron clouds) are usually represented by singular potentials such as the Lennard-Jones (12,6) potential, which has a ∼ 1/r¹² behavior.
The layout of the chapter is as follows. Section II gives an overview of the distinguishing characteristics of singular and regular potentials. Necessary details of the current methodology are summarized in Section III. Section IV discusses results for some central potentials (both singular and non-singular) obtained using this method, with relevant references to the literature. Finally, we end with a few concluding remarks in Section V.
II. REGULAR VS. SINGULAR
This article exclusively deals with the non-relativistic quantum mechanical situation, while making only casual glances at its relativistic counterpart on some occasions. Since a majority of the potentials considered in this work are singular, it will be useful for our future discussion to differentiate these from regular potentials. The motion of a single non-relativistic particle in the presence of a spherically symmetric potential V(|r|) ≡ V(r) is governed by the following time-independent Schrödinger equation in 3D space (henceforth, ħ = m = 1 is assumed, unless otherwise mentioned),
$$\left[-\frac{1}{2}\nabla^2 + V(r)\right]\psi(\mathbf{r}) = E\,\psi(\mathbf{r}).$$
Depending on the boundary conditions imposed on such a wave function at large distances, three different types of physical situations can be associated with this equation, namely, bound-, scattering- and resonant-state problems. Alternatively, boundary conditions can be prescribed at the origin. The wave function can be resolved into a sum of products of an r-dependent term and an angular term. In the case of scattering, we thus have the familiar partial-wave expansion,
$$\psi(\mathbf{r}) = \frac{1}{r}\sum_{l=0}^{\infty} a_l\, u_l(r)\, P_l(\cos\theta),$$
where the wave number k is related to the energy E by $k = \sqrt{2mE/\hbar^2}$ and P_l(cos θ) signifies the Legendre polynomial. For bound states of angular momentum l, only one appropriately normalized term would appear from this summation. The corresponding radial wave function u_l(r) then satisfies a differential equation of the following form,
$$-\frac{1}{2}\,\frac{d^2 u_l}{dr^2} + \left[V(r) + \frac{l(l+1)}{2r^2}\right] u_l(r) = E\, u_l(r).$$
Most studies of singular potentials eventually lead to an understanding of this radial equation and its solutions.
Following [31], a potential V(r) is defined as regular at r = 0 if
$$\lim_{r\to 0} r^2\, V(r) = 0, \qquad (7)$$
and singular at r = 0 if
$$\lim_{r\to 0} \left| r^2\, V(r) \right| = \infty. \qquad (8)$$
If the limiting value of r²V(r) as r → 0 is finite and non-zero, V(r) is termed a transition potential. A singular potential is classified as repulsive or attractive according to whether the limiting value is, respectively, +∞ or −∞. For instance, the Coulomb potential −1/r is regular in this sense, the inverse-square potential is a transition case, and −1/r³ is attractively singular. The principal difference between a singular and a nonsingular potential lies in the fact that in the latter case, solutions of the Schrödinger equation subject to the quadratic integrability condition form a complete orthonormal set, while in the former case, solutions are too numerous and hence over-complete.
In the relativistic domain, however, the scenario is much different from the above non-relativistic situation. Much weaker singularities in the potential already count as "highly singular", so that within a relativistic wave equation even a Coulomb potential is highly singular [2]. Relativistic motion leads to a different singularity criterion; viz., in the vicinity of r = 0, such potentials give an infinite value for the limit
$$\lim_{r\to 0} r\, V(r), \qquad (9)$$
in contrast to Eq. (8) of the non-relativistic case. The regular potentials are defined by a vanishing value of this limit, while a finite, non-zero value characterizes a transition potential, which includes all those potentials exhibiting Coulomb-like behavior at r = 0. Finally, in contrast to the non-relativistic situation, all singular potentials behave as attractively singular in the equations of motion in the relativistic regime.
While the repulsive singular potentials offer no outstanding problems regarding their physical interpretation, corresponding attractive case causes serious concern. Physical solutions can be determined uniquely for the former, while it is not so for the latter. From a classical point of view, a particle moving in an attractive singular potential gives rise to well-defined scattering only if certain conditions are fulfilled (such as the impact parameter exceeds certain threshold critical value). For any other motions including bounded motions, the particle falls to the origin with an infinite velocity. Furthermore, both bound or scattering trajectories are ill-defined unless trajectory tangents are matched and also energy as well as angular momentum conservation is maintained at the center of force. The scattering problem for an attractive singular potential is also not resolved in quantum picture as in the classical case, and is never defined without extraneous physical assumptions (an arbitrary phase parameter remains to be assigned). In the bound-state scenario, the attractive singular potentials offer a non-unique spectrum consisting of an infinite number of bound states with no lower bounds on the energy. For pure power-law potentials, within non-relativistic quantum mechanics, transition from regular to singular behavior begins to occur at r −2 , in much the same way as this happens with the Coulomb potential in the relativistic domain.
III. THE GENERALIZED PSEUDOSPECTRAL METHOD
In this article, we are concerned with accurate bound-state solutions within non-relativistic quantum mechanics. As already mentioned, the radial Schrödinger equation for a singular potential can be solved exactly in only one case, the inverse fourth power, where the wave functions are obtained in the form of modified Mathieu functions of generally complex argument. Thus, approximation methods must be invoked in all other cases. A large number of approximate analytic, semi-analytic and numerical methods have been suggested over the past several decades. These are appropriately discussed in Section IV. In this section we briefly summarize the essential features of the method used in the current work to solve the radial eigenvalue problem. For a more detailed account of this method, see [39][40][41][42][43][44][45][46][47][48][49][50][51][52] and the references therein.
The desired time-independent radial Schrödinger equation for a single particle in the non-relativistic case can be written as
$$\hat{H}\,\psi(r) = E\,\psi(r). \qquad (10)$$
The Hamiltonian operator includes the usual kinetic and potential energy operators (symbols have their usual meanings),
$$\hat{H}(r) = -\frac{1}{2}\,\frac{d^2}{dr^2} + v(r), \qquad (11)$$
with
$$v(r) = V(r) + \frac{\ell(\ell+1)}{2r^2}, \qquad (12)$$
and V(r) is the potential in question. Generally speaking, finite-difference spatial discretization schemes often require a large number of grid points to achieve good accuracy, presumably because the majority of these methods employ a uniform mesh (non-uniform schemes have been used on a few occasions as well, e.g., in [53]). The generalized pseudospectral (GPS) method, however, can give a non-uniform and optimal spatial discretization accurately, allowing one to work with a denser mesh at shorter r and a coarser mesh at larger r. Additionally, the GPS method is computationally orders of magnitude faster than the finite-difference or finite-element methods.
The principal feature of this scheme lies in approximating a function f(x) defined on the interval [−1, 1] by an N-th order polynomial f_N(x),
$$f(x) \cong f_N(x) = \sum_{j=0}^{N} f(x_j)\, g_j(x), \qquad (13)$$
such that the approximation is exact at the collocation points x_j, i.e.,
$$f_N(x_j) = f(x_j). \qquad (14)$$
In what follows we employ the Legendre pseudospectral method using x_0 = −1 and x_N = 1, where x_j (j = 1, . . . , N − 1) are obtainable from the roots of the first derivative of the Legendre polynomial P_N(x) with respect to x, i.e.,
$$P'_N(x_j) = 0, \qquad j = 1, \ldots, N-1. \qquad (15)$$
The functions g_j(x) in Eq. (13), called cardinal functions, are given by the following expression,
$$g_j(x) = -\frac{1}{N(N+1)\,P_N(x_j)}\;\frac{(1-x^2)\,P'_N(x)}{x - x_j}, \qquad (16)$$
and they have the unique property g_j(x_{j'}) = δ_{j'j}. Now the semi-infinite domain r ∈ [0, ∞] is mapped onto the finite domain x ∈ [−1, 1] by a transformation r = r(x). One can make use of the following algebraic nonlinear mapping,
$$r = r(x) = L\,\frac{1+x}{1-x+\alpha}, \qquad (17)$$
where L and α = 2L/r_max may be termed the mapping parameters. Furthermore, introducing a relation of the form
$$\psi(r(x)) = \sqrt{r'(x)}\, f(x), \qquad (18)$$
in conjunction with a symmetrization procedure [54,55], eventually leads to a Hamiltonian of the following transformed form,
$$\hat{H}(x) = -\frac{1}{2}\,\frac{1}{r'(x)}\,\frac{d^2}{dx^2}\,\frac{1}{r'(x)} + v(r(x)) + v_m(x), \qquad (19)$$
where v_m(x) is given by
$$v_m(x) = \frac{3\,(r'')^2 - 2\, r'''\, r'}{8\,(r')^4}. \qquad (20)$$
The advantage is that this leads to a symmetric matrix eigenvalue problem which can be readily solved to give accurate eigenvalues and eigenfunctions. For the particular transformation used in this work, v_m(x) = 0. This discretization then leads to the following set of coupled equations,
$$\sum_{j=0}^{N}\left[-\frac{1}{2}\, D^{(2)}_{j'j} + \delta_{j'j}\, v(r(x_j)) + \delta_{j'j}\, v_m(x_j)\right] A_j = E\, A_{j'}, \qquad j' = 1, \ldots, N-1, \qquad (21)$$
where A_j = f(x_j) denotes the transformed wave function at the collocation points and D^{(2)}_{j'j}, the symmetrized second derivative of the cardinal function, follows from Eq. (16) and the mapping factor r'(x) evaluated at the collocation points. It is worth mentioning here that the GPS method offers both the simplicity of direct finite-difference and/or finite-element methods and the fast convergence of finite-basis-set methods. In this sense, one can have the cake and eat it too! It has been rigorously proved [56,57] that the method guarantees an exponential (or, alternatively termed, infinite-order) convergence for a problem with a smooth or infinitely differentiable solution (which is often the case), provided that the orthogonal functions of a common singular Sturm-Liouville problem are used. Here 'exponential convergence' implies that the error of the approximate solution decreases asymptotically faster than the algebraic decay of any order. Moreover, a GPS scheme having (N+1) grid points is usually equivalent in accuracy to a corresponding basis-set expansion method involving N basis functions. Many other details pertaining to this approach can be found in the references [39][40][41][42][43][44][45][46][47][48][49][50][51][52].
Finally, a series of test calculations was done with a variety of potentials in the literature for which exact/near-exact solutions have been reported, in order to optimize the performance of the method with respect to the mapping parameters. In this way, the following parameter set (r_max = 200, α = 25, N = 300) has been used consistently throughout this work (unless otherwise mentioned), which seemed to be quite satisfactory for the purpose at hand.
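To make the above procedure concrete, the following is a minimal Python sketch of the mapped-collocation idea (added here purely for illustration; it is not the code used in the cited works). It builds the Legendre-Gauss-Lobatto points, the algebraic map r(x) = L(1+x)/(1−x+α) with α = 2L/r_max, and the collocation Hamiltonian, and then diagonalizes. The function names and the default parameters (N = 128, L = 25, r_max = 200) are illustrative choices, and the symmetrization step of Eqs. (18)-(20) is deliberately skipped, a general eigensolver being used instead.

```python
import numpy as np
from numpy.polynomial import legendre as leg
from scipy.linalg import eigvals


def lgl_nodes(N):
    """Legendre-Gauss-Lobatto points: x0 = -1, xN = +1, plus the roots of P_N'(x)."""
    cN = np.zeros(N + 1)
    cN[-1] = 1.0                                      # coefficient vector of P_N
    interior = np.sort(leg.legroots(leg.legder(cN)).real)
    return np.concatenate(([-1.0], interior, [1.0]))


def lgl_diff_matrix(x):
    """Standard first-derivative collocation matrix on the LGL points x."""
    N = len(x) - 1
    cN = np.zeros(N + 1)
    cN[-1] = 1.0
    P = leg.legval(x, cN)                             # P_N evaluated at the nodes
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = P[i] / (P[j] * (x[i] - x[j]))
    D[0, 0] = -N * (N + 1) / 4.0
    D[N, N] = N * (N + 1) / 4.0
    return D


def gps_eigenvalues(V, l=0, N=128, L=25.0, rmax=200.0, nev=5):
    """Mapped-collocation (GPS-flavored) solution of the radial equation
       -(1/2) u'' + [V(r) + l(l+1)/(2 r^2)] u = E u,  u(0) = u(rmax) = 0,
    in atomic units.  Returns the lowest `nev` eigenvalues."""
    x = lgl_nodes(N)
    D = lgl_diff_matrix(x)
    alpha = 2.0 * L / rmax                            # fixes r(x = +1) = rmax
    r = L * (1.0 + x) / (1.0 - x + alpha)             # nonlinear map [-1, 1] -> [0, rmax]
    rp = L * (2.0 + alpha) / (1.0 - x + alpha) ** 2   # r'(x)
    Dr = np.diag(1.0 / rp) @ D                        # d/dr = (1/r'(x)) d/dx
    T = -0.5 * Dr @ Dr                                # kinetic operator -(1/2) d^2/dr^2
    ri = r[1:N]                                       # interior points (Dirichlet BC)
    veff = V(ri) + l * (l + 1) / (2.0 * ri ** 2)
    H = T[1:N, 1:N] + np.diag(veff)
    # The published GPS procedure symmetrizes H before diagonalization; this
    # sketch skips that step and simply calls a general eigensolver.
    return np.sort(eigvals(H).real)[:nev]


if __name__ == "__main__":
    # Consistency check with the Coulomb potential: exact l = 0 levels are -1/(2 n^2).
    print(gps_eigenvalues(lambda r: -1.0 / r, l=0, N=128)[:3])
```

Running the final block with the Coulomb potential should recover energies close to the exact −1/(2n²) values, which is precisely the kind of consistency check described above.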
IV. RESULTS AND DISCUSSION
This section presents a discussion on the results obtained by using GPS. As will be evident soon, this has been very successfully applied to a large variety of physical/chemical problems including static and dynamic situations. Here we select a cross-section of the most compelling and testing cases, while the rest of these could be found by inquisitive reader in the references [39][40][41][42][43][44][45][46][47][48][49][50][51][52]. However, before we move on to the various central potentials representing different physical systems, first we give a brief snapshot of this approach for one particular potential. For this illustrative purpose, we have chosen the 3D quartic oscillator as a prototypical case. Quartic oscillators have found many notable applications. For example, two-and three-dimensional anharmonic oscillators have drawn much interest in far infra-red and microwave regions. Four-membered ring molecules (such as trimethylene oxide, cyclobutanone etc.) are known to have out-of-plane vibrational modes which are predominantly quartic, five-membered ring compounds also have ring puckering modes with a significant quartic contribution to the potential. Therefore the pure quartic oscillator, the mixed quartic-harmonic oscillators and in general, the anharmonic oscillators have been subject to intense study in both quantum field theory and chemical physics for a long period of time and the interest still continues to grow [58][59][60][61][62][63][64]. General analytical solutions of the quartic oscillator are unknown; although some special cases [63] are reported for only certain states of a 1D quartic oscillator where solutions could be found analytically (e.g., the modified quartic oscillator where the potential depends on |x|). Thus there has been considerable interest to study the bound-state spectra of these physically as well as chemically important systems using a wide variety of mathematical methods, such as WKB method, phase-integral approach etc.
Each energy level of the 3D isotropic harmonic oscillator is $\frac{1}{2}(\nu + 1)(\nu + 2)$-fold degenerate. This degeneracy is associated with the two angular momentum quantum numbers l, m. Each l-th level is (2l + 1)-fold degenerate, corresponding to the (2l + 1) linearly independent states with m = l, l − 1, · · · , 0, · · · , −l. Upon introduction of a radial perturbation to the harmonic oscillator (such as a pure 3D quartic term), the degeneracy present in the harmonic oscillator is partly removed. For any given ν, levels with different values of l are split, but the m-type degeneracy of each level remains. In other words, l remains a good quantum number, each l-th level being (2l + 1)-fold degenerate. Only an angular perturbation removes the degeneracy of these m levels. Table I compares some specimen results for odd- and even-parity high-lying excited states of the 3D pure quartic oscillator. The vibrational quantum numbers correspond to ν = 48, 49, and the angular momentum quantum number is varied over l = 0−9. Note that these two quantum numbers must match in parity. The present results are quoted from [46]. It is worth mentioning that while numerous works have been presented for low-lying states (especially the ground states), similar successful attempts at an accurate treatment of high-lying states such as those concerned here have been dramatically fewer, because of the difficulties faced. Our results are compared with some of the carefully selected best literature values, viz., (a) linear variational calculations involving diagonalization of large-order matrices (800 × 800) [58], (b) approximate analytical formulas derived from a scaled-oscillator approach [59], (c) a finite-difference numerical result [65], and (d) the asymptotic shooting method [66]. Clearly, amongst these literature values, [66] appears to be the most accurate, where solutions are expressed in terms of finite polynomials, requiring a straightforward determination (integration) of the zeros of such polynomials. As evident, our present results for all these states match exactly up to the 9th decimal place with those of [66], which manifestly demonstrates the power and efficacy of this method for higher excitations. Similar kinds of tests have been made for other cases such as the Morse oscillator, an anharmonic oscillator with a quartic perturbation [43], the charged harmonic oscillator [42], the Hulthén potential [45], the harmonic potential including inverse quartic and sextic perturbations, as well as a Coulomb potential with linear and quadratic couplings [46], etc., which offer conditionally exact solutions for certain states. In all such cases, near-exact solutions were obtained from the GPS method. Now we proceed to the discussion of individual potentials.
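As a small aside on the degeneracy counting quoted at the beginning of this paragraph (a toy check added here, not part of the original analysis), the following Python lines verify that summing the (2l+1)-fold m-degeneracies over the allowed l values of a given oscillator level ν (l = ν, ν−2, . . .) reproduces the total degeneracy ½(ν+1)(ν+2); a purely radial perturbation such as the quartic term splits the different l values but leaves each (2l+1)-fold block intact.

```python
def oscillator_degeneracy(nu):
    """Degeneracy of the 3D isotropic harmonic oscillator level nu = 2*n_r + l,
    counted by summing the (2l+1)-fold m-degeneracy over the allowed l values."""
    return sum(2 * l + 1 for l in range(nu % 2, nu + 1, 2))


# The l-sum reproduces the closed-form count (nu+1)(nu+2)/2 for every level.
assert all(oscillator_degeneracy(nu) == (nu + 1) * (nu + 2) // 2 for nu in range(60))
```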
A. Spiked Harmonic Oscillator
A class of singular potentials defined by the Hamiltonian
$$H = H_0 + \lambda\, r^{-\alpha}, \qquad H_0 = p^2 + r^2,$$
where p = −i ∂/∂r, has been termed the spiked harmonic oscillator (SHO). H_0 stands for the simple harmonic oscillator Hamiltonian, whereas the coupling parameter λ and the positive constant α determine, respectively, the strength of the perturbative potential and the type of singularity at the origin. Ever since the fascinating work of [67] on its ground-state energies in the mid 70s, an enormous number of works have been published on its study [42,52,53,65], among many others. This is not only due to its widespread applications in atomic, molecular, nuclear and particle physics, but also because of a multitude of inherently interesting properties from the mathematical physics point of view. It gives rise to an interesting situation recognized long ago, i.e., neither of the two terms in the interaction potential dominates for extreme values of λ. In other words, one never deals with small perturbations [75,79].
For any λ > 0, λr^{−α} adds an infinite repulsive barrier near the origin. On the other hand, in the limit λ → ∞, one cannot ignore the r² term. In fact, the harmonic term can never be neglected, for it is needed for the existence of definite ground states [2,110]. Thus the potential resembles a wide valley extending to ∞ [76]. Another distinctive feature of this potential is that it exhibits the so-called Klauder phenomenon, viz., once the perturbation is turned on, complete turn-off is impossible; permanent, irreversible vestigial effects of the interaction persist [68][69][70]72]. In particular, a sufficiently singular potential cannot be smoothly turned off (λ → 0) in the Hamiltonian (H = H_0 + λV) so as to restore the free Hamiltonian H_0. It has been shown [67] that for an SHO the familiar Rayleigh-Schrödinger perturbation series diverges at order n whenever n ≥ 1/(α − 2). Accordingly, the first-order perturbation correction diverges for α ≥ 3, the second order for α ≥ 5/2, etc. This potential also exhibits the super-singularity phenomenon [71] in the region α ≥ 5/2, i.e., every matrix element of the potential becomes infinite. These potentials have also been discussed in the interesting context of degeneracy in one-dimensional systems [92].
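If one adopts the convention H = H₀ + λr^{−α} = −d²/dr² + r² + λr^{−α} written above, the collocation sketch given at the end of Section III can be reused with a trivial rescaling. The lines below are a hypothetical usage example (they assume the gps_eigenvalues function defined in that sketch; the values of λ, α and the grid parameters are arbitrary illustrations, not taken from the text):

```python
# SHO levels in the convention H = -d^2/dr^2 + r^2 + lam/r^alpha:
# write H = 2 [ -(1/2) d^2/dr^2 + (r^2 + lam/r^alpha)/2 ] and double the result.
# (gps_eigenvalues is the mapped-collocation sketch from Sec. III.)
lam, alpha = 1.0, 2.5
sho_levels = 2.0 * gps_eigenvalues(lambda r: 0.5 * (r**2 + lam / r**alpha),
                                   l=0, N=150, L=10.0, rmax=50.0, nev=3)
```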
In another work [107], it was found that the perturbation theory in λ has an ultraviolet divergence for a range of α, which causes the perturbation series to be ordered in powers of λ^Z, Z being a fraction less than unity.
At this stage it is convenient for our purpose to discuss the simpler special case α = 1, the so-called charged harmonic oscillator, before we proceed to the general, stronger spikes (α > 1). This is a non-supersingular spiked oscillator characterized by a perturbative term of the Coulomb form λ/r, and it has been studied in considerable detail [80]. It is possible to identify three distinct regions depending on the value of the effective coupling constant [42,80]; approximate energy expressions in these regions have been obtained by means of the hypervirial relation and the Hellmann-Feynman theorem, one form applying in the Coulomb case and another in the strong-coupling case (with µ = (2/λ)^{1/3}). It is also known that there is an indirect, somewhat involved path connecting the strong-coupling regime (λ → ∞) with the small-coupling regime, as well as connecting the Coulomb regime (λ → −∞) with the small-coupling regime. However, as yet no direct link has been found to connect the +∞ and −∞ regions. Another very important characteristic of the charged oscillator is that it offers an infinite set of elementary solutions [80] for certain selected values of the positive coupling constant only. These solutions are typically of the form of a polynomial multiplied by a Gaussian and are possible for both ground as well as excited states. Some of these, taken from [42], are displayed in Table II, where the "Exact" results are quoted from [80]. Note that the "Exact" results have been divided by a factor of 2 for consistency with the literature. We see excellent agreement of the current method with the "Exact" results for all values of λ. Ground states for general λ (both positive and negative) have been reported in [42,80], whereas the first three excited states corresponding to l = 0, 1, 2, 3 have been studied in [42]. This concludes our discussion of the charged oscillator; more detail can be found in [42,80].
Let us now return to the SHO. From a variational analysis [71], the eigenvalues of the SHO for 2 ≤ α ≤ 3 were given by asymptotic series to first order for positive λ, while for α ≥ 3, and for the special case α = 3, the ground-state eigenvalues were given by separate asymptotic expressions involving constants k, k′ that are to be determined variationally. Later, a modified Rayleigh-Schrödinger series was put forth [67] by exploiting the standard WKB approximation for the lowest few orders.
This success encouraged other workers to further develop a special perturbation theory, known as singular perturbation theory, to obtain the first few terms of the perturbed λ-expansion for different values of α. To this end, asymptotic series for the ground-state eigenvalues of the SHO Hamiltonian were written explicitly in terms of ν = 1/(α − 2) and Euler's constant c = 0.5772156649. It is worth noting that the case α = 5/2 bears a close analogy with the low-density expansion of the energy of a many-body boson system at zero temperature [111]. In another development [75], the ground-state energy was obtained by making use of a functional space spanned by the solutions of the Schrödinger equation for a linear harmonic oscillator within a variational framework, followed by a standard diagonalization of symmetric matrices.
A strong-coupling expansion (α ≥ 2) for positive λ was derived for the approximate ground-state energy. For example, for α = 5/2, the leading term behaves as λ^{4/9}; for the same α, a fourth-order large-coupling perturbation expression was also given. The weak-coupling expression using perturbation theory up to second order, as well as strong-coupling expansions up to 10th order (in algebraic form), were obtained for the nonsingular SHO (α < 5/2) through a re-summation technique [76]. In that work, an attempt was also made to find a path connecting the ground-state energy in the weak-coupling regime with that in the strong-coupling regime. Similar weak-coupling expansions for ground-state energies were given for the special case α = 2 in [84]. The ground-state energy of the particular case α = 4 was studied by using a non-orthonormal basis set satisfying the correct boundary conditions [87]. Through a logarithmic perturbation theory [108], closed-form expressions for the energy and wave-function correction terms were obtained for ground states.
In another analytical method, namely, pseudo-perturbative shifted-l expansion technique (PSLET) [96], it was possible to obtain both eigenvalues and eigenfunctions in one batch.
This uses $1/\bar{l}$ as a perturbation expansion parameter, where $\bar{l} = l - \beta$ (l is a quantum number and β a suitable shift introduced to avoid the trivial case l = 0). This has been used to study bound states of the D-dimensional spiked harmonic oscillator spectra [97] as well, taking into account the effect of inter-dimensional degeneracies arising out of the isomorphism between angular momentum and the dimensionality of the central-force Schrödinger equation.
Other approximate variational methods have also been attempted [78]. These states have been studied by a modified WKB approximation [93]. Upper and lower bounds of ground as well as excited states have been developed [94] by means of envelope theory through some convenient smooth transformation. A rigorous variational method suitable for the complete set of discrete energy eigenvalues has been established [103], which also remains valid for arbitrary angular momentum quantum number, in general N-dimensional case. This was done by expressing the SHO Hamiltonian as a perturbation of the singular Gol'dman and Krivchenkov Hamiltonian H 0 . Further it was proved that zeroth order eigenfunctions generated by H 0 form a suitable singularity-adapted basis for the appropriate Hilbert space of the full problem. Ground-state bounds have been investigated by means of potential envelopes [89]. An eigenvalue moment method has been proposed for accurate estimation of ground states of singular potentials [91].
The SHO problem has been solved numerically through finite-difference schemes [53,65] and a Lanczos grid method [83]. In the latter, the Hamiltonian was discretized on a grid with a 10th-order finite-difference formula for the kinetic energy and a starting function r⁸e^{−r²}. Applications were made to s states for several values of α, λ. Accurate numerical solutions were found [90] by modifying the usual analytic continuation method of [66,112], which was initially applicable to potentials whose solutions do not have essential singularities. This was made possible by introducing a family of non-singular potentials which depend on a parameter R_c and which approach the exact potential as R_c tends to zero. A non-perturbative but completely convergent algorithm [88], formally identical to the Lanczos method, was proposed for ground states. An accurate algorithm was presented in which a discretized symmetric expression was derived for the transformed Schrödinger equation [79], which could then be solved efficiently by standard routines for tridiagonal matrices. This involved a coordinate transformation either of the form r = Kx/(1 − x) (parametrized Euler transformation) or of a related form [53], where the adjustable parameter K needs to be chosen judiciously.
In spite of all the above attempts, a general method which can offer accurate, reliable results for both these potential parameters and for arbitrary states (covering the ground as well as various excited states, especially the higher ones) has been very rare. Thus, for example, physically meaningful and highly accurate results are obtainable by only a few of the methods discussed above. Moreover, some of these methods provide high-quality results for certain types of parameters, while performing rather poorly for other sets. An enormous amount of work has been reported for ground states; excited states have received much less attention, presumably due to the inherent difficulties encountered with them.
Excepting a very few rare studies (for example, [42,100]), these works have almost exclusively dealt with eigenvalue determination; the nature of the eigenfunctions has been explored only on rare occasions. Last, but not the least, some of these methods are often fraught with rather tedious, cumbersome mathematical complexities. The GPS method provides a simple, general and easily affordable scheme in which all the above-mentioned discomfitures are either completely removed or partly alleviated by invoking an optimal, non-uniform, effective spatial grid. The general applicability of the approach is amply demonstrated in Tables III and IV. We next turn to the harmonic potential with inverse quartic and sextic perturbation terms (anharmonicity constants b and c), which is a non-trivial generalization of the SHO Hamiltonian and has also been studied in considerable detail in the literature. For example, in [79], a large-order strong-coupling expansion, in terms of some ad hoc expansion parameter, was proposed for large anharmonicity constants, corresponding to either of b, c being large or both of them being large. For small anharmonicity constants, the lowest-order corrections to the perturbed energies were also obtained.
A very important feature of this central singular potential is that it provides conditionally exact solutions for certain values of the potential parameters. From the very early stages of the development of quantum mechanics, there has been continual interest in solving the Schrödinger equation exactly. However, as we know, exact solutions can be found for very few potentials. For systems with one degree of freedom, supersymmetric quantum mechanics together with shape invariance has been found to be one of the most successful techniques for understanding exact solvability. Of late, due attention has been paid to different classes of potentials which are quasi-exactly solvable (QES) and conditionally exactly solvable (CES). In the former case, only a finite number of eigenstates can be found exactly. A more interesting case is offered by the CES potentials, which are intermediate between the exactly solvable and QES potentials: exact eigenvalues are obtainable only when the potential parameters satisfy certain conditions. The wave-function ansatz technique is one very common procedure used for this purpose; it is purely mathematical and usually fails to resolve the physical reason for the conditional solvability. Moreover, this technique is not straightforward for arbitrary states. Recently, a super-potential ansatz technique has also been proposed. Bound states in this potential for c = 0 [73] and c ≠ 0 [74] were investigated nearly two decades ago.
A simplified ansatz for the eigenfunctions yields the exact ground-state energy in closed form [77]; however, for this to hold, the potential parameters must satisfy a certain constraint relation. Later, a similar approach was employed for the first excited states [81]. The harmonic potential with an inverse quartic and sextic anharmonicity was solved numerically [85] for some of the lowest states using B-spline basis sets. Continuing along the same line, four sets of solutions were obtained, including one constraint equation for each set [86]; furthermore, it was found that the analytical expression for the energy agrees with the numerical result for any one among the ground, first and second excited states, depending on the particular constraint condition. Conditionally exact solutions were studied in [101,102] in the light of supersymmetric quantum mechanics and shape invariance. A Hill determinant approach was also attempted [82].
In parallel to the above works, recently there has been considerable interest in studying the so-called generalized spiked harmonic oscillator (GSHO), defined by
$$H = -\frac{d^2}{dr^2} + r^2 + \frac{A}{r^2} + \frac{\lambda}{r^{\alpha}},$$
where λ, α are two real parameters; obviously the SHO is the special case of the GSHO with A = 0. Both variational and perturbation methods have been employed for its study [95,98-100]; closed-form expressions have been derived [95,100] for the required singular-potential integrals (or matrix elements) of the form ⟨m|x^{−α}|n⟩. Variational bounds [98] have been examined in the light of the above as well. A first-order perturbation series was developed [100] for the wave functions in terms of generalized hypergeometric functions. The modified perturbation theory of [67] was extended to obtain perturbation expansions for the GSHO problem [105]. These expressions remain valid for small values of the coupling λ > 0 and corroborate the results obtained for the SHO [67]. In [106], using a perturbation expansion up to 3rd order, lower and upper bounds of the GSHO ground states were estimated for small coupling parameter λ. Weak-coupling perturbation expansions for ground states were derived in [99].
However, to the best of my knowledge, no direct results were provided in the above investigations. By means of the GPS method, very accurate ground as well as excited states, expectation values and radial densities have been reported only lately [52]. A wide range of interactions was considered (both weak and strong coupling) for arbitrary values of n, l.
Energy variations with respect to the two parameters λ, A are displayed in the top and bottom panels of Fig. 1. The first three eigenstates of l = 0 are studied for α = 4, 6 respectively, with λ varied from 0 to 35. As seen clearly, the energy varies rather slowly with respect to λ (a monotonic increase with increasing λ) for both α = 4 and 6 (the smaller α produces the larger effect).
In the neighborhood of zero λ, for all values of A, both α give very similar energies.
B. Screened Coulomb Potentials
The screened Coulomb potentials, in which the bare Coulomb interaction −Z/r is multiplied by a screening function that decays with distance, have found significant importance in many areas of physics and chemistry. They can be used to approximate the potential experienced by an electron in an atom where the other electrons screen the nuclear charge. In the form of the Debye-Hückel potential, they describe the shielding effect in plasmas. An enormous amount of work has been done toward understanding the many fascinating features these potentials exhibit. In the context of atomic systems, Z is identified with the atomic number, while the screening constant bears a different significance in different branches. In what follows, we are concerned with two simple representatives of the screened Coulomb potential. The first one, the Hulthén potential [113,114], given as
$$V(r) = -\frac{Z\,\delta\, e^{-\delta r}}{1 - e^{-\delta r}},$$
is one of the most important short-range potentials. It has relevance in nuclear and particle physics [115,116], atomic physics [117,118], solid-state physics [119,120], chemical physics [121], etc. It is also a special case of the Eckart potential. The other one, the Yukawa potential [122], given as
$$V(r) = -\frac{Z\, e^{-\lambda r}}{r},$$
has also found numerous applications in various branches of physics, chemistry, etc.
They show many similarities; e.g., for small r they display Coulomb-like behavior, whereas they decay monotonically (exponentially) to zero at large r. Use of a scaling transformation r → r/Z leads to the well-known relation
$$E_{nl}(Z, \gamma) = Z^2\, E_{nl}(1, \gamma/Z),$$
together with a corresponding scaling of the eigenfunctions. Here γ = δ or λ. Therefore it suffices to consider only the case Z = 1, and to develop the energies and eigenfunctions as functions of the screening parameter. A distinctive feature (in contrast to the Coulomb case) is that the number of bound states is limited because of the presence of the screening parameter. In other words, bound states exist only for values of the screening parameter below a threshold limit. For the Yukawa potential, this value has been estimated quite accurately as 1.19061227±0.00000004 a.u. [123]. The former (Hulthén) potential also has the additional property that it offers exact analytical solutions for l = 0, but not for higher partial waves.
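As a worked illustration of this scaling (added here for clarity and written out for the Yukawa form; the Hulthén case goes through in exactly the same way), substituting r = s/Z gives
$$-\frac{1}{2}\nabla_r^2 - \frac{Z\, e^{-\lambda r}}{r} \;\;\longrightarrow\;\; Z^2\left[-\frac{1}{2}\nabla_s^2 - \frac{e^{-(\lambda/Z)\, s}}{s}\right],$$
so that every bound-state energy indeed obeys E_{nl}(Z, λ) = Z² E_{nl}(1, λ/Z).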
A great many attempts have been made to calculate the bound-state energies and eigenfunctions of these potentials, as well as their scattering properties. The l = 0 states of the Hulthén potential, correct to any order of δ, have been obtained by using an extended version of analytic perturbation theory [126]. Using a non-perturbative approach, an extension of the Ecker-Weizel approximation was used to obtain analytic closed-form solutions for the eigenvalues and eigenfunctions for arbitrary angular momenta [118]. The latter authors used non-rigorous but intuitive physical arguments to determine the unknown constants involved in this approximation.
To this end, a closed-form expression for the energy was obtained. A strong-coupling series was suggested for the bound-state energies of both these potentials [127] within the WKB approximation, with special emphasis on the behavior of the energies in the neighborhood of the critical region. Computation of higher-order perturbation theory and summation methods for divergent perturbation series were also addressed [128]. Later, a path-integral formalism [129] was put forth for s states, where the exact energy spectrum and normalized s-state eigenfunctions were obtained from the poles of the Green function and their residues. A shifted 1/N expansion technique [130,131] was also used for these potentials.
Further, an algebraic perturbation method based on the Lie algebra of the group SO(2,1), which is known to be the dynamical group for a number of spherically symmetric potentials, was developed for s states by [132] and for arbitrary states by [133]. A one-parameter variational calculation has also been reported for energies and oscillator strengths [124]. These were found to be quite satisfactory in the low-screening region, but proved inadequate for higher δ. The concept of kinetic potentials was used to construct a global geometrical approximation theory for the energy spectra of these potentials [134]. A very accurate generalized variational method [125] was developed for both these potentials. This utilized trial functions built from a linear combination of independent functions. For the Hulthén potential, the basis functions are indexed by k = −1, 0, 1, 2, · · · for s states and k = 0, 1, 2, · · · for l ≠ 0 states; β is a variational parameter determined by minimizing the energy for a given state and basis size, and the constant $A_k = (2\beta + \delta)^{k+1}/(2k + 2)!$ is included to prevent numerical overflow. For the Yukawa potential, the basis takes the form $\psi_k = B_k\, r^k e^{-\beta r/2}$, k = 0, 1, 2, · · ·, where B_k, β are a normalization constant and a variational parameter, respectively. An improved variational scheme was proposed [135], wherein a set of variational parameters was introduced into the trial wave function to form a family of independent functions. This potential has also been dealt with using supersymmetric quantum mechanics and first-order perturbation theory [136]. Based on the small- and large-r behavior, threshold and asymptotic properties of the energies and eigenfunctions of these potentials were examined critically in [137].
First-order perturbative calculations using the Hulthén potential as the unperturbed potential were employed for s states of the Yukawa potential [117]. Later, motivated by this, non-zero-l states of the Yukawa potential were obtained with reasonable accuracy [138] using solutions of a Hulthén-like effective potential as variational trial functions. It was soon realized that these trial functions provide better variational energies and wave functions, with fewer parameters, than the frequently used hydrogenic or Slater-type functions. Detailed variational results were presented for the lowest 45 eigenstates [139]; probabilities for spontaneous emission in the dipole approximation were also studied as a function of the screening length, for transitions between the six lowest states. A combined Padé approximant ([6,6], [6,7]) to the perturbation series for the Yukawa potential was presented [140]. A shifted 1/N expansion [141][142][143] as well as an improved version of it were put forth [144]. These offered better convergence than the usual 1/N expansion. Through a combination of perturbation theory and continued fractions-Padé approximations at large order [145], very accurate bound states were obtained. A linear combination of atomic orbitals (LCAO) calculation was performed for ground states as a function of λ within a variational framework (using Slater basis functions) [123].
Accurate numerical methods have also been developed for these potentials. All 45 eigenstates from 1s through n = 9, l = 8 were numerically investigated over a wide range of λ for the Yukawa potential nearly four decades ago with reasonably good accuracy [146]. Another approach [147] consists in solving the Dirichlet problem in a box of radius n by a Ritz method; convergence to the eigenfunctions in the norm of the Hilbert space L²(0, n) was proved to be guaranteed.
It may be mentioned that although there is a decent number of high-quality results available in the literature for these interactions in the weak-coupling region and for the lower states, there is a scarcity of such results for stronger coupling and for higher states. In a GPS study [45], these potentials were treated very accurately for arbitrary screening strengths with special emphasis on these issues. Sample Hulthén-potential results from the GPS method are collected in Table V. Numbers in parentheses denote the respective critical screening constants. Literature values are quoted for comparison wherever possible. As discussed in [45], either for higher states or for high screening constants a larger R is needed, whereas the eigenvalues are apparently less sensitive to the number of grid points N. In Table V, the s states are presented in the strong-δ region. Exact analytical results (denoted by asterisks), available for the l = 0 states of the Hulthén potential, are given by
$$E_n = -\frac{(2 - n^2\delta)^2}{8 n^2},$$
with n² < 2/δ. Our calculated values for s states completely coincide with these exact analytical results for all states (up to n = 17), covering the whole range of interaction. The critical screening constant δ_c for an l = 0 state is given exactly by the simple relation δ_c = 2/n², whereas for l ≠ 0 it is approximated by an analytical expression [127], which offers good agreement with numerically determined values [124]. Excellent agreement with literature results has been observed for all the states. As seen, a uniform accuracy is achieved for all states encompassing a large range of interaction, unlike some previous calculations which encountered difficulties in the strong-coupling region. For weaker coupling, the present results are superior to all other results except the variational work of [125]. However, in the stronger region, the GPS results are superior. We have enlarged the coupling region relative to all previous works, and the current results are so far the most accurate ones in the neighborhood of the critical δ. Additionally, Figure 2 depicts the variation of the energies with respect to δ for all states belonging to n = 7, 8 (a) and n = 9, 10 (b) in the neighborhood of zero energy. For small n, the energy orderings closely resemble those of the Coulomb potential; however, this scenario changes with an increase in n. In that case, significant deviations are noticed, and complex level crossings are observed in the vicinity of zero energy, which makes their accurate determination quite difficult. This is more dramatic for the latter case (e.g., 9k, 9l mixing heavily with 10s, 10p, 10d, 10f at around δ = 0.015−0.017). Besides, for a given n, the separation between states with different l increases with δ. Further discussion of the results, including radial densities, expectation values, etc., can be found in [45].
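To illustrate how such entries can be checked, the following hypothetical snippet (it assumes the gps_eigenvalues collocation sketch from Section III; the value of δ and the grid parameters are arbitrary) compares numerically obtained s-state energies of the Hulthén potential (Z = 1) with the exact formula quoted above:

```python
import numpy as np

delta = 0.05                       # illustrative screening parameter, Z = 1
hulthen = lambda r: -delta * np.exp(-delta * r) / (1.0 - np.exp(-delta * r))

# numerical s-state energies from the Sec. III sketch (assumes gps_eigenvalues)
numeric = gps_eigenvalues(hulthen, l=0, N=200, L=25.0, rmax=200.0, nev=4)

# exact l = 0 Hulthen energies, valid as long as n^2 < 2/delta
exact = [-(2.0 - n * n * delta) ** 2 / (8.0 * n * n) for n in range(1, 5)]
```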
Let us now turn our focus to the Yukawa potential. Results similar to those for the Hulthén potential are presented in Table VI and Fig. 3. No analytical expressions are available for the critical screening constants in this case; numerically determined λ_c values, available from [146], are quoted wherever possible. As in the previous case, here also very good agreement is observed for all states with the best theoretical results. Once again, a large range of interaction (both weak and strong) as well as very high states are considered. As for the Hulthén potential, the results of [125] are more accurate than ours in the small-λ region, but for stronger couplings the present results are superior. Some of the higher states have not been calculated by any method other than those of [137,146]; our results significantly improve upon those. Considering all this, the present results appear to be the most accurate and reliable ones for all the states (except 1s, 2s) in regions close to λ_c. As n, l increase, accurate calculation of these states becomes progressively more difficult; for the 7s−9l states only two attempts [141,146] are known other than the present one. In Fig. 3, the dependence of the energy orderings on λ is shown vividly in the neighborhood of E = 0 for n = 7, 8 (a) and n = 9, 10 (b). Very similar conclusions as in Fig. 2 can be drawn: (i) as n increases, the energy ordering tends to become more complex, with the possibility of level crossing becoming higher; (ii) the energy splitting between states with different l increases with an increase in λ for a particular n. No attempts are known for any of these states with n > 9 other than the present one, and these may constitute a useful reference for future studies. As an illustration, some very high-lying states (17s) are also included in this table.
Finally, we note that the eigenvalues of the Coulomb, Hulthén and Yukawa potentials are known to satisfy the ordering
$$E^{\mathrm{C}}_{nl} \le E^{\mathrm{H}}_{nl} \le E^{\mathrm{Y}}_{nl}$$
(C = Coulomb, H = Hulthén, Y = Yukawa, for the same Z and the same screening parameter). This has been found to hold for all the states considered here. For many other interesting features of these potentials, see [45].
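A quick numerical illustration of this ordering (again leaning on the hypothetical gps_eigenvalues sketch of Section III, with an arbitrary common screening parameter) is:

```python
import numpy as np

g = 0.1                            # common screening parameter (illustrative)
E_coul = gps_eigenvalues(lambda r: -1.0 / r, nev=1)[0]
E_hulth = gps_eigenvalues(lambda r: -g * np.exp(-g * r) / (1.0 - np.exp(-g * r)), nev=1)[0]
E_yuk = gps_eigenvalues(lambda r: -np.exp(-g * r) / r, nev=1)[0]
# expected 1s ordering: E_coul <= E_hulth <= E_yuk
```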
C. Power-law and Logarithmic potentials
The power-law and logarithmic potentials have found widespread applications in particle physics [148][149][150][151]. The orbital structure of the logarithmic potential has been studied in the context of self-consistent modeling of triaxial systems (such as elliptical galaxies), bars in the centers of galaxy discs [152], global dynamics [153], etc. The Coulomb-plus-power-law potential also serves as a non-relativistic model for the principal part of the quark-quark interaction [154]. The potential is given by V(r) = A sgn(ν) r^ν, where r = ||r||, A > 0 and ν ≠ 0. For dimension N = 1, ν > −1, and for higher dimensions N ≥ 2, ν > −2. For ν = 0, we take V(r) = A ln(r), with A > 0. It is possible to include the logarithmic potential as a limiting case of the power potentials if, in place of the potential family f(r) = sgn(ν) r^ν, we use V(r, ν) = (r^ν − 1)/ν, whose limit as ν → 0 is V(r, 0) = ln(r). Eigenvalues of the power-law potential, E^N_{nl}, can be labeled by two quantum numbers: the total angular momentum l = 0, 1, 2, · · ·, and a 'radial' quantum number n = 1, 2, 3, · · ·, which represents 1 plus the number of nodes in the radial part of the wave function. The eigenvalues in N ≥ 2 spatial dimensions have degeneracy 1 for l = 0, while for l > 0 the degeneracy is given by
$$\frac{(2l + N - 2)\,(l + N - 3)!}{l!\,(N - 2)!}.$$
The well-known hydrogenic atom and harmonic oscillator constitute the two exactly solvable cases in N dimensions, corresponding to ν = −1 and 2, respectively. Analytical solutions for the linear potential in 1D, as well as for its s states in 3D, can also be found from the zeros of the Airy function [155].
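Since the s-state solutions of the linear potential follow from Airy zeros, a small check along the lines of the Section III sketch can be written down (again a hypothetical illustration assuming the gps_eigenvalues function; the grid parameters are arbitrary):

```python
from scipy.special import ai_zeros

# s states of V(r) = r in -(1/2) u'' + V u = E u: exact E_n = -a_n / 2**(1/3),
# where a_n are the (negative) zeros of the Airy function Ai.
exact = -ai_zeros(3)[0] / 2.0 ** (1.0 / 3.0)
numeric = gps_eigenvalues(lambda r: r, l=0, N=150, L=10.0, rmax=60.0, nev=3)
```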
Numerous attempts [155-165] have been made over the years to examine many interesting properties of their solutions, utilizing an array of methodologies. For example, ground and excited energy levels of the generalized 1D anharmonic oscillator characterized by the potential V(x) = x² + λx^{2m}, m = 2, 3, were calculated non-perturbatively [156] by a Hill-determinant method. A WKB approximation [155] was proposed. It was proved [157] that the eigenvalues E_n = E^1_{n0} of power-law potentials in 1D increase with n at a higher rate for a greater ν; however, for any ν, this increase never attains n². In general, the dependence of E^N_{nl} on the coupling parameter A may be established by elementary scaling arguments, replacing r by σr. One then finds that the eigenvalues scale as a simple power of A, so that, without any loss of generality, one can limit further discussion to the case of unit coupling, A = 1. Lower and upper analytic bounds for ground states were developed for power-law potentials [158]. The shifted 1/N expansion technique, with some of its variants (such as a modified or large-order expansion) [159-161], has been quite successful. The dependence of the eigenvalues on the power parameter ν has been studied by spectral geometrical arguments [162]. Bounds on eigenvalues for polynomial potentials in N dimensions have also been studied semi-classically [163,164]. A variational method [165] was developed for eigenvalues and eigenfunctions; it provided good results for small ν, but suffered in the higher-ν regions.
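The scaling just mentioned is usually written, in suitably reduced units, as E^N_{nl}(A) = A^{2/(ν+2)} E^N_{nl}(1); this is the standard power-law scaling result and not a formula transcribed from the references above. The short check below verifies it against the two exactly solvable cases, assuming units in which the radial equation reads −u'' + V u = E u (i.e., ħ = 2m = 1).

```python
import numpy as np

def E_exact(nu, A, n_r=0, l=0, n=1):
    """Exact levels of V = A*sgn(nu)*r**nu for the two solvable cases (hbar = 2m = 1)."""
    if nu == 2:                       # 3D isotropic oscillator, V = +A r^2
        return 2.0 * np.sqrt(A) * (2 * n_r + l + 1.5)
    if nu == -1:                      # attractive Coulomb, V = -A/r (A > 0)
        return -A**2 / (4.0 * n**2)
    raise ValueError("only nu = 2 and nu = -1 are exactly solvable here")

for nu, kw in [(2, dict(n_r=1, l=2)), (-1, dict(n=3))]:
    for A in (0.5, 2.0, 7.3):
        lhs = E_exact(nu, A, **kw)
        rhs = A**(2.0 / (nu + 2.0)) * E_exact(nu, 1.0, **kw)
        print(f"nu = {nu:+d}, A = {A}:  E(A) = {lhs:.6f}   A^(2/(nu+2)) E(1) = {rhs:.6f}")
```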
The GPS method [43] has produced very accurate results for both these potentials for states with arbitrary quantum numbers n, l. A detailed comparison with literature results has been made in [43], from which it is abundantly clear that the present scheme offers results that are considerably better than the existing ones. Moreover, as on previous occasions, here also we obtain both low and higher states with equal ease and accuracy.
As an illustration, Table VII compares representative eigenvalues with the accurate values of [161]. For a more complete discussion, see [43].
D. Non-polynomial oscillator

The physical relevance of the non-polynomial oscillator (NPO), a harmonic oscillator perturbed by a rational term of the form λx²/(1 + gx²) with g > 0, was pointed out long ago. The Schrödinger equation with such an interaction Lagrangian is analogous to a zero-dimensional field theory with a nonlinear Lagrangian in elementary particle physics [156,168]. Also, the 3D analogue was found to produce a sequence of energy levels identical to that occurring in the shell model of the nucleus [169]. It may be noted that for either of the following situations, λ = 0, or λ = g = 0, or g << λ, or large g, the solutions behave as those of the harmonic oscillator.
The potential in 1D has generated considerable interest among theoreticians, as evidenced by numerous works employing a wide range of methodologies such as variational methods, perturbation theory, and semi-numerical as well as purely numerical methods. It is possible to obtain exact eigenvalues and eigenfunctions of the ground and higher states provided the potential parameters obey certain specific relations. In [170], the existence of a class of solutions (in terms of terminating polynomials or Sturmians of the Schrödinger equation with potential x² − λ/{g(1 + gx²)}, −∞ < x < ∞) was established when certain algebraic relations between g and λ are satisfied. In another attempt [171], eigenfunctions were expressed as definite integrals, whereas eigenvalues were obtained by means of a limiting procedure. Exact even- and odd-parity solutions in the form of products of exponentials and polynomials of x have been investigated [172]. For small λ′ (= λ/g), the eigenvalues were given by a perturbative expression about the harmonic-oscillator levels, with n = 0, 1, 2, · · ·, and H_n(x) a Hermite polynomial of order n. Similarly, closed-form expressions for the first four eigenvalues were given in the large-g limit. It is possible to supersymmetrize the non-polynomial interaction, and in that case one may find as many exact analytical solutions (corresponding to ground states of different supersymmetric quantum mechanical systems) as one wishes [173]. The existence of conditionally exact solutions for the 1D NPO has been studied by other authors as well [174-176]. The possibility of an infinite set of exact solutions of both odd and even parity, which could be expressed in terms of a product of exponential and polynomial functions of x² for specific relations between λ and g, has been explored in [177-179] as well.
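Low-lying 1D NPO levels of the kind discussed above can be generated quickly by brute-force diagonalization. The sketch below is illustrative only: it assumes the common convention H = −d²/dx² + x² + λx²/(1 + gx²), so that the λ = 0 spectrum is 2n + 1, and the box size and grid spacing are arbitrary choices adequate for the lowest states, not parameters from the works cited.

```python
import numpy as np

def npo1d_levels(lam, g, L=12.0, npts=2000, nlev=4):
    """Lowest levels of H = -d^2/dx^2 + x^2 + lam*x^2/(1 + g*x^2) on [-L, L]."""
    x = np.linspace(-L, L, npts)
    h = x[1] - x[0]
    V = x**2 + lam * x**2 / (1.0 + g * x**2)
    off = -np.ones(npts - 1) / h**2
    H = np.diag(2.0 / h**2 + V) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:nlev]

print("lam=0,   g=1   :", npo1d_levels(0.0, 1.0))     # ~ 1, 3, 5, 7 (harmonic check)
print("lam=0.1, g=0.1 :", npo1d_levels(0.1, 0.1))     # weakly perturbed oscillator
print("lam=10,  g=100 :", npo1d_levels(10.0, 100.0))  # large g: close to 2n + 1 + lam/g
```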
The literature for the 1D NPO is vast; only some of the most important works are cited here, chronologically. A non-perturbative method, in conjunction with a Ritz variational method (with a Hermite polynomial basis), was developed for the ground and first two excited states with reasonable success [182]. Using perturbation theory, asymptotic expansions were derived [183] for energies and eigenfunctions in the range of small g and large λ. A combined variational and perturbation theory with properly scaled harmonic-oscillator functions as the basis set was used as well [184]. A Hill-determinant method [185] was proposed. The ground and first three excited states of the interaction potential were obtained by forming a [6,6] Padé approximant to the energy perturbation series through a hypervirial relation [186]. A perturbed ladder-operator method [187] was applied to the resolution of the perturbed harmonic-oscillator wave equation for cases when the perturbation is expandable in a convergent series of Hermite polynomials. Some other methods are: (i) a variety of finite-difference approaches of different flavors [188-190] with differing accuracy; (ii) quasi-polynomial solutions with the help of the confluent Heun equation or the spheroidal Heun equation [191]; (iii) an algebraic perturbation theory, based upon the SO(2,1) dynamical group and a tilting transformation, found to be quite successful for eigenfunctions and eigenvalues in the small-g region [192]; (iv) supersymmetric as well as ordinary WKB methods [176]; (v) a mixed continued-fraction algorithm [193]; (vi) an analytic continuation procedure [194] using a Taylor series, which produces very accurate energies and wave functions; (vii) perturbation theory with mixed hypervirial and Hellmann-Feynman theorems [195]; (viii) a composite of the modified Hill determinant as used in [172], incorporating an operator method and a vector recurrence [196], which gives quite accurate solutions; (ix) variational bounds via the Rayleigh-Ritz theorem [197]; (x) a quadrature discretization technique [198]; (xi) purely numerical approaches [199], etc. While most of these focus on the 1D case, some (such as [193]) deal with both 1D and 3D and/or N dimensions.
In parallel to the works in 1D, a great deal of attention has been paid to investigating the 3D NPO eigenvalues and eigenfunctions over the past several decades, although the amount of work is visibly and surprisingly much smaller than for the 1D counterpart. Through a supersymmetry-inspired factorization method [201], it was possible to obtain exact algebraic-type solutions under suitable constraints on the potential parameters. The right choice of potential parameters leads to compact analytic expressions for exact eigenenergies and eigenfunctions, as well as the constraint relations. Exact solutions of the NPO in 2D and 3D have been studied in [202] also. Quasi-polynomial solutions in N dimensions were suggested in [203] through the confluent Heun equation. Some of the successful works in 3D are quoted here. Through a shifted 1/N expansion [169,181], results for the 3D NPO were obtained for 9 sets of n_r, l values with n = 0−4. An eigenvalue moment method [200] has been quite promising in providing accurate estimates of energy bounds for the 3D NPO. A combined hypervirial and Padé approximation [204] has provided very accurate results for this potential. A unified variational treatment [180] based on the Gol'dman and Krivchenkov Hamiltonian offered very accurate bounds. Quite unfortunately, however, while many accurate results for the 1D NPO are available (for example, eigenvalues accurate up to 10 decimal places were obtained in a number of works such as [190,192,194-198]), similar results for general states of the 3D NPO for arbitrary sets of potential parameters have been obtained in very few of the works mentioned above. To the best of our knowledge, such accurate results could be obtained by only three methods [180,200,204]. Even then, leaving aside [204], the other two works deal chiefly with bounds and do not provide direct eigenvalues. Thus there is a genuine lack of good-quality results for the 3D NPO. Moreover, with the rare exception of [169], virtually all the above-mentioned works have focused mainly on positive λ, even though it has been known for a long time that equally well-behaved solutions can be obtained for negative λ provided g > 0. The negative-λ case has been critically examined in detail for 1D in [196,197]. In the following, we discuss the performance of the GPS method. Table VIII gives a few eigenvalues corresponding to certain levels of the 3D NPO for some particular values of g, λ which offer exact analytical solutions (denoted by asterisks). These are available for λ < 0 and are presented for the lowest (n_r = 0) states of l = 0−2. Literature results are quoted wherever possible; some of them were reported long ago using a shifted 1/N expansion [181]. In all cases, 12-13 decimal-place accuracy is easily achieved by the GPS method [51]. These results clearly outperform the best results available so far. Also, calculations with λ = 0 for low as well as high values of g promptly lead to the expected harmonic-oscillator solutions (up to the 13th decimal place); these are not presented here and can be found in [51]. Next, Table IX compares the lowest two eigenstates (n_r = 0, 1) of l = 0−3 with the best existing literature data for a sufficiently broad range of interaction (g, λ varied from 0.1-100). For small g, λ, the estimates of bounds obtained in the eigenvalue moment method [200] are usually good, but they deteriorate quite badly for larger values of the parameters. Significantly improved bounds have been published lately [180].
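One conditionally exact 3D case of this kind can be checked numerically with very little effort. In the reduced units assumed here, the l = 0 radial equation reads −u'' + [r² + λr²/(1 + gr²)]u = Eu; direct substitution of u = r(1 + gr²)exp(−r²/2) shows it is an exact nodeless solution when λ = −2g(2 + 3g), with E = 3 − 6g. This particular constraint is derived here purely for illustration (it is not quoted from the paper or its tables), and the grid parameters below are arbitrary.

```python
import numpy as np

def npo3d_ground(lam, g, rmax=12.0, npts=2500):
    """Lowest l = 0 level of -u'' + [r^2 + lam*r^2/(1 + g*r^2)] u = E u, u(0)=u(rmax)=0."""
    r = np.linspace(0.0, rmax, npts + 2)[1:-1]
    h = r[1] - r[0]
    V = r**2 + lam * r**2 / (1.0 + g * r**2)
    off = -np.ones(npts - 1) / h**2
    H = np.diag(2.0 / h**2 + V) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

g = 0.1
lam = -2.0 * g * (2.0 + 3.0 * g)          # = -0.46, the conditionally exact case assumed above
print("numerical E0:", npo3d_ground(lam, g))
print("analytic  E0:", 3.0 - 6.0 * g)     # = 2.4
```

Note that the constraint indeed requires λ < 0 for g > 0, consistent with the sign of λ for which the tabulated exact solutions are stated to exist.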
It is quite clear that our method provides the most accurate direct estimates for all these states. Figure 4 vividly displays the variation of the ground-state energy against g for fixed values of λ in (a), and against λ for fixed values of g in (b).
Similar plots are obtained for higher states as well. For fixed λ, the energy steadily decreases as g increases, eventually approaching the harmonic-oscillator value asymptotically for large g, as expected. With larger λ, this behavior is observed with a greater magnitude.
Negative λ shows a correspondingly opposite trend. In (b), the changes against λ are seen to be more prominent for smaller g; once again the eigenvalues approach harmonic-oscillator energies with an increase in g. Many other features regarding higher states (as high as l = 20) have been discussed at length in [51]. These high-lying states are inherently diffuse and consequently extend over a larger spatial region; hence a larger r_max value is needed in order to incorporate these long-range contributions. To our knowledge, only two sets of results are available in the literature: one in the form of lower and upper bounds [200], the other from finite-difference calculations [204]. Our results [51] practically coincide with those from the latter in all cases.
The first instance of such a violation occurs for the (2s, 1f) pair, and thereafter it happens for several adjacent pairs of states. Finally, it was also noted that the "usual" or most commonly observed orderings in the cases of λ < 0 and λ > 0 for a fixed n follow a mirror-image relationship. For instance, for n = 9, the ordering for λ > 0 is 5p < 4f < 3h < 2j < 1l, whereas for λ < 0 this is reversed, i.e., 1l < 2j < 3h < 4f < 5p.
At this stage, a few words regarding degeneracy issues in the 3D NPO are in order. It is well known that all the (n_r, l) states belonging to a particular n in a 3D harmonic oscillator, satisfying n = 2n_r + l, are degenerate. For example, there are 3 degenerate states corresponding to the (n_r, l) pairs (2,0), (1,2) and (0,4) for n = 4. For λ ≠ 0, such degeneracies in a 3D NPO vanish, and these are conveniently analyzed through the respective level spacings, ΔE = E_{n_r,l} − E_{n_r',l'}. The first 4 (n ≤ 4) such splittings for positive λ were investigated in [169,193] and more recently in [51], where very similar qualitative features were observed. However, such studies for λ < 0 have been made only recently [51] through the GPS method; the variation of 12 such splittings possible between certain adjacent levels of n = 2−7 was considered with respect to the interaction parameters in the potential. The first 2 splittings, E_{1,0} − E_{0,2} and E_{1,1} − E_{0,3}, are related to n = 2, 3, whereas the last 3 of them, E_{1,5} − E_{0,7}, E_{2,3} − E_{1,5} and E_{3,1} − E_{2,3}, correspond to n = 7. Changes in these splittings with respect to λ were followed by varying the latter from −0.1 to −100, keeping g fixed at 0.1. The same for g was monitored by changing the latter from 1 to 1000 for fixed λ = −100. All ΔEs tend to increase as |λ| increases (eventually approaching a constant value in the limit λ → ±∞) and decrease as g increases. Furthermore, the splittings tend to vanish in the limit of large g for a given |λ|. For a more detailed discussion of the energies, as well as other quantities, see [51].
E. Application to Atomic Rydberg and Hollow Resonances
In this subsection, we briefly mention one recent application of the GPS method to atomic excited states, with special reference to singly and doubly excited Rydberg resonances in He and triply excited hollow resonances in three-electron atoms. For this, the GPS method is used to solve the radial Kohn-Sham (KS) equation within a density functional framework. This approach has been very successfully applied to a broad range of important physical processes in atomic excitations, such as multiply excited states, valence as well as core excitations, high-lying Rydberg states, negative ions, etc. [39,44,47,49]. Dynamical situations have also been treated by this method quite well [40,41].
Triply excited atomic lithium, containing all three electrons in n ≥ 2 shells and leaving the K shell completely empty, constitutes an interesting multiply excited atomic problem. This is a prototypical case of a highly correlated three-electron system under the influence of a nucleus, and thus typifies the well-known four-body Coulomb problem (an ideal system for examining delicate inter-electronic correlation). Since one-step photo-generation of such a state requires coherent excitation of all three available electrons, these states are significantly more difficult to produce from the ground state by single-photon absorption or electron-impact excitation. Besides, their close proximity to more than one threshold, as well as the presence of an infinite number of open channels, offers considerable challenges to both experimentalists and theoreticians. A vast majority of these hollow states are auto-ionizing and have found important practical applications in high-temperature plasma diagnostics by means of high-resolution X-ray spectroscopy.
The development of third-generation, extreme-UV synchrotron radiation sources, as well as the availability of several powerful, sophisticated quantum mechanical methodologies, has inspired an overwhelming amount of work in the last three decades towards characterization of these states. Ever since the first electron-He scattering experiment [205], and the observation of 2l2l′2l′′ states in Li and highly charged ions in beam-foil experiments [206], many subsequent attempts have been made which helped identify many bound states, such as 2p³ ⁴Sᵒ, besides some auto-ionizing ones. However, the lowest 2s²2p ²Pᵒ resonance in Li was observed in photo-absorption spectroscopy [207] through a dual-laser-plasma technique. Thereafter, various higher resonances were found and tentatively classified in a wide range of 140-165 eV. An enormous amount of experimental work has appeared in the literature lately for high-precision determination of these resonance positions, widths, lifetimes, etc. (see, for example, [44,47] for other experimental references on the subject).
Parallel to the experimental developments, an impressive amount of theoretical work has been reported in the literature over the past years, with a widely varying range of complexity, capability and accuracy. However, due to the problems mentioned earlier, accurate and dependable characterization of these states has remained a formidable challenge from a theoretical standpoint. Despite all this, several works are available in the literature. Some of the most successful methods are: (i) the truncated diagonalization method [208]; (ii) the 1/Z expansion method [209]; (iii) many-body perturbation theory [210]; (iv) state-specific theory [211]; (v) configuration interaction [212]; (vi) a joint saddle-point and complex-coordinate-rotation approach [213]; (vii) a space-partition and stabilization procedure [214]; (viii) several variants of R-matrix theory [215,216]; (ix) a hyperspherical coordinate approach [217,218], etc.
Density functional theory (DFT) [221-224] has emerged as one of the most powerful and successful tools for electronic structure calculations of atoms, molecules and solids in the past four decades. While for ground states its success has been conspicuous, this was not so for excited states until only recently. In the last few years, a DFT-based formalism has been proposed for such resonances [39,44,47,49]. It exploits a local, non-variational, work-function-based exchange potential [225], found to be much more advantageous computationally than the non-local Hartree-Fock potential. Earlier, it was demonstrated to be quite successful for general atomic excited states [226-228]. Some of the applications included singly, doubly and triply excited states, low- and moderately high-lying states, valence and core excitations, as well as auto-ionizing and satellite states, etc. However, these all used a Numerov-type finite-difference scheme for discretization of the spatial coordinates and solution of the relevant radial KS equation [220], in which (in atomic units) the three terms on the left-hand side relate to the kinetic, electrostatic and exchange-correlation (XC) contributions.
Here v_es(r) contains the nuclear attraction, −Z/r (Z being the nuclear charge), and the classical interelectronic Coulomb (Hartree) repulsion. The exchange potential is obtained through a physical interpretation, as the work required to move an electron against an electric field E_x(r) arising out of its own Fermi-hole charge distribution, ρ_x(r, r′), and is given by a line integral [225]. For well-defined potentials, the work done must be path-independent (the field must be irrotational), which is satisfied for spherically symmetric systems such as those studied here. The exchange potential can now be calculated accurately, as the Fermi hole is known exactly in terms of the single-particle orbitals. Within the central-field approximation, ψ_i(r) = R_nl(r) Y_lm(Ω). Finally, employing a suitable correlation functional (here we employ one of the most widely used, that of Lee, Yang and Parr [229]), one obtains a self-consistent set of orbitals, which gives the electron density. For the high-lying states (n > 10), the energies in the 3 columns (X-only, XC, and reference) are seen to be essentially identical, consistent with the fact that for Rydberg states the asymptotic long-range Coulomb potential (which arises solely because of the exchange potential) remains the dominant factor in determining their electronic structure; electron correlation plays very little role.
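The electrostatic piece v_es(r) of the KS potential is the simplest ingredient to make concrete on a radial grid. The sketch below evaluates it for an assumed hydrogenic 1s test density (not a density produced by the calculations described here) and checks the spherical Hartree part against its known closed form; grid ranges and the test charge Z are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

Z = 2.0                                            # nuclear charge of the test case (assumed)
r = np.linspace(1e-4, 20.0, 20000)
rho = (Z**3 / np.pi) * np.exp(-2.0 * Z * r)        # hydrogenic 1s density, one electron

# v_H(r) = (1/r) * int_0^r 4*pi*s^2 rho(s) ds  +  int_r^inf 4*pi*s rho(s) ds
q_in = cumulative_trapezoid(4.0 * np.pi * r**2 * rho, r, initial=0.0)
g_out = cumulative_trapezoid(4.0 * np.pi * r * rho, r, initial=0.0)
v_H = q_in / r + (g_out[-1] - g_out)
v_es = -Z / r + v_H                                # electrostatic part of the KS potential

v_H_exact = 1.0 / r - np.exp(-2.0 * Z * r) * (Z + 1.0 / r)   # closed form for this density
print("max |v_H - analytic| =", np.max(np.abs(v_H - v_H_exact)))
```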
With a decrease in n, discrepancies among these 3 columns become noticeable, as correlation now plays an increasingly important role. For low n, the XC state energies fall slightly below the literature values, most probably due to an overestimation caused by the correlation functional used. As n goes to higher values, the energy spacings decrease, and the present work reproduces this behavior very well. It is worth mentioning that for high-lying states, many commonly used quantum mechanical methods encounter a cumbersome problem of self-consistent convergence, chiefly due to inaccuracies in the potential and densities. However, through the GPS method we succeeded in obtaining converged results for all the states reported. Such results were produced for even higher states by this approach in a straightforward manner (not presented here). Finally, even though the method is non-variational, no anomaly in the energy orderings has been observed. Figure 5 displays the calculated radial density for the 16p² ¹Dᵉ state of He. As expected, there are 15 maxima (the first peak can be seen only after magnification).
Next, some selected hollow states of Li, obtained from the GPS method, are reported. For convenience, an independent-particle-model classification [208,230] is adopted; thus the six core Li⁺ n = 2 intra-shell doubly excited states, viz., 2s² ¹Sᵉ, 2s2p ³Pᵒ, 2p² ³Pᵉ, 2p² ¹Dᵉ, 2s2p ¹Pᵒ and 2p² ¹Sᵉ, are denoted by A, B, C, D, E and F, respectively. This table compares the even-parity A,nd and D,ns ²Dᵉ Li hollow states with literature data, arising from the electronic configurations 2s²nd and 2p²ns with n up to 24 and 25, respectively. For the first series, to the best of our knowledge, no experimental results are available, whereas for the latter only the lowest state has been detected experimentally, at 144.77 eV, in photo-electron spectroscopy [232]. The DFT excitation energy matches the experimental value excellently (only 0.043 eV lower, a deviation of 0.03%). The term energies are slightly underestimated in all cases with respect to the truncated-diagonalization results [230], whereas for the latter series some overestimation is noticed for n = 2, 4-7; this could occur either due to (a) the non-variational nature of the exchange potential employed and/or (b) the inadequacy of the LYP correlation energy functional. Higher resonances (up to n = 22, 25) of the two series have been theoretically investigated through the R-matrix method [216]; the DFT excitation energies show discrepancies in the ranges of 0.46-0.50% and 0.04-0.58% for them. Finally, Fig. 6 depicts radial densities for some selected hollow states of Li, which show the characteristic shell structures through the superposition of radial densities.
In [44], DFT calculations have also been reported for twelve 2l2l′nl′′ (n ≥ 2) triply excited hollow resonance series of Li, viz., 2s²ns ²Sᵉ, 2s²np ²Pᵒ, 2s²nd ²Dᵉ, 2p²ns ²Dᵉ, ⁴Pᵉ, 2s2pns ⁴…, some of them having much larger widths [233]. The major difficulty in handling such hollow resonances at higher photon energies arises mainly from a very rapid increase in the density of triply and other lower excited states of the same symmetry, as well as from the large number of available open channels, leading to very strong and complicated correlation effects. Nevertheless, some attempts have been made to study these states. Some of the prominent theoretical works include: (a) a complex-scaling method with correlated basis functions constructed from B splines [234]; (b) state-specific theory [235], etc. For a more detailed discussion, as well as the available experimental and theoretical works on these, see [44]. Hollow resonances of the Li isoelectronic series have also been studied successfully by this method in [47], where eight 2l2l′nl′′ (3 ≤ n ≤ 6) hollow resonance series, namely 2s²ns ²Sᵉ, 2s²np ²Pᵒ, 2s²nd ²Dᵉ, 2p²ns ²Dᵉ, 2s2pns ⁴Pᵒ, 2s2pnp ⁴Dᵉ, 2p²np ⁴Dᵒ, of all the seven positive ions with Z = 4-10 were reported.
V. CONCLUDING REMARKS
Quantum mechanics nowadays finds applications in almost every imaginable area of contemporary science and technology, including physics, chemistry, materials science, nanoscience, etc. The focus of this chapter, central potentials, especially those that are singular or near-singular, take centre stage in atomic, molecular and optical physics and in chemistry.
However, exactly solvable quantum systems, more so for singular potentials, are very scarce. Innumerable attempts have been made to solve the respective Schrödinger equations almost ever since the early inception of quantum mechanics. While variational and perturbative approaches remain the most commonly employed, many other attractive, elegant formalisms are available these days, which offer physically meaningful and quite accurate results for various quantum mechanical systems of interest. An enormous number of analytical, semi-numerical and purely numerical techniques have also been developed over the decades for this purpose. Nevertheless, as discussed above, there is still a great need for better approaches, because, for all the physical systems covered in this work (and probably also for many other situations not touched upon here), many of these methods are satisfactory for certain ranges of potential parameters and less successful for other sets.
Moreover, a vast majority of these methods work well for lower states; outstanding difficulties and challenges are encountered for higher states. Additionally, extraction of the radial density, as well as of other expectation values, is not a straightforward task. Very few methods satisfy all these criteria; in essence, there is a great need for a methodology that does.
Here we have presented an account of the development of the GPS method in the context of central (both singular and non-singular) potentials in quantum mechanics. The motivation, background and need for such a method, as well as its details, have been discussed at some length.
Although initially designed for Coulombic singular systems, its success for other singular systems as well as for non-singular central potentials was realized promptly. The formalism is quite general, in the sense that it delivers uniformly accurate results for lower and higher states of a broad spectrum of potentials (describing a variety of physical systems) covering a wide range of interaction/coupling. Its usefulness and applicability were demonstrated for some specific potentials in Section IV, such as the spiked oscillator, Hulthén, Yukawa, power-law, logarithmic and non-polynomial oscillator potentials, and lastly, some Rydberg and hollow resonances in atomic systems. It has been successfully applied to certain other systems as well, which have not been discussed here (such as the Hellmann potential or the Coulomb potential perturbed by linear and quadratic couplings); these can be found in Refs. [39-52]. Eigenvalues, eigenfunctions, radial densities and spatial expectation values are obtained in a simple manner. A comparison with literature data reveals that in most cases the results obtained are quite competitive with the best ones or surpass the accuracy of the best existing results. In almost all cases, the method helps to estimate many new states for the first time, which could constitute useful references for future investigations. Finally, it is hoped that this method will continue to remain a valuable tool for many other physical systems in the future.
Computerized protocol of orofacial myofunctional evaluation with scores: usability and validity
Purpose: To test the usability of the Computerized Orofacial Myofunctional Evaluation with Scores (OMES) protocol and to analyze its validity. Methods: The study was divided into three stages. In the first stage, the computerized version of OMES was produced. The second stage was the validation of the user interface, in which 100 OMES protocols from a database, filled in printed form, were transferred to the computerized instrument; necessary changes to the system were made at this stage. In the third stage, on the usability of the multimedia version of the OMES protocol, three evaluators transferred data from another 25 printed protocols from the database to the computerized version, the time to transfer the data of each protocol was recorded and compared between examiners by one-way ANOVA, and these evaluators analyzed the usability of the computerized protocol according to the "Ten Usability Heuristics" described in the literature. Results: The computerized protocol satisfied the principles of heuristic usability, according to the evaluation of the three Speech-Language Pathology evaluators, and the average time spent by the evaluators to transpose the data of each protocol to the software ranged from 3.1±0.75 to 3.83±0.91 minutes. Conclusion: The computerized OMES protocol is valid and had its usability/functionality confirmed.
INTRODUCTION
Technological advancement and the qualification of professionals have enabled the construction of electronic protocols. Various health services have implemented them, or are in the implementation phase, for clinical application and scientific research. Therefore, Speech-Language Pathology and Audiology must keep up with this moment of transition and participate in it.
Electronic protocols provide better access to information, greater security and electronic exchange of data between institutions, as well as facilitate collective research, with the possibility of retrieval and cross-checking of this information (1) .
Previously, its use was limited due to the cost of the equipment, its maintenance, and the lack of skilled labor or the possible resistance of people to computers. However, it is possible to create these protocols today, increasing the rate of accuracy of records, with low cost, reduced physical space, and minimal training of personnel (2)(3)(4)(5)(6).
These tools can facilitate administrative and financial organization of consultations; staff time in handling procedures; retrieval of patient information, knowledge, and availability of this knowledge where and when it is necessary for adequate decision-making; and, in some cases, the generation of diagnosis and therapeutic guidance (1,7,8) .
On the basis of this, we developed a computerized version of the orofacial myofunctional evaluation with scores (OMES) (9) protocol to optimize the records for clinical use and research.
Briefly, the OMES protocol was designed to provide sufficient data for detection and grading of orofacial myofunctional disorders, without being too extensive and comprehensive. Previously, it has been validated for children (9), youth, and adults, with good sensitivity and specificity (10).
For software to be considered valid and for its usage to be proper, it must go through a stage known as usability (functionality) inspection, which is a way of evaluating user interfaces (11).
Usability is defined in ISO 9241-11 as: "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (p. 3) (12) .
Therefore, it concerns man-machine interaction. To be easily accepted, the instrument should be user-friendly and easy to use (13), and its validity is related to the satisfaction and usefulness perceived by users (14).
This study aimed to determine the usability of the OMES computerized protocol and analyze its validity.
METHODS
The project was approved by the research ethics committee of Hospital das Clínicas of Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo (USP-HCFMRP), under HCRP protocol no. 15602/2-12. Participants (evaluators) were informed about the objectives and methods of the study and were asked to sign a free and informed consent form.
Production of the computerized version of orofacial myofunctional evaluation with scores
In the OMES computerized protocol, the characteristics of the original version were maintained, and consequently its psychometric properties (9). The software was developed by an undergraduate student of the Biomedical Informatics course under the guidance of a teacher in the area. For its creation, the Java programming language, executable on the Windows operating system, was used; the protocol screens follow this order: 1. entry into the system and selection of an existing or new protocol; 2. identification data of the patient; 3. evaluation data on appearance and posture; 4. mobility assessment data; 5. data on functions; 6. data from the functional evaluation of occlusion; 7. placeholder for final comments. A minimal sketch of a record structure mirroring this order is given below.
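The sketch below is illustrative only: the field names are assumptions chosen to mirror the screen order listed above, not the actual data model of the software, which, as noted, was written in Java.

```python
from dataclasses import dataclass, field

@dataclass
class OMESRecord:
    patient_id: str
    identification: dict = field(default_factory=dict)        # screen 2
    appearance_posture: dict = field(default_factory=dict)    # screen 3
    mobility: dict = field(default_factory=dict)              # screen 4
    functions: dict = field(default_factory=dict)             # screen 5 (breathing, chewing, speech)
    functional_occlusion: dict = field(default_factory=dict)  # screen 6
    comments: str = ""                                         # screen 7

    def category_score(self, category: str) -> int:
        """Sum of the item scores entered for one category."""
        return sum(getattr(self, category).values())

    def total_score(self) -> int:
        return sum(self.category_score(c)
                   for c in ("appearance_posture", "mobility", "functions"))
```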
Validation of the user interface
In this pretest stage of the instrument, 100 OMES protocols, completed in hard copy and taken from the prior database of the research team, were used. All protocols were transferred to the computerized version by an undergraduate student.
Twenty-five printed protocols were randomly selected from the total and transferred to the computerized OMES by another team member. In order not to create duplicates, the records were entered into the system with a code before their identification.
Subsequently, each user listed the possible changes in the software related to its operation and/or errors detected. This information was cross-checked and discussed, and then presented, so that the necessary changes were made in the area of Biomedical Informatics before the next step.
Usability of the multimedia version of the orofacial myofunctional evaluation with scores protocol
The corrected version of the OMES computerized protocol was tested for its validity as follows: A. Three Speech-Language therapists (mean age: 25±0.8 years), with prior training in the area of orofacial motricity and with different lengths of experience (from 30 to 66 months, average: 46±18.3 months) in the use of the OMES protocol (printed version), participated as evaluators of usability. They scanned the data independently and did not exchange information.
Data from another 25 printed protocols, different from those in the previous step, were transferred from the database to the computerized version. In order not to create duplicates in the system, each evaluator entered a different code into the system to identify each protocol. B. The time for the data transfer of each protocol was recorded. C. The three evaluators also independently analyzed the usability of the system in accordance with the "Ten Usability Heuristics" proposed by Nielsen (11). For each of the items described, each evaluator responded with one of the alternatives: does not satisfy (score 1), partially satisfies (score 2), or satisfies (score 3). The instrument used to evaluate usability, containing the heuristics and their descriptions, is presented in Chart 1.
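The scoring scheme just described caps each evaluator's total at 30 points (ten heuristics, three points each). The snippet below is a minimal illustration of that bookkeeping using Nielsen's ten heuristic names; the example ratings are invented for illustration and are not the study's data.

```python
HEURISTICS = [
    "visibility of system status", "match between system and the real world",
    "user control and freedom", "consistency and standards", "error prevention",
    "recognition rather than recall", "flexibility and efficiency of use",
    "aesthetic and minimalist design", "help users recognize and recover from errors",
    "help and documentation",
]

def total_score(ratings: dict) -> int:
    """Sum of the 1/2/3 ratings over all ten heuristics (maximum 30)."""
    assert set(ratings) == set(HEURISTICS)
    assert all(r in (1, 2, 3) for r in ratings.values())
    return sum(ratings.values())

example = {h: 3 for h in HEURISTICS}
example["error prevention"] = 2          # the item rated lowest in the study
print(total_score(example))              # 29, within the 28-29 range reported later
```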
Data analysis
Descriptive statistics were computed for the variables involved. The examiners were compared in terms of the time spent on the transfer of information by a one-way analysis of variance (ANOVA) test.
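The comparison described above can be reproduced in outline as follows. The arrays of per-protocol transfer times are simulated stand-ins (not the study's measurements), generated only to show the shape of the analysis: a one-way ANOVA across the three evaluators, followed by a Tukey-style pairwise post hoc test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t1 = rng.normal(3.5, 0.8, 25)    # evaluator 1: 25 simulated transfer times (minutes)
t2 = rng.normal(3.83, 0.91, 25)  # evaluator 2
t3 = rng.normal(3.1, 0.75, 25)   # evaluator 3

F, p = stats.f_oneway(t1, t2, t3)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

# pairwise comparison (scipy >= 1.11 provides tukey_hsd)
print(stats.tukey_hsd(t1, t2, t3))
```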
Validation of the user interface
During the pretest of the OMES computerized protocol, problems were found and the needed changes were proposed. In general, the main errors found were related to the standardization of markers in the protocol; buttons that were not performing their functions correctly, or that were missing; absence of items from the printed protocol; and overlapping of some data when opening an already filled protocol.
Match between system and the real world
The system should speak the user's language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom
Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention
Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall
Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another.
Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use
Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users.
Allow users to tailor frequent actions.
Aesthetic and minimalist design
Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation
Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Source: Nielsen (11).
Before starting the usability testing of the multimedia version of the OMES protocol, adjustments were made and the problems already listed were solved. Thus, the computerized protocol, as previously mentioned, followed the pattern of the printed protocol, as shown in Figures 1 and 2.
Usability of the multimedia version of the orofacial myofunctional evaluation with scores protocol
The computerized protocol complied with the usability heuristics, according to the evaluation of the three Speech-Language therapists, with scores ranging from 28 to 29 out of a total of 30 points. The principle evaluated with the lowest score was (5) "error prevention". The data and scores are shown in Table 1. The average time spent by the evaluators for the transposition of the data of each protocol to the software ranged from 3.1 to 3.83 minutes. The time spent by evaluator 2 was significantly higher than that spent by evaluator 3 (p<0.01). Statistical comparisons are presented in Table 2 and Graph 1.
DISCUSSION
In this study, heuristic usability of the OMES computerized protocol was determined with excellent results in the ratings of the three users.
A usability problem can be defined as any characteristic, observed in a given situation, which may delay, hinder, or prevent the completion of a task, annoying, embarrassing, or traumatizing the user (15) .
In the case of the computerized OMES protocol, only the item regarding the possibility of "error prevention" was rated by two evaluators as partially satisfied. In fact, the program does not inform the evaluator if, for example, they forgot to enter any data from the evaluation; therefore, there is no lock that can prevent continuity. However, if a given piece of information cannot be obtained, this will not prevent the continuity of the evaluation and its registration. The average time spent for the transposition of the data was brief, no more than 3.83±0.91 minutes, and the difference between two of the evaluators, although statistically significant, did not exceed 1 minute. This time does not concern the evaluation of patients with simultaneous input of data into the electronic protocol but, as explained, only the transposition of the printed protocols to the program.
The objective regarding the computerized OMES protocol was to make it functional. Following the principles proposed by Nielsen (11), the information appears in a natural and logical order, with user-friendly language, as already outlined in the original protocol, facilitating its management.
Electronic protocols present many conveniences to the user and ensure improved information management and research quality (1). In clinical terms, the computerized version of the protocol in question adds convenience, speed, and ease of visualization of results: with just one command ("click"), one can enter the result of the evaluated item. For each category of the protocol, such as appearance, posture, mobility of the stomatognathic system components, and functions (breathing, chewing, and speaking), the software presents the sum as soon as the evaluation is completed. When the assessment is complete, the total score is reported and corresponds to the orofacial myofunctional condition of the individual evaluated.
From this, the professional can define the need for orofacial myofunctional therapy for a given patient, comparing the numerical results of the assessment to the normal parameters previously described (16,17).
It is noteworthy that the use of the computerized OMES protocol does not eliminate the need for knowledge in the area of orofacial motricity and the need for training in evaluation.
A careful orofacial myofunctional evaluation, especially when the instrument has been tested for validity and has good levels of sensitivity and specificity, favors a correct diagnosis and a proper decision on therapy (18).
The use of the OMES computerized protocol for the evaluation of patients is feasible, and a digital database is generated with all the information. Therefore, no further data entry is necessary after the evaluation, which reduces the time needed to organize the data, as well as improving information quality and the accuracy of records (4). The data relating to patients and the results can be retrieved quickly, clearly, without generating doubts (3), and with reduced costs (5).
The need for computerization in various areas, including health, seems increasingly indispensable, because its advances have opened many possibilities for the use of information technology in clinical and scientific research (4). Scientific research in particular has grown, both qualitatively and quantitatively (3).
To our knowledge, the OMES computerized protocol is the first digitalized instrument of orofacial myofunctional evaluation, with proven construct and criterion validity (9,10) as well as usability heuristics, developed in the area of orofacial motricity. On the basis of our experience, we believe that it has the potential to foster advances in clinical practice and in scientific research in the area.
CONCLUSION
The OMES computerized protocol had its usability/functionality confirmed and proved useful for the storage and retrieval of orofacial myofunctional evaluation data.
Figure 1. Example screenshot of the orofacial myofunctional evaluation with scores computerized protocol regarding the assessment of mobility.
Graph 1. Average total time spent per evaluator for the typing of the protocols, in minutes, with respective standard deviations.
Chart 1. Usability heuristics evaluation conducted by the evaluators with regard to the Orofacial Myofunctional Evaluation with Scores computerized protocol.
Table 1. Evaluation of the usability heuristics of the orofacial myofunctional evaluation with scores computerized protocol, according to the principles of Nielsen (11).
Table 2. Total time spent per evaluator for the transfer of 25 orofacial myofunctional evaluation with scores protocols from hard copies to the computerized version (means and standard deviations, in minutes; means with different letters indicate differences in the Tukey post test; p = probability in the ANOVA test; SD = standard deviation).
Analysis of Changes in Modulation Parameters on Optical Communication System
This study aims to analyze the NRZ-OOK modulation format in free-space optical communication. Simulations were conducted to examine the influence of the SpS parameter and of the optical signal power at the modulator input on the average power of the modulated optical signal. The research employed a computer simulation approach, in which the optical signal is transmitted through an optical path without cables, enabling fast data transmission over longer distances compared to wired media. The method specifically uses Python, a language widely applied in machine-learning work. The simulation results indicate that higher SpS values result in a more accurate and smoother optical signal representation. Furthermore, an increase in the optical signal power at the modulator input increases the average power of the modulated optical signal; however, negative optical power values do not hold any relevant physical meaning. The magnitude of the Pi_dBm value also affects the optical signal spectrum, with higher optical power generating stronger frequency components. Graphs with negative optical power exhibit significant noise due to distortion and non-linearity. The findings of this study provide a better understanding of the influence of these parameters in the NRZ-OOK modulation format for free-space optical communication.
Introduction
NRZ modulation is a modulation format that uses constant levels representing bit 0 and bit 1 [1]. The high level (e.g., +1) represents bit 1, while the low level (e.g., -1) represents bit 0 [2]. There is no level change within each bit period; hence, it is called "Non-Return-to-Zero" (NRZ) [3]. On the other hand, OOK modulation is a modulation scheme that discretely changes the optical intensity between two levels, namely "on" (optical signal present) and "off" (optical signal absent) [4,5]. There are several transmission parameters in the NRZ-OOK (On-Off Keying) modulation format, including SpS (Samples per Symbol), Rs (symbol rate), Pi_dBm (optical signal power at the modulator input in dBm), Vπ, and Vb (MZM parameters). In this simulation, the focus will be on two parameters: SpS and Pi_dBm.
SpS refers to the number of samples representing one symbol in the modulated optical signal. Higher SpS values result in a smoother and more accurate optical signal representation, since a higher SpS value allows for a better approximation of rapid changes in the optical signal. The optical signal power at the modulator input (Pi) is the optical power fed into the electro-optic modulator, such as a Mach-Zehnder modulator (MZM) [6].
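The role of SpS can be seen from a minimal waveform construction: each symbol of the NRZ-OOK drive signal is simply held for SpS consecutive samples. The sketch below is illustrative only; the bit-sequence length and SpS values are assumptions, not the paper's simulation settings.

```python
import numpy as np

def nrz_ook(bits, sps):
    """Rectangular NRZ-OOK waveform: 'on' for bit 1, 'off' for bit 0, sps samples/symbol."""
    return np.repeat(np.asarray(bits, dtype=float), sps)

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 16)
for sps in (2, 8, 32):
    wave = nrz_ook(bits, sps)
    print(f"SpS = {sps:2d}: {wave.size} samples for {bits.size} bits")
```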
The magnitude of this optical power affects the average power of the modulated optical signal produced. A higher Pi input to the modulator leads to a higher average power of the modulated optical signal. This result is related to the linear response of the modulator and the resulting optical intensity [7].
The magnitude of the optical signal power influences the frequency components in the optical signal spectrum [8]: higher optical power generates stronger frequency components. Higher optical power also produces a larger amplitude in the modulated optical signal, meaning that the amplitude of the optical signal waveform increases with an increase in optical power. Additionally, using negative power at the modulator input leads to significant distortion [9] and non-linearity [10] in the resulting optical signal; negative power at the modulator input causes undesired changes in the intensity pattern of the optical signal, resulting in significant noise and reduced optical signal quality [11]. This research applies the NRZ-OOK modulation principle in optical communication, a modulation format that influences transmission quality, thus aiding in the design of more efficient systems [12].
Method
This simulation is carried out in Python, an environment widely used for machine-learning work. First, the process sets the variables to be used and runs the NRZ-OOK modulation format for free-space optical communication within a simulation box. The research uses a pre-designed optical signal to implement the NRZ-OOK modulation format for free-space optical communication within the simulation box; based on previous research, the simulation is conducted to test feasibility [13]. NRZ refers to a modulation format in which the amplitude level of the signal remains constant at the same level throughout one bit, while OOK is a modulation method in which the optical signal is turned on or off to represent data bits. In free-space optical communication, optical signals are transmitted through an optical path without cables, enabling faster data transmission and longer distances than wired media. The known digital information is transmitted via optical signals using two different amplitude levels.
Results and Discussion
The NRZ-OOK modulation format for free-space optical communication has been designed, simulated, and tested within a simulation box. In a previous research study, the generated NRZ-OOK modulation indicated that changes in the optical signal can result in significant noise [14][15][16]. Based on the simulations, the impact of the SpS parameter and of the optical signal power at the modulator input on the average power of the modulated optical signal can be observed. Higher SpS values result in a more accurate representation and smoother optical signal modulation. Additionally, increasing Pi directly increases the average power of the modulated optical signal, as can be observed in Table 1. In contrast, when the optical signal power parameter at the modulator input is negative, it does not hold any relevant physical meaning. This is due to the code provided, where the value of Pi_dBm is used to calculate the optical power at the modulator input using the formula Pi = 10**(Pi_dBm/10)*1e-3, with Pi_dBm representing power in decibel-milliwatt (dBm) units and Pi representing the optical power in watts (W) fed into the electro-optic MZM. Therefore, even if the value of Pi_dBm is negative (e.g., Pi_dBm = -15), the calculation results in a positive value for Pi. However, negative Pi_dBm values affect the power spectral density in the graph of the optical signal spectrum compared to positive Pi_dBm values; this can be observed from the differences between the graphs in Figure 1 and Figure 2 in the simulation using an SpS value of 10. The magnitude of the Pi_dBm value influences the resulting optical signal spectrum because higher optical power generates stronger frequency components within the spectrum. Additionally, the magnitude of the Pi_dBm value also affects the generated waveform: higher optical power leads to a larger amplitude in the modulated optical signal.
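The power bookkeeping quoted above can be illustrated with a short, self-contained calculation. The dBm-to-watt conversion below is exactly the quoted formula; the ideal MZM intensity transfer and the Vπ, Vb values are textbook-style assumptions made for this sketch and are not necessarily the transfer function or parameters used in the paper's simulation.

```python
import numpy as np

def dbm_to_watt(p_dbm):
    return 10 ** (p_dbm / 10) * 1e-3                 # same formula as quoted in the text

def avg_power_dbm(p_dbm_in, bits, Vpi=2.0, Vb=-2.0):
    """Average modulated power (dBm) assuming an ideal MZM intensity transfer."""
    Pi = dbm_to_watt(p_dbm_in)
    u = Vb + np.asarray(bits, float) * Vpi           # bit 0 -> transmission null, bit 1 -> peak
    Pout = 0.5 * Pi * (1 + np.cos(np.pi * u / Vpi))  # ideal MZM intensity transfer (assumed)
    return 10 * np.log10(np.mean(Pout) / 1e-3)

bits = np.random.default_rng(2).integers(0, 2, 10_000)
for p in (+5, 0, -15):                               # note: -15 dBm still gives positive watts
    print(f"Pi = {p:+3d} dBm -> average modulated power {avg_power_dbm(p, bits):.2f} dBm")
```

As the last loop shows, a negative Pi_dBm input still maps to a positive optical power in watts, which is the point made in the paragraph above.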
Graphs with negative power values input to the modulator exhibit significant noise due to distortion and non-linearity in the optical signal. When the optical power is negative, the optical signal produced by the modulator experiences significant distortions and non-linearities; the resulting undesired changes in the intensity patterns produce significant noise. This can be observed more clearly in Figure 3 and Figure 4, obtained from the simulation using an SpS value of 10.
Conclusion
This study found that the SpS parameter significantly influences how accurately and smoothly the modulated optical signal is represented: higher values of SpS result in a more precise representation. Furthermore, an increase in Pi leads to an increase in the average power of the modulated optical signal; the higher the optical power fed into the modulator, the higher the average power of the modulated optical signal. Using negative values of Pi_dBm in the optical power calculation still yields positive optical power values; nonetheless, negative values of Pi_dBm hold no relevant physical meaning in the context of optical communication systems. The magnitude of the Pi_dBm value affects the resulting optical signal spectrum, with higher optical power generating stronger frequency components in the spectrum of the modulated optical signal. Graphs with negative optical power input to the modulator exhibit significant noise, since negative optical power leads to distortions and non-linearities in the optical signal. The average value of the modulated optical power obtained is approximately 12,000 dBm if the Pi_dBm input is positive, while the average modulated optical value obtained is around 18,000 dBm if the Pi_dBm input is negative.
A software ability network in service oriented Architecture
In recent years, Service-Oriented Architecture (SOA) has been used as a proficient solution for integrating potentially distributed software in firms and enterprises. Architecture plays a vital role in the evaluation of such systems. In an SOA-based environment, proven pattern solutions and design are among the most important issues that must be considered because of the loosely coupled nature of SOA. There are many qualities to address in software architecture services, such as flexibility, speed, efficiency, reliability and so on. SOA also raises the question of proper governance of design patterns, which becomes a critical issue. In this paper, we propose an architecture for service-oriented, pattern-based enterprises and discuss the role it can play in transformation, applying a conceptual quality framework.
INTRODUCTION
Software architecture (Hofmeister et al., 1999) intuitively denotes the high-level structure of a software system. It can be defined as the set of structures needed to reason about the software system, comprising the software elements, the relations between them, and the properties of both elements and relations. Applying the term "architecture" to software systems is a metaphor that refers to the classical field of the architecture of buildings. According to Garlan and Shaw (1993), the term "software architecture" is used to denote three concepts: the high-level structure of a software system, the discipline of creating such a high-level structure, and the documentation (Bosch, 2004) of this high-level structure. Software architecture exhibits the following characteristics: a multitude of stakeholders, separation of concerns, being quality-driven, recurring styles, and conceptual integrity. Software architecture (SA) is considered to be of the utmost importance to the software development lifecycle (Outi et al., 2009). It is used to represent and communicate the system structure and behavior to all of its stakeholders with their various concerns. SA helps stakeholders understand design decisions and rationale, further promoting reuse and efficient evolution. One of the major issues in software systems development today is systematic SA restructuring to accommodate new requirements arising from new market opportunities, technologies, platforms and frameworks.
Another major issue in software systems development today is quality (Frigo and Steven, 1998). The idea of predicting the quality of a software product from a higher-level design description is not a new one. During recent years, the notion of software architecture has emerged as the appropriate level for dealing with software quality (Rasool and Nadim, 2007). This is because the scientific and industrial communities have recognized that software architecture sets the boundaries for the software qualities of the resulting system. The aim of analyzing the architecture of a software system is to predict the quality of the system before it has been built, not to establish precise estimates but the principal effects of the architecture (Abdelmoez et al., 2009). Designing an architecture so that it achieves its quality attribute requirements is one of the most demanding tasks an architect faces (Taylor et al., 2009). It is demanding for several reasons, including lack of specificity in the requirements, a shortage of documented knowledge on how to design for particular quality attributes, and the trade-offs involved in achieving quality attributes (Outi et al., 2009). It would be desirable to have a method that guides the architect so that any design produced by the method will reliably meet its quality attribute requirements.
Literature review
Software architecture provides solutions by which technical and operational problems can be resolved more easily. Several relevant studies are summarized below. Pradip Peter Dey (2011) presented a notion of strongly adequate software architecture, defined along with some other software quality attributes, which contributed to formative assessments of software architecture. The architectural categories were not constrained by a particular programming language or domain. Software engineers have strived for strongly adequate software architecture; however, software architecting is an iterative process, and formative assessments guide architects in improving the qualitative aspects iteratively. The categories proposed in that paper were intended to help reviewers in formative assessments. The role of formative assessments was stressed during the development process in order to produce revised architectures from initial or in-progress work. Outi et al. (2009) proposed an approach that used simulated annealing (SA) in software architecture design. A responsibility dependency graph was given as input, and architecture styles and design patterns were used as transformations when searching for a better solution in the neighbourhood. The solution was analysed with regard to quality and effectiveness. The experimental results showed that although extremely high quality values were achieved with the approach, their "true" quality, as evaluated by examining the Unified Modeling Language (UML) class diagrams, was not actually as good. However, when combining the solution achieved with SA with a genetic algorithm (GA) implementation, the actual quality of the produced solutions increased along with the calculated metric values. The paper suggested that further work should study the combination of these two algorithms in software architecture design, as well as the definition of evaluation functions for simulated annealing and the genetic algorithm, since using the same function apparently gives quite different types of solutions for the different algorithms. Their future work addresses these questions as well as deriving real test cases to further evaluate the approach and adding more design patterns to cover a larger search space of possible architectures. They planned to implement a multi-objective fitness function, primarily for the GA implementation. Abdelmoez et al. (2009) presented a paper in which a Software Architecture Risk Assessment (SARA) tool was designed and implemented for computing and analyzing architectural-level risk factors such as maintainability-based risk, reliability-based risk and requirement-based risk. By combining data acquired from domain experts with measures obtained from Unified Modeling Language (UML) artifacts, the SARA tool is used in the design phase of the software development process to improve the quality of the software product and identify critical components that have high risk levels. They used the product line architecture (PLA) of a microwave oven to demonstrate the usage of the SARA tool in assessing a PLA. The modified version of the microwave model was aggregated to consist of 9 sequence diagrams and two class diagrams. There were a total of 14 optional and variant classes. From the product line architecture, a total of 96 validated product members were generated with the instantiation process.
Ampatzoglou et al. (2011) suggested a methodology for exploring designs in which design patterns have been implemented, through the mathematical formulation of the relation between design pattern characteristics and well-known metrics, and the identification of thresholds for which one design becomes preferable to another. The approach assists goal-oriented decision making, since every design problem demands a specific solution according to its particular needs with respect to quality and expected size. Their methodology has been used for comparing the quality of systems with and without patterns during their maintenance. Thus, three examples that employ design patterns were developed, accompanied by alternative designs that solve the same problem. All systems were extended with respect to their most common axes of change, and eleven metric scores were calculated as functions of extended functionality. The results of the analysis identified eight cut-off points concerning the Bridge pattern, three cut-off points concerning Abstract Factory and 29 cut-off points concerning Visitor. In addition, a tool that calculates the metric scores was developed.
Christian and Mila (2011) described how component-based systems with multiway cooperation can be analysed on the basis of an architectural constraint that goes beyond common acyclicity requirements. The analysis concerned the property of deadlock-freedom of interaction systems and gave a polynomial-time checkable condition that ensures deadlock-freedom by exploiting a restriction of the architecture called disjoint circular wait freedom. Roughly speaking, the architectural constraint disallows any circular waiting situation among the components in which the reason for one wait is independent from any other. On the other hand, if their approach fails, the information provided by the entry interactions gives a hint of which components are involved in a potential deadlock.
With this information, a software engineer can take a closer look at the potentially small set of components and either resolve the reason manually or encapsulate the set in a new composite component that has equivalent behaviour, has been verified deadlock-free with another technique, and causes no problems in the remaining system. Their approach can be used as a design pattern to ensure that a system is correct by construction. If a software engineer sticks to the composition rule imposed by their architectural constraint, a subsequent application of their condition after each composition step facilitates a correct system design in an automatic and convenient way. They concluded the paper with an overview of the current state of affairs in their work on interaction systems. From their research perspective, they followed ideas that ultimately allow for correctness by construction, developing and investigating design patterns or architectural constraints that are amenable to the formulation of efficiently checkable conditions for the properties in question. Germán et al. (2010) presented a paper in which the SAME tool computes the similarity between cases by considering the particular dimensions of connector catalogues. The attributes and values for these dimensions depend upon the overall design context, the application domain and the designer's perspective of the problem. As a consequence, the results of the similarity function are biased. So far, they have taken a simple approach based on the structural characteristics of components playing similar roles when attached to connectors. However, a stronger compatibility check requires the components to also be equivalent from a behavioural point of view. A related drawback is the lack of behavioural modelling in the C&C architectural specifications. In the current SAME implementation, the designer gives details about the way components behave when interacting with each other. The proposed method prevents the adaptation of object-oriented solutions to generate behavioural diagrams, such as sequence diagrams, that would provide a more complete picture of the object-oriented implementation to the designer. The behavioural aspect of materializations is a topic for future work. Although SAME provides an editor for the creation of materialization experiences, the specification of the interaction models is still a highly manual task. To overcome this situation, they plan to extend the SAME Eclipse plugin, which provides a user-friendly interface supporting the construction of interaction models for the materialization experiences.
PROBLEM DEFINITION
At present, software architecture is a major issue in any software organization that develops software for a particular organization or firm. Many factors affect the software development life cycle, and several points must be kept in mind to develop effective software in reasonable time and at reasonable cost. Here we describe the following issues, which have to be addressed at the time of the software design phase (Hofmeister et al., 1999): What is the most essential thing for a software development industry to do to get the most out of its software architects and obtain software architectures of the highest quality? What should the steps be to measure this capability? What would a "theory of software architecture competence" look like? What organizational practices are presently at work to enhance this capability?
SOA framework
The desire for enterprise systems that have flexible architectures and detailed designs, are implementation agnostic and operate efficiently continues to grow. A major effort towards satisfying this need is to use SOA. Moreover, there is new research and development aimed at more demanding capabilities (for example, workflow service composition with run-time adaptation to changing quality-of-service attributes) that have been proposed for service-based systems, especially in the context of the overall system. A basic concept of SOA is to enable the creation of services that can be automatically composed to deliver the desired system dynamics while satisfying multiple quality-of-service attributes, as shown in Figure 1. A fundamental SOA concept is to enable flexible composition of independent services in a simple way. This simplicity is crucial since it separates the details of how a service is created from how it may be used. This kind of modularity is defined based on the concept of brokers and its realization as the broker service. The SOA conceptual framework lends itself to the separation of concerns ranging from application domains (for example, business logic) to the Information Technology (IT) infrastructure, including the choice of programming languages and operating systems. Interoperability at the level of services means loose coupling of reusable services. The high-level description of the SOA principles does not account for the operational dynamics of SOA, especially with respect to time-based operations. Therefore, understanding the dynamics of a service-based system using simulation is important. Simulation can also support specific kinds of service-based software systems that are targeted at business processes with specialized domain knowledge.
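To make the broker concept above concrete, the following is a minimal, hedged sketch in Python: a registry through which consumers resolve services by logical name instead of binding to a concrete implementation. All names (ServiceBroker, register, lookup, tax_service) are illustrative assumptions, not part of any particular SOA product or of the framework proposed here.

```python
# Minimal sketch of the broker idea: consumers discover services through a
# broker instead of binding to concrete implementations. All names here are
# illustrative assumptions, not a real SOA product's API.

class ServiceBroker:
    def __init__(self):
        self._registry = {}            # logical service name -> callable endpoint

    def register(self, name, endpoint):
        """A provider publishes a service under a logical name."""
        self._registry[name] = endpoint

    def lookup(self, name):
        """A consumer resolves a service by name, not by implementation."""
        return self._registry[name]

# Provider side: how the service is built stays hidden from consumers.
def tax_service(amount):
    return round(amount * 0.15, 2)

broker = ServiceBroker()
broker.register("tax", tax_service)

# Consumer side: only the contract (name + call signature) is known.
compute_tax = broker.lookup("tax")
print(compute_tax(200.0))              # -> 30.0
```

The point of the sketch is the modularity described above: the provider can swap the implementation behind the registered name at any time, and consumers keep working as long as the contract is honoured.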
SOA resources
Enterprise applications typically require different kinds of interfaces to the data they store and the logic they implement: data loaders, user interfaces, integration gateways and others. Despite their different purposes, these interfaces often need common interactions with the application to access and manipulate its data and invoke its business logic. The interactions may be complex, involving transactions across multiple resources and the coordination of several responses to an action. Encoding the logic of the interactions separately in each interface causes a lot of duplication. As shown in Figure 2, a Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
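As a hedged illustration of the Service Layer idea just described, the sketch below shows one application boundary that several interfaces (a UI, a data loader, a gateway) could reuse instead of each duplicating the interaction logic. The class and method names (OrderServiceLayer, place_order, InMemoryRepository) are invented for the example.

```python
# Sketch of a Service Layer: one boundary operation that coordinates
# validation, persistence and the response, reusable from any interface.
# Names are illustrative assumptions.

class InMemoryRepository:
    def __init__(self):
        self.rows = []

    def save(self, row):
        self.rows.append(row)
        return len(self.rows)          # naive auto-incrementing id

class OrderServiceLayer:
    def __init__(self, repository):
        self.repository = repository   # data access hidden behind the layer

    def place_order(self, customer_id, items):
        """Single operation exposed at the application boundary."""
        if not items:
            raise ValueError("an order needs at least one item")
        order_id = self.repository.save({"customer": customer_id, "items": items})
        return {"order_id": order_id, "status": "accepted"}

service = OrderServiceLayer(InMemoryRepository())
# The same call would be issued by a UI controller, a batch loader or a gateway.
print(service.place_order("C-42", ["book"]))
```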
SOA architectural model
Service-Oriented Architecture (SOA) has been widely promoted by analysts and IT vendors as the architecture capable of addressing the business needs of modern organizations in a cost-effective and timely manner. Perceived SOA benefits include improved flexibility and alignment between business processes and the supporting enterprise applications, lower integration costs (in particular for legacy applications), and numerous other advantages. Although SOA can play an important role in inter-enterprise business-to-business (B2B) applications, it is primarily regarded as an intra-enterprise architecture used for internal integration. SOA adoption was initially driven by the emergence of Web Services and related technologies and the need to provide a more effective, service-oriented enterprise computing architecture. SOA is explored through network drivers used in service-oriented distributed enterprise applications. A service-oriented architecture is, generally, the structure of the components in a program or system, their interrelationships, and the principles and design guidelines that control the design and its evolution in time. In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. A design pattern is not a finished design that can be transformed directly into source or machine code; it is a description or template for how to solve a problem that can be used in many different situations. Patterns are formalized best practices that the programmer must implement in the application's object-oriented design. Patterns typically show relationships and interactions between classes or objects without specifying the final application classes or objects that are involved. Patterns that imply object orientation, or more generally mutable state, are not as applicable in functional programming languages.
The software architect will be responsible for contributing specialized technical knowledge in multiple development efforts using object-oriented analysis and design, Service-Oriented Architecture (SOA) and distributed systems. The principal responsibility will be the design and implementation of an enterprise-class platform to enable application supportability and performance management. SOA is the aggregation of components that satisfy a design need; it comprises components, services and processes. Components are binaries that have a defined interface (usually only one), and a service is a grouping of components (executable programs) to get the job done. This higher level of application development provides a strategic advantage, facilitating more focus on the business requirement. SOA is not a new approach to software design; some of the notions behind SOA have been around for years. A service is generally implemented as a coarse-grained, discoverable software entity that exists as a single instance and interacts with applications and other services through a loosely coupled (often asynchronous), message-based communication model. The most important aspect of SOA is that it separates the service's implementation from its interface. Service consumers view a service simply as a communication endpoint supporting a particular request format or contract, as shown in Figure 3.
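The separation of interface from implementation mentioned above can be illustrated with a small, hedged Python sketch: the consumer depends only on the published contract, never on the concrete service class. QuoteService and DiscountQuoteService are hypothetical names introduced only for this example.

```python
# Sketch of the interface/implementation split: consumers see a contract
# (a request format and an endpoint), not a concrete class. Names are
# illustrative assumptions.
from abc import ABC, abstractmethod

class QuoteService(ABC):                     # the published contract
    @abstractmethod
    def handle(self, request: dict) -> dict: ...

class DiscountQuoteService(QuoteService):    # one interchangeable implementation
    def handle(self, request):
        price = request["list_price"] * 0.9
        return {"quote": round(price, 2)}

def consumer(endpoint: QuoteService):
    # The consumer only knows the endpoint and the request format.
    return endpoint.handle({"list_price": 120.0})

print(consumer(DiscountQuoteService()))      # -> {'quote': 108.0}
```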
A reference architecture is a more concrete artifact used by architects. Unlike the reference model, it can introduce additional details and concepts to provide a more complete picture for those who may implement a particular class. Reference architectures declare details that would be present in all instances of a certain class, much like an abstract class in programming. Each subsequent architecture designed from the reference architecture would be specialized for a specific set of requirements. Reference architectures often introduce concepts such as cardinality, structure, infrastructure, and other types of binary relationship details. Accordingly, reference models do not have service providers and consumers. If they did, then a reference model would have infrastructure (between the two concrete entities) and it would no longer be a model. The reference model and the reference architecture are intended to be part of a set of guiding artifacts that are used with patterns. Architects can use these artifacts in conjunction with others to compose their own SOA. The concepts and relationships defined by the reference model are intended to be the basis for describing reference architectures that will define more specific categories of SOA designs. Specifically, these specialized architectures will enable solution patterns that solve a particular problem. Concrete architectures may be developed based upon a combination of reference architectures, architectural patterns and additional requirements, including those imposed by technology environments. Architecture is not done in isolation; it must account for the goals, motivation, and requirements that define the actual problems being addressed. While reference architectures can form the basis of classes of solutions, concrete architectures will define specific solution approaches.
Visibility and real-world effect are also key concepts for SOA. Visibility is the capacity for those with needs and those with capabilities to see and interact with each other. This is typically implemented by using a common set of protocols, standards, and technologies across service providers and service consumers. For consumers to determine whether they can interact with a specific service, service descriptions provide declarations of aspects such as functions and technical requirements, related constraints and policies, and mechanisms for access or response. The descriptions must be in a form (or be transformable to a form) in which their syntax and semantics are widely accessible and understandable. The execution context is the set of specific circumstances surrounding any given interaction with a service and may affect how the service is invoked. Since SOA permits service providers and consumers to interact, it also provides a decision point for any policies and contracts that may be in force. The purpose of using a capability is to realize one or more real-world effects. At its core, an interaction is "an act" as opposed to "an object", and the result of an interaction is an effect (or a set or series of effects). Real-world effects are then couched in terms of changes to this shared state. This may specifically mutate the shared state of data in multiple places within an enterprise and beyond.
The concept of policy must also be applicable to data represented as documents, and policies must persist to protect this data far beyond enterprise walls. This requirement is a logical evolution of the "locked file cabinet" model, which has failed many IT organizations in recent years. Policies must be able to persist with the data that is involved with services, wherever the data persists. A contract is formed when at least one party to a service-oriented interaction adheres to the policies of another. Service contracts may be either short lived or long lived.
Contribution of the paper
Software architecture is a central concern for improving current industry practice in producing quality software at reasonable time and cost. This paper examines some of the essential issues that play an important role in software architecture design and explores five phases in an organization through which the most essential practices can be established. These practices draw on models of industry and human behaviour that bear on software architecture design and are intended to help organizations enhance the architectural capability of both individuals and the organization as a whole.
Phase I: Analyze duties, skills and knowledge. We analyze the work of individuals: which skills each person has and how much knowledge he or she possesses. Knowledge is divided into domain-specific and technology-specific knowledge, and we consider in what manner coordination can be provided, whether for one team or for several teams. The main concern is generating an inter-team coordination model for a firm developing a single product or a closely related set of products. Phase V: In this phase tasks are managed using a neural network. A group of tasks is handled by the neural network and the main task is executed; the network selects the best task among the group. An organization has a number of tasks to perform, but the main concern is to know which task should be executed first. Choosing the best task according to environmental factors and the availability of employees is best practice in the real world. Software architecture is the set of significant decisions about the organization's software, including security, task management, maintainability, performance, resilience, reuse and usability, and our main aim is to enhance these constraints in a proper way. In any organization many tasks will be pending, and here we assign a priority weightage to each task, as sketched below. In the case of a neural network (NN) based task scheduler, once the job parameters are properly trained for a particular schedule, it will never miss that scheduling pattern for that particular task.
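The task-prioritisation idea of Phase V can be sketched, in a deliberately simplified form, as a weighted ranking over pending tasks. This is not the neural-network scheduler itself; the task names and weights below are made-up illustrations of how priority weightage could drive selection.

```python
# Simplified stand-in for the Phase V idea: each pending task carries a
# priority weight and the scheduler picks the highest-weighted one.
# Task names and weights are invented for illustration only.

tasks = {
    "security_review":    0.9,
    "performance_tuning": 0.7,
    "ui_polish":          0.4,
}

def next_task(weights):
    """Return the pending task with the highest priority weight."""
    return max(weights, key=weights.get)

print(next_task(tasks))   # -> 'security_review'
```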
CONCLUSION
This paper proposed a new, intelligence-supported service-oriented architecture paradigm that connects system quality with software architectural models from which precise information can be extracted. Our scheme has been shown to yield software designs with quality in the standard model. A systematic complexity analysis and extensive experiments show that the proposal is also efficient in terms of computation and of the design of the network used to describe different varieties of messages in SOA. These features make the service and network analysis framework a promising solution for group service-oriented communication with access control in various types of design.
INTRODUCTION
"Self-assessed", "self-reported" or "self-rated health" questions, such as "How would you rate your current health status: would you say that it is very good, good, moderate/fair or poor/bad?", are among the most commonly used measures of subjective evaluation of health status. Past studies have found this type of question to be a useful global measure of health (Zimmer et al., 2000).
The health status is usually classified as very good, good, moderate and poor/bad. When researchers are interested in finding the determinants of self-reported health status, usually two separate binary logistic regression models have to be developed by grouping the response variable into two categories. This task is tedious and cumbersome due to the estimation and interpretation of more parameters.
In many epidemiological and medical studies, an ordinal logistic regression model is frequently used when the response variable is ordinal in nature. This study has made an effort to identify the predictors of the health status of adolescents using an ordinal logistic regression model and a multinomial logit model, and to select the appropriate model among them.
The aim of the study is to compare the efficiency of multinomial logistic regression models and ordinal logistic regression models, as well as to identify the significant predictors affecting self-reported health status of adolescents.
Baseline category logit (BCL) model
Even if the response is ordinal, we do not necessarily have to take the ordering into account. One category is arbitrarily chosen as the reference category. If it is the first category, then the logits for the other categories are defined by
log(πij / πi1) = αj + xi' βj, j = 2, ..., J.
Often it is easier to interpret the effects of explanatory factors in terms of odds ratios than in terms of the parameters β. The odds ratio for exposure for response j (j = 2, ..., J) relative to the reference category j = 1 is
ORj = (πjp / π1p) / (πja / π1a),
where πjp and πja denote the probabilities of response category j (j = 1, ..., J) according to whether exposure is present or absent, respectively.
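As a hedged illustration of fitting a baseline-category logit, the following Python sketch uses statsmodels' MNLogit on synthetic data (not the adolescent health data analysed here); the covariate coding is an assumption made only for the example.

```python
# Sketch: baseline-category (multinomial) logit on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
sex = rng.integers(0, 2, n)               # 0 = female, 1 = male (assumed coding)
X = sm.add_constant(sex.astype(float))
# Ordinal outcome coded 0, 1, 2; category 0 acts as the baseline.
y = rng.choice([0, 1, 2], size=n, p=[0.07, 0.12, 0.81])

fit = sm.MNLogit(y, X).fit(disp=False)
print(fit.params)                         # one logit per non-baseline category
print(np.exp(fit.params))                 # odds ratios relative to the baseline
```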
Cumulative link models (CLM)
A cumulative link model is a model for an ordinal response variable yi that can fall in j = 1, ..., J categories. Then yi follows a multinomial distribution with parameter πij, where πij denotes the probability that the i-th observation falls in the j-th response category. We define the cumulative probabilities as
γij = P(yi ≤ j) = πi1 + ... + πij,
and the cumulative logits as
logit(γij) = θj + xi' β,
where θj is the cut point (intercept) for each logit and β is the vector of slopes. The CLM was originally proposed by Walker and Duncan (1967) and later called the proportional odds model by McCullagh (1980). The cumulative logits are also defined by Agresti (2002, 2007).
The odds ratio of the event y ≤ j at x1 relative to the same event at x2 is
[γj(x1) / (1 − γj(x1))] / [γj(x2) / (1 − γj(x2))] = exp[(x1 − x2)' β],
which is independent of j. Thus the log cumulative odds ratio is proportional to the distance between x1 and x2, which led McCullagh (1980) to call this the proportional odds model (POM).
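A proportional odds (cumulative link) fit can be sketched with statsmodels' OrderedModel, again on simulated data; the variable names and the generating coefficients below are illustrative assumptions, not the study's estimates.

```python
# Sketch: cumulative link (proportional odds) model on simulated data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "sex_male":  rng.integers(0, 2, n),
    "tap_water": rng.integers(0, 2, n),
})
# Ordered response generated from a latent logistic variable (assumed effects).
latent = -0.5 * df["sex_male"] - 0.35 * df["tap_water"] + rng.logistic(size=n)
df["health"] = pd.cut(latent, [-np.inf, -1.0, 0.5, np.inf],
                      labels=["poor/moderate", "good", "very good"])

model = OrderedModel(df["health"], df[["sex_male", "tap_water"]], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())          # one slope per covariate plus J-1 cut points
```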
Adjacent Categories Model (ACL)
Another general model for ordered categorical data is the adjacent categories model. As before, we let πij be the probability that individual i falls into category j of the dependent variable, and we assume that the categories are ordered in the sequence j = 1, ..., J. Now take any pair of adjacent categories, such as j and j + 1. We can write a logit model for the contrast between these two categories as a function of explanatory variables:
log(πij / πi(j+1)) = αj + xi' β,
where πij is the probability that the i-th adolescent falls in the j-th health-rating category and πi(j+1) is the probability that the i-th adolescent falls in the (j + 1)-th category.
Continuation Ratio Model (CRM)
Feinberg (1980) proposed an alternative method to the POM for the analysis of categorical data with ordered responses. The continuation ratio model can be formulated as
log[P(y = yj | x) / P(y > yj | x)] = θj + x' β,
which can essentially be viewed as the log of the ratio of the two conditional probabilities, P(y = yj | x) and P(y > yj | x). The continuation odds ratio for the k-th covariate xk can be obtained directly from its model.
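One practical way to fit a continuation ratio model is as a sequence of binary logits, each conditioning on still being "at risk" of the current category. The sketch below uses synthetic data and is only meant to illustrate that construction.

```python
# Sketch: continuation-ratio model fitted as conditional binary logits.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600
x = rng.normal(size=n)
y = rng.choice([0, 1, 2], size=n, p=[0.2, 0.3, 0.5])   # ordered categories

for j in (0, 1):                        # the last category needs no logit
    at_risk = y >= j                    # condition on reaching category j
    stop = (y[at_risk] == j).astype(int)        # models P(y = j | y >= j)
    X = sm.add_constant(x[at_risk])
    res = sm.Logit(stop, X).fit(disp=False)
    print(f"continuation logit for category {j}:", res.params)
```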
The proportional odds assumption
By proceeding with the model given by logit(γij) = θj + xi' β, the assumption is that the covariate effects are invariant to the cut points, thus implying proportionality of the odds ratios. The proportional odds model can be considered as a series of J − 1 binary logits where the β's are constrained across the models such that β1 = β2 = ... = βJ−1 = β.
Goodness of fit and deviance
The goodness of fit, or calibration, of a model measures how well the model describes the response variable. Assessing goodness of fit involves investigating how close the values predicted by the model are to the observed values. The goodness-of-fit χ2 procedure evaluates predictors that are eliminated from the full model, or predictors that are added to a smaller model. The question in comparing models is whether the log-likelihood decreases or increases significantly with the addition or elimination of predictor(s) in the model.
A more general measure, called the deviance, is defined for generalized linear models and contingency tables. The deviance is closely related to sums of squares for linear models (McCullagh and Nelder, 1989; Nelder and Wedderburn, 1972). It is defined as minus twice the difference between the log-likelihoods of a full (or saturated) model and a reduced model: D = −2(ℓreduced − ℓfull). The full model has a parameter for each observation and describes the data perfectly, while the reduced model provides a more concise description of the data with fewer parameters. A special reduced model is the null model, which describes no other structure in the data than what is implied by the design. The corresponding deviance is known as the null deviance and is analogous to the total sums of squares for linear models; it is therefore also called the total deviance. The residual deviance is a concept similar to residual sums of squares and is simply defined as Dresid = Dtotal − Dreduced. A difference in deviance between two nested models is identical to the likelihood ratio statistic for the comparison of these models. Thus, the deviance difference, just like the likelihood ratio statistic, asymptotically follows a χ2-distribution with degrees of freedom equal to the difference in the number of parameters in the two models.
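A small sketch of the deviance-based comparison described above: given the log-likelihoods of a reduced and a full model, the deviance difference is the likelihood ratio statistic and is referred to a χ2 distribution. The log-likelihood values used here are made up purely for illustration.

```python
# Sketch: deviance difference = likelihood ratio statistic, referred to chi-square.
from scipy.stats import chi2

ll_full, ll_reduced = -1233.0, -1238.5     # illustrative values only
df_diff = 3                                 # extra parameters in the full model

lr = -2 * (ll_reduced - ll_full)            # deviance difference
p_value = chi2.sf(lr, df_diff)
print(f"LR = {lr:.1f}, p = {p_value:.3f}")
```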
Model comparison with likelihood ratio tests
Model selection includes the choice of the type of model and variable selection within a model type. In this framework, the parameter estimation method with numerical integration has the advantage of being based on likelihood statistics. Thus, models can be ordered according to likelihood-based measures such as Akaike's information criterion or Schwarz's Bayesian criterion (which judge a model by how close its fitted values tend to be to the true expected values, as summarized by a certain expected distance between the two). In selecting a model, we should not think that we have found the "correct" one; any model is a simplification of reality. However, a simple model that fits adequately has the advantage of parsimony. If a model has relatively little bias and describes reality well, it provides good estimates of outcome probabilities and of the odds ratios that describe effects of the predictors. A general way to compare models is by means of the likelihood ratio statistic. Consider two models, m0 and m1, where m0 is a sub-model of m1, that is, m0 is simpler than m1 and is nested in m1. The likelihood ratio statistic for the comparison of m0 and m1 is LR = −2(ℓ0 − ℓ1), where ℓ0 is the log-likelihood of m0 and ℓ1 is the log-likelihood of m1. The likelihood ratio statistic measures the evidence in the data for the extra complexity in m1 relative to m0, and asymptotically follows a χ2-distribution with degrees of freedom equal to the difference in the number of parameters of the two models.
Results
The data comprise 2084 adolescents aged 13 to 17 years who were interviewed to study the health of adolescents in Jimma zone, south-west Ethiopia. The adolescents' responses were recorded on four ordinal scales (poor, moderate, good and very good), but counts for the responses "poor" and "moderate" were amalgamated into one category, "poor/moderate", due to sparse cell counts (poor, 1.1% and moderate, 5.6%). Of the 2084 adolescents, 81.2% had very good health status, 12.1% had good health status, and 6.7% had poor/moderate health status.
The significant variables in the BCL model (using the R package MASS, R function stepAIC) are used to determine a model with the minimum possible AIC (Akaike information criterion). Accordingly, sex, source of water and educational status are the variables selected to yield the minimum possible AIC among all the combinations. So we fit the BCL model consisting of the variables that yield the minimum AIC, as the lowest AIC indicates the better fit (Table 1).
The maximum value of the log-likelihood function for the fitted model is -1241.5, giving the likelihood ratio chi-squared statistic 2(-1241.4 + 1260.5) = 38.1. The statistic, which has 8 degrees of freedom (10 parameters in the fitted model minus 2 for the minimal model), is significant compared with the X2(8) distribution (p-value < 0.0001), showing the overall significance of the model. That means the null hypothesis that all slope parameters are zero is rejected (at least one coefficient is different from zero). The AIC value for the above BCL model is 2503 = -2(-1241.5) + 2(10).
A difference in deviance between two nested models (Table 1) is identical to the likelihood ratio statistic for the comparison of these models (Holtbrugge and Schumacher, 1991). The deviance of the additive model which includes all covariates is 2467.8, and the deviance of the model which includes only the three predictors (that is, sex, source of water and education) is 2482.997. Therefore the likelihood ratio statistic, 15.2 = (2482.997 - 2467.8), asymptotically follows a χ2-distribution with degrees of freedom equal to the difference in the number of parameters of the two models, 22 - 10 = 12 (that is, X2(12)). Since the likelihood ratio statistic shows a p-value of 0.23, we fail to reject the null hypothesis H0: βp = βc = βg = βw = 0 (the slope coefficients for place, cooking place, workload and garbage disposal are zero). Therefore, the model which includes only the three predictors (that is, sex, source of water and education) is better than the model which includes all covariates.
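The arithmetic reported above can be reproduced directly from the two deviances and the difference in the number of parameters; the short check below uses only the figures quoted in the text.

```python
# Reproducing the reported comparison: deviances 2482.997 and 2467.8, 12 extra
# parameters in the larger model, giving LR of about 15.2 and p of about 0.23.
from scipy.stats import chi2

deviance_reduced, deviance_full = 2482.997, 2467.8
lr = deviance_reduced - deviance_full        # difference in deviance
p = chi2.sf(lr, df=12)
print(round(lr, 1), round(p, 2))             # -> 15.2 0.23
```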
Likewise, when we compare the BCL models of Table 1 and Table 2, we obtain a likelihood ratio of 3.97 with 2 degrees of freedom, having a p-value of 0.14. This also implies that the model which consists of the three predictors is better than the model having one additional predictor (Table 1). Besides, it has the minimum AIC, so the model including only the three predictors is the parsimonious BCL model. When we check the proportionality assumption of the CLM, after obtaining the possible combinations of covariates which reduce the AIC value, the score test for the proportional odds assumption is 4.3, which follows a χ2-distribution with 4 = 4 × (3 - 2) degrees of freedom, that is X2(4) = 9.49, with a p-value of 0.37. This implies that the proportional odds assumption is satisfied. The likelihood ratio tests of the ACL model and the CRM for checking the proportional odds assumption are 6.27 and 4.4, with p-values of 0.18 and 0.36, respectively. Therefore, there is no evidence against the proportional odds assumption. Hence, the proportionality assumption holds for both models, so we do not need to fit the non-proportional odds model.
The second column of estimates in Table 2, for example, gives the log-odds of responding in category 1 ("poor/moderate") versus the other categories ("good" and "very good"), and the log-odds of responding in categories 1 and 2 ("poor/moderate" and "good") versus category 3 ("very good"). The estimate of the ACL model gives the log-odds of responding in category 1 ("poor/moderate") versus category 2 ("good") and in category 2 ("good") versus category 3 ("very good"). The estimate of the CRM gives the log-odds of adolescents falling in one category of health status given the other, better, health status categories. Since the sign of the coefficient of a predictor is the same for all ordinal logistic regression models (Table 2), they have similar interpretations. For instance, the estimate for sex is -0.51, -0.31 and -0.49 for the POM, ACL and CRM, respectively. So the odds ratios of male adolescents for all models are less than one, implying that males have slightly better health than females.
The log-likelihood for the CLM is -1243.6, giving the likelihood ratio chi-squared statistic 2(-1243.6 + 1260.515) = 33.9. The statistic, which has 4 degrees of freedom (6 parameters in the fitted model minus 2 for the minimal model), is significant compared with the X2(4) distribution (p-value < 0.0001), showing the overall significance of the model. That means at least one coefficient is different from zero. For the ACL model and the CRM, the likelihood ratios are 31.76 and 33.68 with X2(4), respectively (p-value < 0.0001 for both models), also showing the overall significance of the models. The likelihood ratio statistic of the POM for the two nested models, that is, the fitted models in Tables 1 and 2, is 1.58, which asymptotically follows a χ2-distribution with 7 - 6 = 1 degree of freedom (that is, X2(1)). Since the likelihood ratio statistic shows a p-value of 0.21, we fail to reject the null hypothesis H0: βw = 0 (the coefficient of workload). Therefore a model which excludes this variable is preferable to a model which includes it. Likewise, the likelihood ratio statistics of the two nested models for the ACL model and the CRM are 0.57 and 1.68, respectively, each following a χ2-distribution with 7 - 6 = 1 degree of freedom (that is, X2(1)). These statistics show p-values of 0.45 and 0.20; just as for the POM, we fail to reject the null hypothesis H0: βw = 0 for the ACL model and the CRM. Generally, a model fitted using the three predictors (that is, sex, source of water and education) is better than a model fitted using the four univariately significant predictors, or a model including all predictors, for the CLM, ACL model and CRM, respectively.
Having the maximum likelihood value for each model, it is possible to obtain their AIC values. Accordingly, the AIC value of the POM is 2499.2, the AIC of the ACL model is 2501.3 and the AIC of the CRM is 2499.4.
Comparison of models
We used the likelihood ratio test to compare nested models, whereas AIC is used to compare non-nested models. We compared all models using the statistical criteria of log-likelihood, goodness of fit and AIC, although the choice of model should depend less on goodness of fit.
The ACL model corresponds to a BCL model, and one can fit the ACL by fitting the equivalent BCL model, but the construction of the ACL model recognizes the ordering of the Y categories. To benefit from this, model parsimony requires appropriate specification of the linear predictor. Since an explanatory variable has a similar effect for each logit, advantages accrue from having a single parameter instead of 2 = (3 - 1) parameters describing that effect. When used with this proportional odds form, the ACL model fits well. Besides, it has a smaller AIC value than the BCL model.
Usually, the fit of the CLM and the CRM is similar for many data sets, and here also the fit of the two models is almost identical. When we examine the AIC values of the two models, the AIC of the CLM is slightly smaller than that of the CRM; the CLM also has slightly higher goodness of fit (p = 0.79) than the CRM (p = 0.78), and proportionality is satisfied better than in the other models, as its p-value is the largest of all. Besides this, the CRM is not invariant under an amalgamation of adjacent categories; for this reason, the CRM is suitable in circumstances where the individual categories of the response are intrinsically of interest. So, the CLM is better than the CRM for this data set.
Among the selected models, the CLM fits this data set well. It is also better than the ACL model, since it has the minimum AIC value and the goodness of fit for the CLM has a larger p-value (0.79) than for the ACL model (0.65). Therefore, the POM is the parsimonious model, because it satisfies the proportionality assumption, has fewer parameters than the BCL model, shows model adequacy, has better goodness of fit and has a smaller AIC value than the other models.
Generally, ordinal logistic regression model is better than nominal logistic regression model for this data set.
The final appropriate model is the CLM, which has two logits that differ only in their cut-point values because the proportional odds assumption is fulfilled for this data set. The effects of the explanatory variables are the same across the logit functions:
logit(γij) = θj − 0.51 sexmale − 0.35 swatertap − 0.57 educprimary − 0.28 educsecondary,
where sexmale = male adolescents, swatertap = tap or protected source of water, educprimary = primary education and educsecondary = secondary education; i = 1, 2, ..., 2084 and j = 1, 2 (Figure 1).
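To show how the fitted CLM above translates into predicted category probabilities, the sketch below uses the reported slope estimates; the two cut points θ1 and θ2 are placeholders, since their fitted values are not quoted in the text.

```python
# Turning the fitted CLM into predicted probabilities for one covariate pattern.
# Slopes come from the text; the cut points are hypothetical placeholders.
import numpy as np

def cumulative_probs(theta, beta, x):
    """gamma_j = P(y <= j) under logit(gamma_j) = theta_j + x' beta."""
    eta = np.asarray(theta) + x @ beta
    return 1.0 / (1.0 + np.exp(-eta))

beta = np.array([-0.51, -0.35, -0.57, -0.28])   # sex_male, tap water, primary, secondary
theta = np.array([-2.5, -1.5])                  # hypothetical cut points
x = np.array([1, 1, 0, 1])                      # male, tap water, secondary education

g = cumulative_probs(theta, beta, x)            # P(y<=1), P(y<=2)
probs = np.diff(np.concatenate(([0.0], g, [1.0])))
print(dict(zip(["poor/moderate", "good", "very good"], probs.round(3))))
```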
DISCUSSION
The POM and the CRM are the most widely used models in epidemiological and biomedical applications (Ananth and Kleinbaum, 1997), while other models for the analysis of ordinal outcomes have received less attention. This is because both models may be interpreted in terms of odds ratios (familiar to epidemiologists), the basic underlying assumption of each model (equality of the β's) is the same, and the statistical models may be plausible biologically. Armstrong and Sloan (1989) reported that usually the CLM and the CRM are similar for many data sets; here also the fit of the two models is almost identical. The POM can be viewed as a model nested within the unconstrained partial proportional odds model (PPOM), and according to the deviance, the unconstrained PPOM is better than the POM when it has the smallest p-value (p < 0.05) and the proportionality assumption is violated (Ananth and Kleinbaum, 1997; Peterson and Harrell, 1990). In this study, however, the POM is the selected model, as the assumption is satisfied and it has the minimum AIC. Usually BCL models are better than ordinal logistic regression models when the proportional odds assumption is violated; in such cases the BCL model can be treated as an alternative to the ordinal logistic regression model.
According to this study, CLM was found to be the better model than other models as it had minimum AIC, satisfied the proportional assumption and had better goodness of fit.Besides AIC, an intuitive choice between CLM and CRM can also be based on the goals of statistical analysis.
This finding is consistent with the results of other studies.For example; educational attainment was significantly associated with self-rated health, in the expected directions and females were slightly more likely than males to report fair or poor self-rated health (Veenstra, 2011).
Conclusion
Ordinal logistic regression models were better than the nominal logistic regression model. Among the ordinal logistic regression models, the CLM or proportional odds model was an improved fit as compared to the rest of the models for any combination of variables in the data set. We also found that sex, source of drinking water and educational status of the adolescents had a significant effect on their health, as they were the combination of variables yielding the minimum AIC in the CLM. Being literate and using tap or protected water contributed positively to a better health status of teenagers, whereas high workload, which was univariately significant, had a deteriorating impact on the state of health, and boys were less likely than girls to report a deteriorating state of health.
(Lee and Lockheed, 1990; Omoifo, 1996, 2004; Tambo et al., 2011), such as Zimbabwe and Nigeria. Despite efforts to engender equity, disparity seems to persist in gender participation and achievement in science and science-related jobs and professions, especially in developing nations like Nigeria. While indicators have shown a gradual reduction in the gender gap in educational access at the primary school level in Nigeria (Okogwu, 2009), the same cannot be said of science achievement (Zembar and Blume, 2011), scientific reasoning skill acquisition (Lawson, 2002; Musheno and Lawson, 1999), entry into science-related jobs and professions, and exhibition of scientific attitude. Studies of the effect of learning environment on students' performance in learning outcomes (Hopkin, 2001; Mallam, 1993) revealed significant differences in the achievement of single-sex and co-educational students. Young and Fraser (1994) had earlier stated that most differences in learning outcome previously attributed to gender were actually due to school type. Efforts have also been made in research to identify factors or instructional elements that moderate how girls and boys learn science (Moemeke and Omoifo, 2008). That work shows that girls tend to benefit from curriculum models that emphasize inquiry, hands-on, hypothetico-predictive enquiries and those with visual information prompts (Moemeke, 1999), in the same way that such models benefit low-ability learners. In the middle of the 1990s there was a tilt in research opinion towards co-education. Dale (1969) proposed substantial benefits in educating boys and girls in co-educational setups; top of his reasons is that it provides an avenue for equal opportunities for both sexes. He argued that there was no evidence that co-education has a negative effect on the education of girls. However, research by the American Association of University Women in the 1990s (Elwood and Gripps, 1999) and Shaw (1995) called for a rethinking of issues in girls' education. They reported that girls in single-sex schools tend to achieve higher in Science and Mathematics even when their laboratories were less well equipped and their teachers less qualified than in boys' schools. Different researchers have given reasons for the higher achievement of girls in single-sex girls' schools, including: self-concept, the total belief that people have about their competence and ability, is higher in girls from single-sex schools than in girls from co-education (Kassin et al., 2008; Tully and Jacob, 2010); other people's perception affects people's self-concept, and teachers' communication, attitude and expectation of girls in single-sex schools are higher than for girls in co-education (Tambo et al., 2011; Kassin et al., 2008); there is a reduced possibility of sex-role stereotyping in single-sex girls' schools compared to co-educational schools, where there is a possibly high level of fear of success and of assuming leadership roles among girls (Lee and Lockheed, 1990); and teachers' gender bias exists latently in co-educational classrooms, promoting subject-choice bias in favour of boys' dominance, spitefulness and negative competitiveness (Tambo et al., 2011).
The mediating role of curriculum model in determining achievement of students in schools in the different school types and for different levels of learning outcome is a major focal point of this study.
Statement of the problem
In the recent past, a premium has been placed on the learning environment as a factor in students' achievement in science. Such environments include both the physical and psychological spheres in which learning occurs. An earlier study by Moemeke and Omoifo (2010) implicated school type in this milieu; they showed that students from single-sex schools performed better than their counterparts from co-educational schools in science. Some other studies have implicated high self-esteem among students from single-sex schools as a possible determinant of achievement, since it was found to be higher in single-sex institutions (Cardona, 2011; Lee and Lockheed, 1990). However, in terms of the number of students enrolled for science subjects in schools, students from all-boys' schools were found to opt for science subjects more than girls in all-girls' schools, and the trend was maintained in co-educational schools (Jackson, 2011). Does it then mean that girls are reluctant to opt for science subjects even when taught using the same curriculum model as their male counterparts? The inconclusiveness of the research evidence calls for further studies on the effect of school type on achievement in schools. It is likely that other variables such as curriculum model, intelligence, school location and ethos, as well as school management practices, interfere with results, hence the variation. This study therefore asks: to what extent do curriculum model and school type influence male and female students' achievement in science? Is there any learning outcome that is especially favoured by single-sex or co-educational schools? Is there any interaction effect of curriculum model and school type on students' learning outcome? It is hoped that this study will clear the air on these aspects of science learning research and give evidence-based information for further action towards gender equity in science classrooms, as well as towards achievement of the Education for All (EFA) and Millennium Development Goals (MDGs) elements concerning gender and the goals of science and technology education.
Research hypotheses
To enable this investigation, the following null hypotheses were stated.
1. There is no significant difference in the learning outcome of science students from single-sex and co-educational schools based on curriculum model.
2. There is no significant difference in the learning outcome of girls from all-girls schools and their counterparts from co-educational schools.
3. There is no significant difference in the learning outcome of boys from all-boys schools and their counterparts from co-educational schools.
4. There is no significant interaction effect of curriculum model and school type on learning outcome in science.
METHODOLOGY
The independent variables in the study are curriculum model with three levels (descriptive learning cycle, hypothetico-predictive learning cycle and expository approach), school type with three levels (all-boys, all-girls and co-educational) and sex with two levels (male and female). The dependent variable is learning outcome at three levels (scientific reasoning skills, attitude towards science and achievement in science). The descriptive learning cycle (DLC), as proposed by Karplus and Thier (1967), is a sequenced approach which begins with exploration, during which the learner explores the problem of study so as to raise questions that form a pedestal for the second phase, known as term introduction. In this phase, the teacher clarifies and defines concepts which the exploring students may have come across in the first phase but could not make enough meaning of. The third phase is the concept application phase, during which the teacher and students develop a pattern that enables them to draw links between concepts within and across disciplines. The HPLC follows the same pattern as the DLC, except that there is a conscious effort to lead the learners into two important process skills of science, hypothesizing and predicting, which enables them to raise their own hypotheses, make predictions based on their perceived evidence and, by so doing, reveal their misconceptions or alternative conceptions about the problem. This exercise provides a platform for the fruitful exploration that follows. The Traditional Expository Approach (TEA) did not involve the learning cycle model; instead, the approach was teacher-dominated. Students only received facts from the teacher, except for the few questions that arose from the students, which the teacher clarified. Nine intact classes of SS2 students from three secondary schools in the three senatorial zones of Delta State, Nigeria were used for the study. Each intact class selected from a zone received one of three treatment types for a period of two months (8 weeks).
One of the three school types was purposively selected from each zone (all-boys, all-girls and co-educational schools). A non-equivalent pre-test-post-test control group design without randomization was adopted within the quasi-experimental domain.
A total of 210 students, consisting of 94 co-educational, 47 all-girls and 69 all-boys students, participated in the study. Data were collected using three instruments. They are: 1. Test of Scientific Reasoning Skills (TRS), a 10-item test of logical reasoning in Biology, adapted from Lawson (1992). Treatment lasted for 10 weeks. The first and last weeks were used for the administration and collection of pre-test and post-test data respectively. The six selected concepts were taught within the remaining eight weeks. Three types of classroom procedures were drawn for the three curriculum models for the six lessons; thus, a total of eighteen planned lessons were taught (see the sample lesson plan attached as an appendix). All lessons were taught by the lead researcher and her partner to ensure uniformity, while the science teachers in the selected schools acted as research assistants.
Data management and analysis
Data that resulted from the exercise were coded. The curriculum models (TEA, DLC and HPLC) were coded as 1, 2 and 3 respectively. School type, the second independent variable in the study, was coded 1 for co-educational schools, 2 for all-girls schools, and 3 for all-boys schools. Sex was coded 1 for males and 2 for females. The dependent variable, learning outcome, consists of three levels (scientific reasoning skills, achievement in Biology concepts and attitude towards Biology), coded as A, B and C respectively. A one-way repeated-measures analysis of variance (ANOVA) was adopted as the statistical tool in this study, using learning outcome as the repeated measure. Differences found to be significant at the 0.05 alpha level were subjected to post hoc analysis to determine the source of the significance.
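A one-way repeated-measures ANOVA of this kind can be sketched with statsmodels' AnovaRM; the data below are simulated scores for 210 students on the three learning-outcome measures, and AnovaRM itself does not apply the Greenhouse-Geisser correction used later in the analysis.

```python
# Sketch: one-way repeated-measures ANOVA with three within-subject measures.
# Scores are simulated; this is not the study data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
students = np.repeat(np.arange(210), 3)              # 210 students x 3 tests
test = np.tile(["AcT", "AtT", "SrT"], 210)
base = rng.normal(55, 8, 210).repeat(3)              # per-student baseline
shift = np.tile([0.0, 9.0, -6.0], 210)               # AtT highest, SrT lowest
score = base + shift + rng.normal(0, 5, 630)

df = pd.DataFrame({"student": students, "test": test, "score": score})
res = AnovaRM(df, depvar="score", subject="student", within=["test"]).fit()
print(res)                                           # F test for the 'test' factor
```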
ANALYSIS AND RESULTS
Hypothesis 1: There is no significant difference in learning outcome of science students from single sex and coeducational schools based on treatment.
A one-way ANOVA with repeated measures on one factor was conducted to test this hypothesis. Findings show that there was not a statistically significant interaction in learning outcome between test type (AcT, AtT, or SrT) and school type (co-educational or single-sex), F(1.02, 212.65) = 1.94, p = .164. Thus, these results indicate that the difference in learning outcome across the test types holds irrespective of school type. Specifically, learning outcome for science students within co-educational schools differs significantly across test types, and the same trend applies to their counterparts within single-sex schools (Figure 1). This shows that science students' performance across the test types is similar despite the inherent differences in the sex orientation and the academic and administrative structures of the distinct school types (co-educational or single-sex schools). Moreover, the findings in Table 2 also show that the average performance of students in co-educational (Mean = 54.87, SD = 1.07) and single-sex (Mean = 56.03, SD = 0.96) schools was not significantly different, F(1, 208) = 0.694, p = .406. Therefore, the average performance of science students in co-educational schools is similar to that of their counterparts in single-sex schools. However, single-sex schools maintained a slight edge over their co-educational counterparts in scientific reasoning, as seen in Figure 1. To determine whether there is any difference in the learning outcome of the overall sample from single-sex and co-educational schools due to the curriculum model applied, data are presented in Table 3.
A one-way ANOVA with repeated measure was conducted to determine whether the adoption of the Curriculum Models (CM) significantly affected the Learning Outcome (LO) of science students from Coeducational and Single-sex schools.In other words, the analysis sought to find out whether the performance of science students in Achievement test (AcT), Attitude test (AtT), and Scientific reasoning test (SrT) was influenced by the type of CM adopted.The independent variable CM was adoptedit was measured at three levels [Expository Approach (TEA), Descriptive Learning Cycle (DLC), and Hypthetico-Predictive Learning Cycle (HPLC)] -while the dependent variable was the performance scores (measured in percentages) for each test.Statistical significance was set at 0.05 level of significance.Mauchly's test indicated that the assumption of sphericity had been violated, X 2 (df=2) =13.34, p<.05, therefore degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (E=.873).Table 3 shows that the LO of all science students was statistically significant given F(1.02, 210.83)=178.345and p<0.05; this result suggests that the average score of the science students in (at least) one of the three test is significantly different from the rest.In essence, the average performance of students in the given tests i.e AcT, AtT, and SrT were not similar.This difference is further highlighted in Table 4 which shows that students' performance in AtT (mean score=64.260)was significantly better than AcT (mean score=53.550)and SrT (mean score=48.101),whereas their performance in AcT (mean score=53.550)was significantly better than SrT (mean ) shows that the CM effect on LO of all science students was statistically not-significant given F(2.04, 210.83)=0.688and p>0.05; this suggests that the order of performance of science students in the three test (AcT, AtT, and SrT) was not significantly influenced by the CM adopted.Specifically, this implies that irrespective of the nature/kind of CM adopted (whether TEA, DLC, or HPLC), the order of performances in the respective tests was still the same.Emphasis on this outcome is evidenced in Table 4 for TEA, the students performed best in AtT (mean score= 48.056) and least in SrT (mean score=43.333);for DLC, the students performed best in AtT (mean score=64.878)and least in SrT (mean score=47.439);for HPLC, the students performed best in AtT (mean score=70.235)and least in SrT (mean score=53.529).Therefore the curriculum model did not significantly affect how the students performed in the tests.However, based on Table 3, the result shows that the performance of science students differed significantly based on the CM adopted, it yielded F(1.02, 210.83)=178.345and p<0.05; this result indicates that students' overall LO score (cumulative score for AcT, AtT, and SrT) is significantly different for (at least) one of the CM adopted.This difference is further highlighted in Table 4 which shows the outcome of a Bonferroni posthoc test.The post-hoc test shows that students' overall LO score for HPLC (mean score=60.765)was significantly higher than DLC (mean score=55.461)and TEA (mean score=49.685),whereas their overall LO score for DLC (mean score=55.461)was significantly higher than that of TEA (mean score=49.685).Thus, science students (a combination of Coeducational and Single-sex schools) had the best LO score when HPLC model was applied, and least LO score when TEA model was applied.In concluding the analysis, Curriculum model adopted will not significantly influence the 
learning outcome of science students from Coeducational and Single-sex schools.
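For readers who wish to reproduce this kind of analysis, the sketch below shows one way the reported workflow (Mauchly's sphericity test, Greenhouse-Geisser correction, and Bonferroni-adjusted pairwise comparisons) can be run in Python with the pingouin library. This is a minimal illustration only: the data file and column names ("student", "test", "score") are hypothetical, and the F values and means quoted above come from the authors' own analysis, not from this code.

```python
# Minimal sketch of a one-way repeated-measures ANOVA workflow with sphericity
# check, Greenhouse-Geisser correction and Bonferroni post-hoc tests.
# Requires pingouin >= 0.5 (the pairwise function was called pairwise_ttests
# in older releases). Column names are illustrative assumptions.
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per student per test (AcT, AtT, SrT)
df = pd.read_csv("learning_outcome_long.csv")

# Mauchly's test of sphericity for the within-subject factor (test type)
spher = pg.sphericity(df, dv="score", within="test", subject="student")
print(f"Mauchly W = {spher.W:.3f}, chi2 = {spher.chi2:.2f}, p = {spher.pval:.4f}")

# Repeated-measures ANOVA; correction=True reports Greenhouse-Geisser-adjusted
# degrees of freedom and p-values when sphericity is violated
aov = pg.rm_anova(data=df, dv="score", within="test", subject="student",
                  correction=True, detailed=True)
print(aov)

# Bonferroni-adjusted pairwise comparisons between AcT, AtT and SrT
posthoc = pg.pairwise_tests(data=df, dv="score", within="test",
                            subject="student", padjust="bonf")
print(posthoc[["A", "B", "T", "p-corr"]])
```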
Hypothesis 2: There is no significant difference in the learning outcome of girls from All-girls schools and their counterparts in Coeducational schools.
A one-way ANOVA with repeated measures was conducted to determine whether there was a statistically significant difference in learning outcome between female science students from Coeducational and Girls-only schools. Mauchly's test indicated that the assumption of sphericity had been violated, χ²(2) = 13.34, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .873). Table 5 shows that the learning outcomes of all female science students across the three tests are significantly different, F(1.75, 150.18) = 57.684, p = .000. In particular, they performed similarly in AcT (Mean = 60.95, SD = 1.523) and AtT (Mean = 60.37, SD = 1.355), whereas they performed significantly lower in SrT (Mean = 45.94, SD = 1.453). As the results imply, all female science students are intellectually stronger in AcT and AtT than in SrT. Furthermore, the findings show that there was not a statistically significant interaction in learning outcome between Test type (AcT, AtT, or SrT) and School type (female science students from Coeducational or Girls-only schools), F(1.75, 150.18) = 1.16, p = .311. Thus, these results indicate that the pattern of learning outcome of female science students is the same in both school types. Specifically, female science students within Coeducational schools performed similarly in AcT (Mean = 60.19, SD = 2.225) and AtT (Mean = 58.37, SD = 1.981) but significantly lower in SrT (Mean = 46.34, SD = 2.124), invariably implying that they are intellectually stronger in AcT and AtT than in SrT. The same trend applies to their counterparts within Girls-only schools: similar performance in AcT (Mean = 61.70, SD = 2.079) and AtT (Mean = 62.38, SD = 1.850), but significantly lower performance in SrT (Mean = 45.53, SD = 1.984); see Table 4. However, a closer look at the performances shows that those from Girls-only schools performed marginally higher in AcT and AtT than their counterparts in Coeducational schools (Figure 2).
Moreover, the findings in Table 6 show that the average performance of female science students in Coeducational (Mean = 54.97, SD = 1.64) and Girls-only (Mean = 56.54, SD = 1.53) schools was not significantly different, F(1, 86) = 0.492, p = .485. Therefore, the average performance of female science students in Coeducational schools follows the same pattern as that of their counterparts in Girls-only schools, except that girls in single-sex schools registered higher mean scores in AcT and AtT (Figure 2). A one-way ANOVA with repeated measures was conducted to determine whether the adoption of the Curriculum Models (CM) significantly affected the Learning Outcome (LO) of female science students from Coeducational and Girls-only schools. In other words, the analysis sought to find out whether the performance of female science students in the Achievement test (AcT), Attitude test (AtT), and Scientific reasoning test (SrT) was influenced by the type of CM adopted. The independent variable was the CM adopted - it was measured at three levels [Expository Approach (TEA), Descriptive Learning Cycle (DLC), and Hypothetico-Predictive Learning Cycle (HPLC)] - while the dependent variable was the performance score (measured in percentages) for each test. Statistical significance was set at the 0.05 level. Mauchly's test indicated that the assumption of sphericity had been violated, χ²(2) = 12.18, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .881). Table 7 shows that the effect of test type on the LO of female science students was statistically significant, F(1.76, 149.79) = 61.343, p < 0.05; this result suggests that the average score of the female science students in (at least) one of the three tests is significantly different from the rest.
In essence, the average performances of female students in the given tests, that is, AcT, AtT, and SrT, were not similar. This difference is further highlighted in Table 4, which shows that female students' performances in AtT (mean score = 64.260) and AcT (mean score = 53.550) were not significantly different, yet their scores in both tests were significantly higher than in SrT (mean score = 48.101). This implies that female science students performed best in the AtT and AcT tests, and least in the SrT test.
Furthermore, results in Table 7 (LO*CM) show that the CM effect on the LO pattern of female science students was statistically not significant, F(3.52, 149.79) = 2.066, p > 0.05; this suggests that the order of performance of female science students in the three tests (AcT, AtT, and SrT) was not significantly influenced by the CM adopted. As such, irrespective of the type of CM adopted (whether TEA, DLC or HPLC), the result indicates that the order of performances in the respective tests was still the same. Emphasis on this outcome is detailed in Table 4: for TEA, the female students performed similarly in AtT (mean score = 54.19) and AcT (mean score = 53.81), and performed least in SrT (mean score = 39.35); for DLC, the female students performed similarly in AtT (mean score = 56.98) and AcT (mean score = 59.73), and performed least in SrT (mean score = 47.67); for HPLC, the female students performed similarly in AtT (mean score = 70.67) and AcT (mean score = 71.68), and performed least in SrT (mean score = 51.48). Therefore, the curriculum model did not significantly affect the order in which the female students performed across the tests.
However, based on Table 8, the results show that the overall performance of female science students differed significantly based on the CM adopted, yielding F(2, 85) = 24.92, p < 0.05; this result indicates that female students' overall LO score (cumulative score for AcT, AtT, and SrT) was significantly different for (at least) one of the CM adopted. This difference is further highlighted in Table 4, which shows the outcome of a Bonferroni post-hoc test.
The post-hoc test shows that female students' overall LO score for HPLC (mean score = 64.61) was significantly higher than for DLC (mean score = 54.79) and TEA (mean score = 49.12), whereas their overall LO score for DLC (mean score = 54.79) was significantly higher than that for TEA (mean score = 49.12). Thus, female science students had the best LO score when the HPLC model was applied and the least LO score when the TEA model was applied.
Hypothesis 3: There is no significant difference in the learning outcome of boys from All-boys schools and their counterparts from Coeducational schools. A one-way ANOVA with repeated measures was conducted to determine whether there was a statistically significant difference in learning outcome between male science students from Coeducational and Boys-only schools. Mauchly's test indicated that the assumption of sphericity was not violated, χ²(2) = 2.74, p > .05; hence, the analysis results are based on the assumption that there is homogeneity (sphericity) across the students' performance scores, and no correction of the degrees of freedom was required.
Table 9 shows that the learning outcomes of all male science students across the three tests are significantly different, F(2, 240) = 100.365, p = .000. This implies that all male science students generally performed similarly in AcT (Mean = 67.18, SD = 1.094) and AtT (Mean = 64.59, SD = 1.163), whereas they performed significantly lower in SrT (Mean = 49.53, SD = 1.169). As the results imply, male science students are intellectually stronger in AcT and AtT than in SrT. Furthermore, the findings show that there was a statistically significant interaction in learning outcome between Test type (AcT, AtT, or SrT) and School type (male science students from Coeducational or Boys-only schools), F(2, 240) = 6.77, p = .001. Thus, these results indicate that the pattern of learning outcome of male science students is not the same in both school types. Specifically, male science students within Coeducational schools performed similarly in AcT (Mean = 67.69, SD = 1.646) and AtT (Mean = 66.46, SD = 1.749) but significantly lower in SrT (Mean = 46.60, SD = 1.759), invariably implying that they are intellectually stronger in AcT and AtT than in SrT. However, the same trend does not apply to their counterparts within Boys-only schools: a marginal statistically significant difference was observed between AcT (Mean = 66.67, SD = 1.443) and AtT (Mean = 62.72, SD = 1.533), together with a significantly lower performance in SrT (Mean = 52.46, SD = 1.541). Thus, male science students in Boys-only schools have distinct learning outcomes in each test type, while their Coeducational counterparts have similar learning outcomes in AcT and AtT and a distinct outcome for SrT (as shown in Figure 3). However, the findings in Table 10 show that the average learning outcome of male science students in Coeducational (Mean = 60.25, SD = 1.26) and Boys-only (Mean = 60.62, SD = 1.11) schools was not significantly different, F(1, 120) = 0.047, p = .829. Therefore, the average performance of male science students in Coeducational schools is similar to that of their counterparts in Boys-only schools.
A one-way ANOVA with repeated measures was conducted to determine whether the adoption of the Curriculum Models (CM) significantly affected the LO of male science students from Coeducational and Boys-only schools. In other words, the analysis sought to find out whether the performance of male science students in the Achievement test (AcT), Attitude test (AtT), and Scientific reasoning test (SrT) was influenced by the type of CM adopted. The independent variable was the CM adopted - it was measured at three levels [Expository Approach (TEA), Descriptive Learning Cycle (DLC), and Hypothetico-Predictive Learning Cycle (HPLC)] - while the dependent variable was the performance score (measured in percentages) for each test. Statistical significance was set at the 0.05 level. Mauchly's test indicated that the assumption of sphericity had been violated, χ²(2) = 8.02, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .938). Table 11 shows that the effect of test type on the LO of male science students was statistically significant, F(1.88, 223.32) = 87.109, p < 0.05; this result indicates that the average score of the male science students in (at least) one of the three tests is significantly different from the rest. In essence, the average performances of male students in the given tests were not similar. However, based on Table 13, the results show that the overall performance of male science students differed significantly based on the CM adopted, yielding F(2, 119) = 22.84, p < 0.05; this result indicates that male students' overall LO score (cumulative score for AcT, AtT, and SrT) was significantly different for (at least) one of the CM adopted. This difference is further highlighted in Table 11(b), which shows the outcome of a Bonferroni post-hoc test. The post-hoc test shows that male students' overall LO score for HPLC (mean score = 66.67) was significantly higher than for DLC (mean score = 59.01) and TEA (mean score = 54.27), whereas their overall LO score for DLC (mean score = 59.01) was significantly higher than that for TEA (mean score = 54.27). Thus, male science students had the best LO score when the HPLC model was applied and the least LO score when the TEA model was applied.
Hypothesis 4: There is no significant interaction effect of curriculum model and school type on learning outcome in science.
A two-way ANOVA with repeated measures was conducted to determine whether there was a significant interaction effect between curriculum model and school type on the learning outcome of science students. Mauchly's test indicated that the assumption of sphericity had been violated, χ²(2) = 19.492, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .873). The analysis output shows that the learning outcomes of science students across the three tests are significantly different, F(1.83, 367.84) = 141.82, p = .000. This implies that while all science students generally had similarly high outcomes in AcT (Mean = 64.05, SD = .882) and AtT (Mean = 62.48, SD = .730), they performed significantly lower in SrT (Mean = 48.09, SD = .894). As the results imply, science students are intellectually stronger in AcT and AtT than in SrT.
Moreover, findings in Table 14 show that the overall learning outcome of science students is significantly different across the treatment types applied (i.e., TEA, DLC, and HPLC), F(2, 201) = 48.35, p = .000. Hence, these results indicate that the learning outcome is significantly different for at least one treatment type; specifically, learning outcome was found to be similar between TEA (Mean = 51.89, SD = 1.058) and DLC (Mean = 56.70, SD = .948), but was significantly higher for the HPLC (Mean = 66.02, SD = 1.021) treatment. This finding suggests that while the TEA and DLC curriculum models appear to have a relatively low effect on the learning outcome of science students generally, their effect was worse for Coeducational students and girls (Figure 4). The HPLC model, on the other hand, seemed to be a more beneficial and effective approach to improving the learning outcome of all groups, especially girls. The improvement in the outcome scores of the Coeducational group is a pointer to that effect. Table 14 also shows that learning outcome is significantly different across the school types (that is, Coeducational, Girls-only, and Boys-only), F(2, 201) = 4.332, p = .014. The results indicate that learning outcome is significantly different for at least one school type: learning outcome for Coeducational (Mean = 57.29, SD = .843) and Girls-only (Mean = 56.73, SD = 1.176) science students was found to be similar, whereas their counterparts from Boys-only schools (Mean = 60.59, SD = .983) have a significantly higher learning outcome across all curriculum models (Figure 4). Furthermore, Table 14(b) shows that there is no significant interaction effect between school type and treatment on learning outcome, F(4, 201) = 2.185, p = .072. The results indicate that the differences in learning outcome follow a similar pattern across the different treatment types for each school type, and vice versa (as shown in Figure 4).
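The between-subjects effects reported in Table 14 (treatment, school type, and their interaction on the overall LO score) correspond to a standard two-way ANOVA on one overall score per student. The sketch below is an illustrative way of obtaining such F tests; the file name and column names are hypothetical and the degrees of freedom and F values quoted above come from the authors' analysis, not from this code.

```python
# Illustrative sketch of the between-subjects effects of Table 14: overall
# learning-outcome score modelled against curriculum model (treatment),
# school type and their interaction. Column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

wide = pd.read_csv("overall_LO.csv")   # one row per student: LO, treatment, school_type

model = smf.ols("LO ~ C(treatment) * C(school_type)", data=wide).fit()
print(anova_lm(model, typ=2))          # F tests for treatment, school type, interaction
```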
Discussion of results
The study centered on the effect of curriculum model (treatment type) and school type on science students' learning outcome. One of the results of the study is that there is a significant difference in the learning outcome of science students (mean scores: AcT = 53.69, AtT = 64.54 and SrT = 48.07, respectively). Science students in the study recorded their highest mean score in attitude and their lowest in scientific reasoning (mean = 48.07). The high attitude outcome is in line with Bishop (1980), Lawson (1995) and Moemeke (2010). The high attitude outcome of the subjects in the study may be attributed to the method of instruction and the social and interactive variables associated with the instructional practices in the study.
The study recorded an improvement in the achievement of the subjects in the same school type over their scientific reasoning skills. The improved achievement in science conforms to the explanation given by Simpson and Oliver (1990) and Hegarty-Hazel (1990) that attitude towards science influences and induces achievement, but not vice versa. It means that if instructional practices successfully boost science students' attitude towards doing science, a possible increase in the achievement outcome might result. The significant difference in the learning outcome in all school types indicated that school type variables operate similarly irrespective of the ethos characteristics or sex-orientation of the schools. Another finding of the study also showed similarity in the performance of science students in the different levels of learning outcome in the Coeducational and single-sex schools. Though the single-sex schools showed slight superiority in learning outcome (mean = 56.03) over the Coeducational subjects (mean = 54.87), the difference was not statistically significant. This is at variance with the previous studies reported by Bishop (1980) and Moemeke and Omoifo (2010), which reported statistically significant differences in favour of single-sex school subjects. This may not be unconnected with the recent reorganization of school ownership in Delta State, Nigeria (the study area), in which most single-sex schools were handed over to missionary owners, with a consequent mass exodus of former single-sex school students to Coeducational Government-owned schools. This prevailing situation may have cushioned the effect of school type, since the transferred students still carry their foundational ethos background. This calls for regular, intermittent studies of this sort to monitor variations in the learning outcome of the different school types. However, the relatively higher scientific reasoning score of subjects of single-sex schools (49.66) over their Coeducational counterparts (46.49) is indicative of their superior performance in scientific reasoning (as shown in Figure 1).
The study also compared statistically the performance of girls in single-sex schools and their counterparts in Coeducational schools along the three levels of learning outcome. A significant difference was found in their performance across the three outcomes in both school types. Girls in both school types performed evenly in attitude and achievement in science concepts and lowest in scientific reasoning skills. The result of this study is at variance with those reported by Mallam (1993), Granleese and Joseph (1993), Young and Fraser (1994), Lee and Lockhead (1990), Lee and Marks (1990) and Hopkins (2001). The studies referred to above advanced reasons for the performance difference, including: 1. reduced opposite-sex interaction and distractions; 2. increased commitment to academics as a consequence of reduced distraction; 3. removal of feelings of inferiority and inhibition in Coeducational girls; and 4. high self-esteem in All-girls subjects, among others.
The present study result conforms to Brustaert and Brake (1994), who did not find any such difference. It is pertinent to note here that the present result suggests some cognitive or intellectual connection between the performance of girls and the type of outcome which they prefer. The low scientific reasoning performance recorded in both school types suggests that there is a need to focus deliberate instructional practices on helping girls generally to improve their reasoning skills and, consequently, their decision-making ability. Moemeke and Omoifo (2010) had earlier recommended that instructional strategies which reduce mental tasks while solving problems are more beneficial to girls, as they are to low-ability learners. The many steps involved in organizing mental thought processes towards reasoning scientifically in a problem situation are likely to have posed serious difficulties during scientific reasoning and to be responsible for the low outcome level. However, worthy of note is the marginally superior mean of girls from Girls-only schools in achievement (61.70) and attitude (62.38) over girls from Coeducational schools (60.19 and 58.37, respectively). These differences are, however, not significant at the 0.05 alpha level used in this study.
This trend (AcT > AtT > SrT) is also maintained by boys from Boys-only schools and their counterparts in Coeducational schools. In comparison, the higher test scores of boys in Coeducational schools in achievement and attitude over the boys from Boys-only schools may be psychological and linked to the natural tendency for boys to dominate science classrooms, especially in culturally influenced classrooms such as those in Nigeria, in which males tend to be emotionally more balanced than females, and to the need to boost the masculine ego. In the area of scientific reasoning, boys from All-boys schools showed superiority (as shown in Figure 3), indicating their superior thinking sequences and possibly better utilization of problem-solving repertoires. The non-significance of the differences in the performance of boys from Boys-only schools and those from Coeducational schools shows similarity in their performance patterns. This may also be linked to the recent administrative reorganization of schools that resulted in the mass movement of students from single-sex schools to Government-owned Coeducational schools, due to the introduction of fees in the single-sex schools taken over by their previous missionary owners.
With respect to hypothesis four, the result showed that while learning outcome varied similarly across outcome types within each treatment group and school type, the HPLC produced the significantly highest overall outcome across all measures in all school types. It means that the HPLC was a more potent curriculum model for boosting performance in all outcome measures. This result is similar to Douglas and Kahle (1977), Hurst and Milkent (1996), Lavoie (1999) and Lawson et al. (2000), in which the HPLC model produced better outcomes in all measures across all ability levels and all school types. This potency is linked to certain attributes of the HPLC model, such as helping learners test their knowledge claims, reducing the cognitive dissonance associated with multiple science views, helping learners develop adequate logical patterns, as well as exposing their misconceptions or alternative conceptions for possible remediation. The deliberate emphasis on predictive exercises prior to the learning cycle phase must have provided the impetus for better learning. Though the DLC was found to produce a better outcome than the TEA (as shown in Figure 4), the difference was not statistically significant. This result is similar to previous studies by Westbrook and Rogers (1994).
SUMMARY
The study focused on identifying differences in learning outcome across the different measures (achievement in science concepts, attitude towards science and scientific reasoning) in the different school types (single-sex and coeducational) taught with the Learning Cycle curriculum models (DLC and HPLC) and the Traditional expository approach (TEA). 210 SS II students participated in the study in the 2012/2013 academic session. Four null hypotheses were tested. The study lasted for ten weeks. The sample (intact classes) was drawn from Coeducational, Boys-only and Girls-only schools. Data generated were analyzed using one-way repeated measures ANOVA and two-way repeated measures ANOVA, respectively. Results showed similarity in the learning outcome patterns of subjects from all school types and in all measures, but differences in terms of magnitude, with Attitude towards science (AtT) most enhanced, followed by Achievement in science concepts (AcT). There was no significant variation between the school types. However, single-sex school subjects proved superior in scientific reasoning when compared with their counterparts from Coeducational schools, for both sexes.
CONCLUSION
Based on the findings of this study, the following conclusions were made.
The school type variable has no significant effect on science students' learning outcome. The pattern of performance in the different learning outcome measures is the same in all school types. Among the entire sample, students performed best in attitude towards science, followed by achievement in science concepts, and least in scientific reasoning skill, with students of single-sex schools having a marginal superiority in scientific reasoning skill.
Within each school type, the performance in the learning outcome measures of students of the same sex conforms to the same pattern, with marginally better performance recorded by students of Girls-only schools in Attitude towards science and Achievement, but not in scientific reasoning skill.
Males from Coeducational schools showed better performance in Achievement in science concepts and Attitude towards science than their counterparts in single-sex (Boys-only) schools. However, the Boys-only subjects were significantly better than their counterparts in the exhibition of scientific reasoning skill.
On a similar note, science students had significantly different overall LO scores based on the CM adopted. Specifically, the best overall LO score was obtained when the HPLC model was applied, while the least overall LO score was obtained when the TEA model was applied. This could imply that the HPLC model is the best model (of the three CM) for LO. It did not, however, discriminate between the sexes.
RECOMMENDATION
Based on the findings of this study highlighted earlier, it is hereby recommended that all schools should be provided with an adequate and enabling environment conducive to learning. This includes the physical, psychological and social environments, since there seems to be no disparity in learning outcome based on school type.
Teachers and counselors in secondary schools should guide Coeducational girls adequately to improve their self-concept and confidence in their ability to learn science, and on how to reduce the distracting presence of the opposite sex, so as to compete favorably in science.
The Government should set up some single-sex schools to be administered and managed by the government.
Figure 1. Graphical representation of subjects' performance in the three tests by school type.
Figure 2. Graphical representation of the performance of girls in the three tests from single-sex and Co-educational schools.
Figure 3. Graphical representation of the performance of boys from Boys-only and Coeducational schools in the three tests.
Figure 4. Graphical representation of the interaction of school type and curriculum model on learning outcome.
INTRODUCTION: One of the outstanding natural endowments of Nigeria as a nation is her human population. This great asset, if well harnessed, has the potential of producing the high-calibre human capital necessary for lifting the country into economic, political, and technological prosperity. Many nations such as the United States of America, China, Japan, and even India have utilized their human capital base as a launching pad for national prosperity. For this to happen, human capital development through quality education of the citizens (male and female) was given paramount attention. In Nigeria, about half of her population is made up of the female gender. It means that for any meaningful development of the country, both sexes must contribute their quota equitably in the world of work. Any lopsidedness or tilt may not augur well in the overall interest of the nation. The existence of gender differences in the learning of and achievement in science and Mathematics has been a subject of academic research for many decades and in many countries.
Table 1. One-way ANOVA repeated measures for learning outcome, and Coeducational and single-sex school types.
Table 2. Mean scores for learning outcome, and single-sex and Coeducational school types.
Table 3. One-way ANOVA repeated measures for learning outcome of science students of Coeducational and Single-sex schools based on Curriculum model.
Table 4. Mean scores and standard deviations of learning outcome of Coeducational and Single-sex science students [based on the curriculum model adopted].
Table 5. One-way ANOVA repeated measures for learning outcome, and females in Coeducational and Girls-only school types.
Table 6. Mean scores for learning outcome, and females in Coeducational and Girls-only school types.
Table 7. One-way ANOVA repeated measures for learning outcome of female science students in Coeducational and Girls-only schools based on Curriculum model.
Table 8. Mean scores and standard deviations of learning outcome of female Coeducational and Girls-only science students [based on Curriculum model].
Table 9. One-way ANOVA repeated measures for learning outcome, and males in Coeducational and Boys-only school types.
Table 10. Mean scores for learning outcome, and males in Coeducational and Boys-only school types.
Table 11. One-way ANOVA repeated measures for learning outcome of male science students in Coeducational and Boys-only schools based on Curriculum model.
Table 12. Mean scores and standard deviations of learning outcome of male Coeducational and Boys-only science students [based on the Curriculum model adopted].
As such, the result indicates that the order of performances in the respective tests varied based on the CM adopted. Emphasis on this outcome is detailed in Table 12: for TEA, the male students performed best in AcT (mean score = 61.79) but similarly in AtT (mean score = 53.42) and SrT (mean score = 47.59); for DLC, the male students
Table 13. Two-way ANOVA repeated measures for curriculum model and school type on learning outcome in science.
Table 14.
Mean scores for curriculum model, school type and learning outcome in science. | 2018-12-07T13:57:19.184Z | 2014-07-31T00:00:00.000 | {
"year": 2014,
"sha1": "acff0ed4f5839fa9865a8b4bb24f1570f36aeafe",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/IJSTER/article-full-text-pdf/95AE8A345736.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c0865d9dd3220e596a3fc734a603c2dbaa013f00",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
130321178 | pes2o/s2orc | v3-fos-license | Description of a second species of subgenus Gutta Wrase & Schmidt , 2006 from China ( Coleoptera : Carabidae )
Pterostichus (Gutta) kongshuhensis, sp. n. (type locality: China, Yunnan Province, Baoshan Prefecture, Tengchong County, 2.8 km ENE village Kongshuhe), is described. The new species is morphologically closest to P. (Gutta) gaoligongensis Wrase & Schmidt, 2006.
Introduction
In the course of work on the Carabidae collection of the National Museum of Natural History, Sofia, I found a single male specimen of the genus Pterostichus Bonelli, 1810 from China with the posterior angles of pronotum fully rounded.This specimen was collected by a recent Bulgarian caving expedition to the Chinese province Yunnan (see also , Guéorguiev 2014;Guéorguiev 2015).Subsequent investigation has shown that this specimen belongs to a new species of the little known subgenus Gutta Wrase & Schmidt, 2006.
Material and Methods
The measurements were made with an ocular micrometer mounted on a stereoscopic binocular microscope Olympus SZX10.The photos of the habitus and genitalia were taken by a Zeiss Stemi 2000 microscope equipped with an AxioCam ERc 5s camera and were stacked using CombineZM image stacking software.
Measurements: body length from the apex of the longer mandible in closed position to the apex of the longer elytron (BL); body width as distance across maximal width of body (BW).
Proportions: head length -distance from apex of longer mandibular in closed position to the imaginary line connecting the posterior end of tempora (HL); head width -maximum linear distance across the head, including the eyes (HW); length of pronotum, measured along the midline, from the apical to the basal margin (PL); maximum width of pronotum (PW); width of the pronotal apex, between the anterior angles (PaW); width of the pronotal base -between the lateral points lying on a straight line with the medial point of basal border (PbW); length of elytra, from the basis of scutellum to the apex of the longer elytron (EL); maximum width of elytra (EW).The main features that define this group are: 1/ elytra with microsculpture pattern isodiametric; 2/ head large, in relation to pronotum; 3/ pronotum round, with posterior angles nearly fully rounded; 4/ elytra pyriform, with completely rounded humeri and lacking humeral teeth; 5/ interval 3 of elytra with three setiferous punctures; 6/ apicolateral plica of elytron indistinct; 7/ hind wings strongly reduced; 8/ tarsomere 5 on all legs ventrally unsetose; 9/ median lobe of aedeagus with both ostium large, deflected to left and terminal lamella long; 10/ right paramere short, with distal part rounded.
Research Article
The subgenus includes three species, P. (Gutta) phungaraziensis Wrase & Schmidt, 2006, P. (Gutta) adulterinus Wrase & Schmidt, 2006, and P. (Gutta) gaoligongensis Wrase & Schmidt, 2006(Wrase & Schmidt 2006).The first two species occur in northern Myanmar, while the third taxon lives in the northwestern part of Chinese province of Yunnan.Most likely, the East Himalayan mountainous chain, including the massifs of Hkakabo Razi and Gaoligong Shan, located west of the Salween River has served as area of differentiation of the species.
Description.
Habitus.Large-sized species of Gutta, with moderately convex body, convex eyes, basal angles of pronotum entirely rounded posteriorly and ovate elytra (Fig. 1).Measurements.BL: 13.5 mm; BW: 4.8 mm.Ratios.HL/HW: 1.09; PW/HW: 1.35; PW/PL: 1.20; PW/PbW: 1.71; PbW/PaW: 0.85; EW/PW: 1.21; EL/PL: 2.38; EL/EW: 1.58.Tegument.Thoroughly glabrous dorsally and ventrally (excl.antennomeres 4-11), dorsally smooth, only head with very fine micropunctation (visible at higher magnification), pronotum and elytra without micropunctuation.Color.Dorsal surface black, antennae, palpi, legs (excl.tarsomeres) and ventral surface slightly lighter, tarsi reddish.Microsculpture.Very fine on head and pronotum, coarser on elytra, isodiametric on head and elytra, transverse-mesh on pronotum; tegument shiny, more dorsally, less ventrally.Head.Slightly longer than wide, disc smooth, frontal furrows faintly impressed, widened anteriorly, arched, divergent anteriorly and posteriorly, backwards hardly exceeding level of anterior supraorbital punctures; collar constriction distinct only laterally; eyes moderately large and projecting laterally, two times as long as tempora, length of each eye in dorsal view exceeds length of scape; paraorbital sulci distinct, exceeding level of posterior supraorbital puncture; labrum with concave anterior margin, convex sides and six setiferous punctures, two lateral setae longer than four inner setae; clypeus trapezoid, emarginate anteriorly and laterally, with two large, foveable setiferous punctures situated closer to lateral margins than to anterior margin, clypeal setae as long as lateral labral setae, clypeal suture distinct; antennomeres filiform, pubescent from second fifth of antennomere 4, with penultimate antennomere exceeding base of pronotum, scape longer than any other antennomere, pedicel shortest, antennomeres III-XI of similar length; glossal sclerite of ligula with two long setae on anterior margin; maxillary palpomeres glabrous, somewhat longer than labial palpomeres, palpomere II massive and swollen, thicker twice than following two palpomeres; mentum deeply emarginated, with tooth bifid in front and pair of short labial setae, without paramedical pits, epilobes large, significantly exceeding mentum tooth in front; submentum with two basal setae and two lateral ones, basal setae as long as three times and more than lateral setae.Thorax.Pronotum large, hardly transverse, one fifth wider than long, with widest point at anterior third (Fig. 
2); disc gently convex, smooth; midline equally impressed throughout, disappearing just near to anterior border, but distinct to posterior border; anterior border weakly concave, unbordered, wider than basal border; sides convex, more anteriorly, less posteriorly, finely bordered from anterior angles up to basal impressions, each side with a lateral gutter equally narrow along most extent, but widened at basal 1/5; basal border between two basal impressions straight, unbordered; anterior angles rounded, hardly protruding in front, posterior angles entirely rounded off; anterolateral seta at apical third, posterolateral seta removed from posterior angle in distance less than length of antennomere 1; two basal impressions, moderately profound, impunctate, tricorn-shaped, with medial horn sublinear; basis between impressions impunctate, but with at least two longitudinal notches on each side of midline, as notches divergent posteriorly.Elytra subelongate, oviform, rather convex dorsally, widest in their third fourth, narrow basally, gradually widened apically, without both apicolateral plica and apical sinuation, apices of each elytron widely rounded; shoulder rounded, without tooth; basal border complete, forming an obtuse angle with lateral margin; elytral striae well impressed, complete, impunctate, parascutellar stria touching basal border, not anastomosing with stria 1, angular base of stria 1 present, hardly joining stria 2; parascutellar puncture present, removed back from basal margin with distance of three diameters of puncture, situated on angular junction of striae 1 and 2, stria 7 with two preapical punctures, umbilicate series within stria 8, indistinctly interrupted in middle, 7 humeral + 11 apical pores on left elytron and 6 humeral + 10 apical pores on right elytron; intervals rather convex, smooth; interval 3 with three setiferous punctures, first puncture adjoining stria 3 at anterior fourth of elytron, second and third punctures adjoining stria 2 in posterior half of elytron.Hind wings vestigial.Prosternum and proepipleura smooth, prosternal process unbordered, slightly grooved medially.Mesosternum and mesepisterna smooth.Metasternum and metepisterna smooth, metepisterna (without metepimeron) wider than long, with anterior margin longer than inner one.Fore and middle legs moderately long and slender, hind legs longer and more slender; procoxa asetose; protrochanter with one seta; profemur posterior margin with one seta, profemur ventral margin bisetose or trisetose (one seta at proximal end, one or two setae at distal end); protibia widened apically, first three protarsomeres dilated, ventrally densely squamose; mesocoxa bisetose, with one lateral seta and one medial seta; mesotrochanter with one seta; mesofemur ventral margin with two long setae; mesotibia moderately widened apically, with ctenidium well differentiated; first two mesotarsomeres grooved externally; metacoxa bisetose, with one anterolateral and one posterolateral seta; metatrochanter elongate, reniform, as long as a half of metafemur, with one seta; metafemur with two long posteroventral setae; metatibia long and slender; metatarsomeres 1-2 grooved externally; tarsomere 5 on all legs glabrous ventrally.Abdomen.Sternites smooth, 1-2 unsetose, 3-5 with a pair of apical ambulatory setae each of them, 6 with two submarginal punctures and a horseshoe-shaped impression surrounding convex surface (Fig. 
3).Male genitalia.Median lobe of aedeagus long, in lateral view strongly arcuate, with basal bulb large and well developed, basal orifice concave, medial third wider than both basal bulb and apical part, ostium large and deflected to left, apical part long, narrowed distally, with lamella finely bent ventrally (Fig. 4); median lobe dorsally elongate, with apical lamella narrowed and slightly bent to left, hooked at the tip, as hook bent to left (Fig. 5).Internal sac of aedeagus without spine-like or thorn-like sclerites.Left paramere conchoid, with a distinctly transverse apophysis and a robust internal process (Figs 6-7).Right paramere short, somewhat bent at apical third, completely round apically (Fig. 6-7).
Etymology.The name, treated as adjective, is derived from the village in Western Yunnan in the vicinities of which this species was discovered.
Locality and habitat.
The type locality is situated in the Gaoligong Mts., at ca. 2530-2600 m altitude, GPS coordinates: N25.73111 E98.66459, it lies approximately 2.8 km east-northeast from village Kongshuhe and is very close to the type locality of the bembidiine Amerizus gaoligongensis Guéorguiev, 2015(see Guéorguiev, 2015: 69, Fig. 4).The holotype of the new species was collected under a stone lying in a shady area on the way to cave Wu Shi Shan 1-2-3 (Boyan Petrov personal communication, Fig. 8).The surrounding area was composed of semi-degraded sub-tropical wood vegetation characteristic of the humid evergreen broad-leaved forests with well-developed undergrowth (see also Guéorguiev, 2015: 71, Fig. 6).
Administratively, the type locality falls into Tengchong County, Baoshan Prefecture, western part of Yunnan Province, China.Affinities.Pterostichus kongshuhensis sp.n. is morphologically similar to P. gaoligongensis.Although male specimens of the latter species are still unknown, there are some marked external differences (see "Differential diagnosis"), which confirms they are two specifically distinct forms.The distance between the type locality of the former and the type locality of the latter (with GPS coordinates: 27°47.90'N,98°30.19'E) is nearly 230 km in a straight line.
The new species could be easily distinguished from P. phungaraziensis and P. adulterinus by the narrower elytra, structure of last visible sternite in the male and the structure of the male genitalia (median lobe of aedeagus laterally more arcuate, dorsally apical lamella more elongate and narrower). | 2019-04-25T13:06:33.390Z | 2015-06-02T00:00:00.000 | {
"year": 2015,
"sha1": "7cc63f7ada43b28aea9ebb2586201f405a771d3d",
"oa_license": "CCBY",
"oa_url": "https://www.biotaxa.org/em/article/download/em.2015.2.34/13607",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cfb5e401c07220f4f596e741e34fdc2bf0aef9b2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
234010902 | pes2o/s2orc | v3-fos-license | Interplay of weak noncovalent interactions in alkoxybenzylidene derivatives of benzohydrazide and acetohydrazide: A combined experimental and theoretical investigation and lipoxygenase inhibition (LOX) studies
In this study, three new hydrazide-based Schiff bases ( 1-3 ) have been synthesized via sonication in good to excellent yield 73-90% and mainly characterized by UV-visible, IR, 1 H-NMR and single crystal X-ray diffraction analysis. The crystal structure of compounds 1 and 2 is stabilized by N-H···O, C-H···O and C-H···N hydrogen bonds, as well as C-H··· π contacts. In addition, weak π ··· π stacking interactions were observed in the structure of 2 . A detailed analysis of the intermolecular interactions that stabilize the crystal packing has been performed by using Hirshfeld surface analysis and energy framework calculations were carried out to analyze and visualize the topology of the supramolecular assembly, indicating that the dispersion energy is dominant over electrostatic one in the most energetic dimers of both compounds. The interaction energies associated with the noncovalent interactions observed in the crystal structures and the interplay between them have been calculated using DFT calculations. Moreover, these intermolecular interactions were also characterized by using both Bader’s quantum theory of atoms in molecules (QTAIM) and NCI plots. The synthesized alkoxybenzylidene analogs of benzohydrazide and acetohydrazide were screened in vitro against soybean lipoxygenase and were found to show better activity than the standard indomethacin. Putative binding modes and comparison of binding interactions in the protein-ligand complex were analyzed by molecular docking studies.
Introduction
Schiff bases are usually synthesized from the condensation reaction of carbonyl compounds with amine derivatives.The chemical and biological significance of Schiff bases can be attributed to the presence of lone pair electrons in the sp 2 hybridized orbital of the nitrogen atom from the azomethine or imine group (-HC=N-).In addition, the metal complexes of these compounds show diverse pharmacological and biological activities including antibacterial [1], antifungal [2] and anticancer activity [3].
The chemistry of the carbon-nitrogen double bond of hydrazone derivatives is fast becoming the backbone of condensation reactions towards benzo-fused N-heterocycles [4]. Hydrazones constitute an important class of compounds for new drug development [5], and their chemical versatility is mainly attributed to the functional diversity of the azomethine -NHN=CH- group, which has nitrogen atoms with nucleophilic character, an imine carbon atom with both electrophilic and nucleophilic character, and configurational isomerism around the C=N bond. Many researchers have synthesized these compounds as target structures and evaluated their various biological activities as well as their structural properties. Hydrazides/hydrazones act as antimycobacterial [6], antiviral, analgesic and anti-inflammatory agents [7], as well as anti-platelet, vasodilator, anti-convulsant, anti-oxidant, diuretic and anti-malarial agents. Furthermore, these compounds show anti-trypanosomal, hormone antagonist, anti-arthritis [8] and acetylcholinesterase inhibitory activity [8,9]. In addition, many investigations have reported that Schiff bases and hydrazones show a wide range of activities, including anti-inflammatory potential [10].
Noncovalent interactions have become attractive to structural chemists due to their crucial role in supramolecular chemistry, molecular recognition and materials chemistry. In light of this background, we report herein the synthesis and characterization of three new hydrazide-based Schiff base derivatives 1-3 (Scheme 1). The crystal structures of compounds 1 and 2 were solved by single-crystal X-ray diffraction, and the computed molecular structures have been investigated by DFT calculations at the B3LYP/6-311G(d,p) level of theory. We have performed a complete analysis of the intermolecular interactions that are responsible for the crystal packing by using Hirshfeld surface analysis, energy frameworks and DFT calculations. In addition, the synthesized compounds were screened in vitro against soybean lipoxygenase, and putative binding modes and a comparison of the binding interactions in the protein-ligand complex were analyzed by molecular docking studies.
Instrumentation.
Melting points were determined on a Yanaco melting point apparatus and are reported as uncorrected.FT-IR spectra were recorded on SHIMADZU FTIR-8400S spectrophotometer using KBr disc method.Similarly, UV spectra were recorded on SHIMADZU UV-1601 UV-visible spectrophotometer. 1 H-NMR (300 MHz) spectra were measured on Bruker Avance instrument in d6-DMSO and TMS as internal standard.Reaction progress was monitored by thin layer chromatography (TLC).2-Hydroxybenzaldehye, 4-hydroxybenzaldehy, 1-bromopropane, 1-bromobutane and 1bromononane were supplied by Sigma-Aldrich, whereas different carboxylic acid hydrazides were prepared according to the literature methods [18,19].
General procedure for the synthesis of 1-3. An Erlenmeyer flask was charged with
1 mmol of respective hydrazide, 1 mmol of alkoxy benzaldehyde and absolute ethanol (15 mL).5-10 Drops of glacial acetic acid were added to catalyze the reaction.The reaction mixture was sonicated [20] for 2 hours at 55 ºC and progress of reaction was monitored through TLC.On completion, the reaction mixture was diluted with water and the precipitates formed were collected through filtration.The final products were purified through re-crystallization from ethanol (scheme 1).
Hirshfeld surface calculations.
Hirshfeld surfaces and their associated two-dimensional fingerprint plots [24-27] were calculated by using the CrystalExplorer17.5 program [28]. The normalized contact distance (dnorm) surface and the breakdown of the two-dimensional fingerprint plots were used for identifying and quantifying the intermolecular interactions that are relevant in the crystal lattice. The dnorm function is based on de (the distance from a point on the surface to the nearest nucleus external to the surface), di (the distance to the nearest nucleus internal to the surface) and the van der Waals (vdW) radii of the atoms. Graphical plots of the molecular Hirshfeld surfaces mapped over the dnorm function show a red-white-blue color scheme, where red highlights shorter contacts, white is used for contacts around the vdW separation, and blue is for longer contacts. 3D dnorm surfaces were mapped over a fixed color scale of -0.075 au (red) to 0.75 au (blue). The shape index was mapped in the color range -1.00 au (concave) to 1.00 au (convex), and the curvedness in the range -4.00 au (flat) to 0.01 au (singular). The 2D fingerprint plots were generated by using the translated 0.6-2.6 Å range and including reciprocal contacts. The 2D structures of the ligands were sketched using ChemDraw 12.0.
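For clarity, the normalized contact distance referred to above combines di and de with the van der Waals radii of the atoms involved. The following minimal sketch shows the standard form of this quantity as used for Hirshfeld surface colouring; the numerical radii and distances are illustrative, not values taken from the present structures.

```python
# Sketch of the normalized contact distance d_norm used for colouring Hirshfeld
# surfaces; values below zero (red regions) flag contacts shorter than the sum
# of the van der Waals radii. All numbers below are illustrative only.
def d_norm(d_i: float, d_e: float, r_i_vdw: float, r_e_vdw: float) -> float:
    """d_i / d_e: distances (Å) from a surface point to the nearest nucleus
    inside / outside the surface; r_*_vdw: van der Waals radii (Å) of those atoms."""
    return (d_i - r_i_vdw) / r_i_vdw + (d_e - r_e_vdw) / r_e_vdw

# Example: a point on an N-H...O contact (H inside the surface, O outside),
# using Bondi-type radii r_H ≈ 1.20 Å and r_O ≈ 1.52 Å.
print(d_norm(d_i=1.05, d_e=1.15, r_i_vdw=1.20, r_e_vdw=1.52))  # negative -> short contact
```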
Synthesis and characterization
The compounds under study were synthesized by a simple and fast procedure.
Equimolar quantities of the respective hydrazide and alkoxybenzaldehyde in ethanol as solvent, in the presence of acetic acid, react under sonication for 2 hours at 55 ºC to give the corresponding hydrazone derivatives (1-3). Re-crystallization from ethanol affords the products 1 and 2 as crystalline solids in good yields. For 3, only very small and twinned crystals with low X-ray scattering ability were obtained. In the IR spectra, the bands at 3233, 3235 and 3233 cm-1 for 1, 2 and 3, respectively, are assigned to the ν(N-H) stretching mode. The ν(C=O) stretching mode is observed at 1656 cm-1 in the IR spectrum of 1 (calculated 1710 cm-1) and as a strong infrared absorption at 1666 cm-1 for compound 2 (calculated 1730 cm-1). The ν(C=O) stretching vibration is located at 1658 cm-1 in the IR spectra of
Description of crystal structures of compounds 1-2.
Although the main difference between the compounds are simply the substituents and their substitution positions, the resulting conformations, crystal systems and space groups of both compounds are different.The crystallographic study showed that compound 1 crystallizes in the monoclinic form with P21/c space group, whereas 2 exists in the orthorhombic crystal system, space group Pca21.Both compounds accommodate four molecules per unit cell.In accordance with Table 2, a good agreement between experimental and computed geometrical parameters has been observed.It is important to mention that the calculations were carried out in gas phase where the crystal packing effects are completely ignored.
Notably, the larger discrepancies are observed for the C8-N1 and N1-N2 bonds, which errors are about 0.03 and 0.023 Å, respectively.Calculated and experimental angles and dihedral angles are in very good agreement indicating that both molecules undergo small changes due to intermolecular interactions, as shown in the calculated and experimental molecular structures.The geometrical parameters of the hydrogen bonding interactions for compounds 1 and 2 are displayed in Table 3.The molecule of 1 has a methoxy substituent on the arene ring (C1-C6) in the para-position (see Fig. 1a).In the crystal packing of 1, the molecules are connected with each other in the form of dimers through N1-H1•••O3 and C9-H9•••O3 interactions, where O3 atom from the carbonyl group acts as acceptor (Figure 1b).The former is the strongest interaction as reflected by the geometrical parameters reported in Table 3.The stabilization of the crystal structure is also supported by the presence of weak C-H•••O interactions involving the O3 from the carbonyl as acceptor and the H7A of the methoxy group located in the para-position in the benzohydrazide moiety (Figure 1c).In addition, the molecule of 2 has a propoxy group located at the orto position of the phenyl ring (C10-C15).This propoxy group is involved in 1a).
The structure is also stabilized by C2-H2 1b) involving the phenyl ring Cg2 (C10-C15).The phenyl rings are not involved in π•••π stacking interactions, as compared with the packing of 2 (see below).
The crystal packing of 2 exhibits an interesting structural pattern characterized by different structural motifs.The amide O1 and N1 atoms act as acceptors forming C7- hydrogen bonds (Figure 2b).
Hirshfeld surface analysis
Hirshfeld surface analyses have been carried out in order to get further insights into the packing motifs and the contributions of the main intermolecular interactions that are responsible for the crystal stabilization of compound 1.We have not performed Hirshfeld surface analysis of compound 2 due to the molecular disorder observed.Recently, Hirshfeld surface analysis for different hydrazide Schiff bases were evaluated for a better comprehension of their crystal packing [49].Figure 3a 3b), with the shortest (de + di) ≈ 3.0 Å.Furthermore, the shape of the wings and the sum of de and di are indicative of the relevance of the C-
Interactions energies and energy frameworks
In order to describe the intermolecular interactions in a whole-of-molecule approach, we have analyzed the separate electrostatic and dispersion contributions to the total interaction energy.The interactions were calculated using the B3LYP/6-31G(d,p) energy model implemented in CrystalExplorer17.5 program.In the calculation, the total energy is modelled as the sum of the electrostatic (Eele), polarization (Epol), dispersion (Edis) and exchange-repulsion (Erep) terms [30, 51].The main intermolecular interactions observed in the crystal structure of compound 1 are listed in Table 4 along with the respective interaction energies.
The highest total energy of -71.6 kJ/mol corresponds to a molecular pair (Motif 1, Figure 1b) formed by the bifurcated N1-H1•••O3 and C9-H9•••O3 hydrogen bonds, with N1-H1•••O3 being the strongest interaction, in accordance with the geometrical parameters reported in Tables 3 and 4. Figure 4 shows graphically the 3D topology of the interaction energies in the form of energy frameworks, which provide a view of the supramolecular assembly of the crystal through cylinders joining the centroids of molecular pairs, using red, green and blue color codes for the electrostatic, dispersion and total energy components, respectively. The radius of the cylinders is proportional to the magnitude of the interaction energies.
For compound 1, the partial sum of the dispersion energies (-149.9 kJ/mol) is greater in magnitude than that of the electrostatic ones (-62.9 kJ/mol), which evidences a clear dominance of the dispersion energy over the electrostatic component. This observation is in agreement with the larger diameter of the cylindrical tubes for the dispersion interactions in comparison with that of the electrostatic counterpart along the a-axis, as shown in Figure 4.
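In energy-framework calculations of this kind, the pairwise total is obtained as a scaled sum of the four components. The snippet below is a hedged sketch of that bookkeeping; the scale factors are the approximate published CE-B3LYP/6-31G(d,p) values and the component energies are placeholders, not the values tabulated for Motifs 1-3 in this work.

```python
# Sketch of a CrystalExplorer-style scaled total:
#   E_tot = k_ele*E_ele + k_pol*E_pol + k_dis*E_dis + k_rep*E_rep
# Scale factors below are assumed approximate CE-B3LYP values; energies are
# hypothetical placeholders in kJ/mol, not data from Table 4.
K = {"ele": 1.057, "pol": 0.740, "dis": 0.871, "rep": 0.618}

def scaled_total(e_ele: float, e_pol: float, e_dis: float, e_rep: float) -> float:
    return K["ele"] * e_ele + K["pol"] * e_pol + K["dis"] * e_dis + K["rep"] * e_rep

# Hypothetical components for an N-H...O bound dimer
print(f"E_tot = {scaled_total(-60.0, -15.0, -30.0, 45.0):.1f} kJ/mol")
```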
Compound 1
Figure 4. Energy frameworks along a-axis for compound 1, showing the electrostatic (left, red), dispersion (middle, green) and total interaction energy (right, blue).The energy scale factor of 90 kJ/mol was used with a cut-off value of 5 kJ/mol.
Theoretical study
The theoretical study performed in this work is mainly focused to the analysis of hydrogen bonds and C-H•••π interactions observed in the crystal structure of compounds 1 and 2.
These interactions were studied by using NCI plots and QTAIM analysis.These interactions play a crucial role in the biological properties observed for these compounds which were correlated with docking studies (see below).
The molecular electrostatic potential (MEP) surfaces of 1 and 2 are shown in Figure 5.
In both compounds, the most negative region is located at the O-atom of the carbonyl group, with MEP values of -194 and -189 kJ/mol for 1 and 2, respectively. The most positive values are observed at the H-atoms of the amide group. Therefore, the hydrogen bonding interaction between both groups is strongly favored electrostatically.
The therapeutic efficacy of the synthesized compounds against inflammation depends upon the inhibition of LOX. The substituted alkoxybenzylidene benzohydrazides (1 and 3) and the (alkoxybenzylidene)-2-phenylacetohydrazide (2) were evaluated against the soybean lipoxygenase enzyme using indomethacin as the standard drug (IC50 = 48.25 ± 1.71 µM) (Table 5). Compound 3, bearing the 4-nitro functionality, displayed better activity than the standard indomethacin (IC50 = 45.31 ± 2.11 µM) (Table 5), while compounds 1 and 2 showed less activity compared with indomethacin. The structure-activity relationship shows that the substituted alkoxybenzylidene benzohydrazides (1 and 3) inhibit the lipoxygenase enzyme to a greater extent than the (alkoxybenzylidene)-2-phenylacetohydrazide (2), which also indicates that activity is greater when both aryl rings are conjugated to the central hydrazide moiety (compounds 1 and 3) than in 2, where the conjugation is interrupted in the case of the phenylacetohydrazide.
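IC50 values such as those quoted above are typically obtained by fitting the percentage inhibition measured at several inhibitor concentrations to a dose-response (Hill) curve. The sketch below illustrates that fit with SciPy; the concentration-inhibition pairs are invented for illustration and do not reproduce the assay data behind Table 5.

```python
# Illustrative four-parameter logistic (Hill) fit to extract an IC50 from a
# LOX inhibition assay. The data points below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ic50, n):
    # % inhibition as a function of inhibitor concentration c (same units as ic50)
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** n)

conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0, 200.0])   # µM, hypothetical
inhib = np.array([12.0, 22.0, 38.0, 52.0, 68.0, 81.0])    # % inhibition, hypothetical

popt, _ = curve_fit(hill, conc, inhib, p0=[0.0, 100.0, 50.0, 1.0], maxfev=10000)
print(f"Fitted IC50 ≈ {popt[2]:.1f} µM (Hill slope {popt[3]:.2f})")
```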
Putative binding modes and comparison of binding interactions can further be analyzed by molecular docking studies.
Molecular docking Studies
The N-aroylhydrazone (NAH) moiety is known for its amide and imine functions. NAH compounds may exist as C=N double-bond stereoisomers (E/Z) and as syn/antiperiplanar conformers/rotamers about the amide CO-NH bond. It is evident experimentally from their crystal structures that all three compounds exist as E stereoisomers in their most stable antiperiplanar conformation. Therefore, the molecular docking studies in complex with the lipoxygenase enzyme were carried out using the antiperiplanar conformations of the E isomers.
The difference between the inhibitory activities of the analogues studied in this work led us to make structural comparisons in terms of the intermolecular interactions that mediate protein-ligand binding, their relative location in the active site, their size, shape, physicochemical properties, etc. In order to investigate the binding mode of the inhibitors and their interactions with the amino acid residues of lipoxygenase (PDB ID: 1IK3), a molecular docking study of the synthesized compounds was performed. The docking study further assisted in identifying the relative location of the co-crystallized inhibitors and the reference molecule in the protein architecture of lipoxygenase. Indomethacin was used as the reference drug in the biological screening; therefore, it was also docked with the enzyme to determine its binding interactions. The results are analyzed and discussed below.
the hydrazones under study are tabulated in Table 6. It has been observed that all the compounds fulfill the criteria to be considered drug-like molecules, as shown in Table 6. The Log P values of the compounds indicate good absorption of the hydrazones (1-3).
After analyzing the physicochemical properties, the compounds were subjected to docking analysis.
Lipoxygenase Structure
The first crystal structure of a LOX
Molecular docking and Binding Analysis
For the elucidation of the molecular basis of the mechanism of inhibition of the synthesized hydrazones (1-3), the compounds were computationally docked into the active site of soybean LOX. In this study, the protein structure (1IK3: 2.0 Å) was first retrieved from the Protein Data Bank (PDB). The protein was already bound to four ligands in its crystal structure, i.e. 13(S)-hydroperoxy-9(Z),11(E)-octadecadienoic acid; 13(R)-hydroperoxy- 7. As can be seen, the hydrazones 1-3 preferred to bind the enzyme at a position different from that of indomethacin, owing to the fact that the hydrazones differ in structure from indomethacin (Figure 9).
Compounds 1-3 have the potential to block the entry of the substrate by binding to amino acid residues lying near the pocket opening of the α-helical domain. A more bent conformation is adopted by 2 (red) (Figure 9) as compared to 1 (green) (Figure 9). The enzyme/inhibitor complexes are stabilized by hydrogen bonds in the hydrophilic region and by π•••π and vdW interactions in the hydrophobic region. It has been observed from the binding interactions that the hydrazones interact differently from indomethacin (Figure 9), owing to the different structures of compounds 1-3, bearing a hydrazone moiety, as compared to the carboxylic acid moiety of indomethacin, along with other structural differences and their positioning in the pocket architecture (Figure 9). The reason for the high inhibitory activity of compound 3 compared to the other analogs can be explained by the molecular docking studies (Figure 10). Compound 3 showed pronounced hydrogen bond interactions as compared to compounds 1, 2 and indomethacin. The oxygen of the butyloxy chain forms a hydrogen bond with His548 at a distance of 2.52 Å, while the oxygen of the hydrazone moiety forms a hydrogen bond with the main-chain amide of Phe162.
The greater binding affinity of this hydrazone is mainly attributed to the presence of a nitro group in the phenyl ring of the hydrazide moiety. The nitro group plays an important role by forming four hydrogen bonds with the nearby polar residues. Both oxygens of the nitro functionality form hydrogen bonds with Arg200 at distances of 2.67 Å and 2.85 Å.
In addition, the NH of Arg159 forms a hydrogen bond with an O-atom of the nitro group. The interactions of the positive arginine residue with the negative nitro functionality can also be considered salt bridge interactions. The fourth hydrogen bond is formed between an oxygen of the nitro functionality and Ser147. All these hydrogen bonds enhance the binding of this ligand to the lipoxygenase active site in comparison to the other ligands. The alkoxy aryl hydrazones 1 and 3 interacted more with polar amino acid residues (Figure 10), whereas hydrazone 2 showed more interaction with nonpolar amino acid residues (Figure 11). The latter compound showed weaker binding affinity and weaker interactions and is therefore less active than 1 and 3.
The docking results showed that compound 3 forms stronger binding interactions with the target, followed by compounds 1 and 2. The molecular docking results are in good agreement with the bioactivity data against lipoxygenase: compound 3 exhibited higher potency than the other ligands because it participates in strong hydrogen bonding interactions with the aforementioned amino acid residues. All the synthesized substituted alkoxybenzylidene benzohydrazides and phenylacetohydrazides were evaluated against the soybean lipoxygenase enzyme using indomethacin as a standard drug and were found to be inhibitors of soybean lipoxygenase. The structure-activity relationship shows that the substituted alkoxybenzylidene benzohydrazides inhibit the lipoxygenase enzyme to a greater extent than the alkoxybenzylidene phenylacetohydrazide. Putative binding modes and binding interactions of the protein-ligand complexes were also analyzed by molecular docking studies in order to rationalize the theoretical and experimental studies.
[11]. The hydrogen bonding (HB) interactions are extremely important in biological systems, as can be seen in nucleic acids, where the assembly is controlled by a combination of H-bonds and π•••π stacking interactions [12]. Actually, it is well known that relatively weak intermolecular interactions such as C-H•••X (X = halogens, O, S, N) hydrogen bonds play a crucial role in the crystal packing of different kinds of molecules [13-17].
and structure refinement. Suitable single crystals of compounds 1-2 were selected for X-ray analyses and diffraction data were collected on a Bruker Kappa APEX-II CCD detector with MoKα radiation at 100 K. A semi-empirical absorption correction was applied using the SADABS program [21]. The SHELX program was used to solve all structures by direct methods [22]. Positions and anisotropic parameters of all non-H atoms were refined on F 2 using the full-matrix least-squares technique. The H-atoms were added at geometrically calculated positions and refined using the riding model [23]. For compound 2, the terminal six C-atoms of the nonyl group (C16-C24) are disordered over three sets of sites with an occupancy ratio of 0.46(2):0.376(17):0.17(2). All the disordered atoms were refined anisotropically with restrained bond distances and bond angles. The atoms in each part were refined to have similar thermal parameters, and the anisotropic displacement parameters of corresponding atoms in the individual parts were made equal to each other.
The IR spectra of compounds 1-3 are shown in Figure S1, ESI. The main features associated with the hydrazone moiety, -C(O)-NH-N=C-, will be discussed.
The δ(N-H) bending mode of the amide group generally appears as a very intense absorption in the 1600-1500 cm-1 region. The IR spectra show medium intensity absorptions at around 1535, 1559 and 1561 cm-1 for 1, 2 and 3, respectively, in agreement with related compounds [48]. The weak bands located at 1124 cm-1 for 1, 1147 cm-1 for 2 and 1139 cm-1 for 3 are assigned to the ν(N-N) stretching mode. Finally, the band corresponding to the ν(C=N) stretching mode in Schiff bases is generally observed in the 1650-1600 cm-1 range [3, 17]. For the studied compounds, we have assigned this mode to the absorptions observed at 1648, 1643 and 1651 cm-1 for 1, 2 and 3, respectively. The electronic spectra of compounds 1-3 are shown in Figure S2, ESI. The absorption bands at 327, 307 and 332 nm for 1, 2 and 3, respectively, are assigned to HOMO-LUMO electronic transitions of π → π* nature.
The 1H NMR (300 MHz, d6-DMSO) data for compounds 1-3 show singlet signals in the δ = 8.0-8.1 ppm range, corresponding to the N-H protons of the amide group. The singlet at δ = 8.2 ppm observed in the spectra of all compounds is assigned to the proton of the azomethine moiety. The protons of the aromatic rings are observed in the region δ = 8.38-6.80 ppm.
Figure 1a and 2a show a view of the molecular structure of compounds 1 and 2, respectively. Selected X-ray bond lengths and angles, together with the values computed at the B3LYP/6-311G(d,p) approximation, are shown in Table 2. The optimized molecular structures of both compounds are shown in Figures S3-S4, ESI. In the crystal structure of 1, the 4-methoxybenzene moiety A (C1-C7/O1), the linker moiety 1-methyl-2-methylenehydrazine B (C8/N1/N2/C9), phenyl ring C (C10-C15) and propanoxy group
Figure 2. a) ORTEP diagram of 2 with ellipsoids at the 30% probability level and the numbering scheme of the non-H atoms. The molecular structure reveals disorder in the alkyl chain; b) structural motif showing a combination of C-H•••O and C-H•••N hydrogen bonds (green dashed lines) and C-H•••π (blue dashed lines) interactions; c) formation of N-H•••O hydrogen bonds and C-H•••π interactions. In Fig. 2b and 2c, only the major-occupancy disordered atoms are shown (see text).
Figure 3. (a) Hirshfeld surfaces of 1 mapped over the dnorm property in two orientations; the second molecule is rotated 180º around the vertical axis of the plot. The labels are discussed in the main text. (b) Full and decomposed two-dimensional fingerprint plots for compound 1 showing the percentage contributions to the total Hirshfeld surface area of the molecules.
This motif is further supported by the presence of C-H•••π interactions involving H16A of the methylene group and the Cg2 centroid. The dispersion (58.96%) and electrostatic (41.08%) energies contribute towards the stabilization of this molecular pair. Motif 2 is established through an intermolecular C2-H2•••Cg2 interaction resulting in an overall stabilization energy of -41.4 kJ/mol, with a 72.13% contribution from the dispersion component towards stabilization. Motif 3 is stabilized by the presence of weak intermolecular C-H•••O interactions involving H7A of the methoxy group and the O3 atom (Figure 1d) [Etot = -28.4 kJ/mol, with contributions of 64.5% dispersion energy and 35.49% electrostatic energy].
Figure 5. MEP surfaces of compounds 1 (a) and 2 (b) plotted onto the 0.001 a.u. isosurface. The values at the selected points on the surface are given in kJ/mol.
Figure 7. Left: NCI surface of a dimer of compound 1. The gradient cut-off is 0.35 a.u. and the color scale is -0.4 < ρ < +0.4 a.u. Right: QTAIM distribution of bond and ring critical points (red and yellow spheres, respectively) and bond paths of the H-bonded fragment, calculated at the B3LYP-D3/def2-TZVP level of theory.
Figure 8. NCI plots of the two dimers of compound 2. The gradient cut-off is 0.35 a.u.
[57] from Soybean, described by Boyington et al., established the molecular framework common to both the plant and animal enzymes. The crystal structure of lipoxygenase bears two major domains: an amino-terminal β-barrel, now known as a PLAT (Polycystin-1, Lipoxygenase, Alpha-Toxin) domain, and a much larger α-helical domain that houses the catalytic iron [58]. The plant enzymes are significantly larger than the animal enzymes (∼900 vs. ∼650 amino acids, respectively), and the smaller animal enzymes are simply trimmed down by the omission of several plant-specific loop regions. Despite the differences, a large helical core, along with the relative placements of most of the ∼17 helices that comprise it, is conserved. At the heart of the core is the catalytic iron, positioned by invariant histidine side chains contributed by the two longest helices in the common core, as well as by the main-chain carboxyl at the C-terminus provided by an invariant Ile. An unusual structural feature of helix α8, a unique insertion which gives it a distinct curvature [59], has been observed in all LOX structures to date. Some inhibitors have been reported to bind either directly or indirectly to the amino acid residues adjacent to the cofactor [60-62].
Figure 9. An overlay of the docked orientations of the most preferred conformations of compounds 1, 2, 3 and indomethacin in the active pocket of lipoxygenase (cyan) shown in ribbons (left). Mesh surface view of the ligand-enzyme complexes (right).
Figure 9 illustrates the relative positioning of hydrazones 1-3 and indomethacin in their minimal-energy conformations (out of 20 different conformations for each compound) in the active site of lipoxygenase. The corresponding binding energies of the ligands in their most preferred conformations are shown in Table 7.
Figure 10. Docking pose of ligands in 3D and 2D display; A) compound 1 (green), B) compound 3 (yellow). Ligands are shown in stick mode while the receptor is shown in cyan colored ribbons and key residues are shown in stick mode. 2D interaction diagrams of compounds 1 and 3 are represented in C and D, respectively.
ligands. In addition to the hydrogen bonding interactions, the molecule interacted with the binding site via π•••π stacking, π-alkyl and alkyl-alkyl interactions. The 4-nitrophenyl moiety of the benzohydrazide analog interacts with Phe161, displaying a T-shaped π•••π interaction at a distance of 4.92 Å. Both phenyl rings of compound 3 interact with Val539 (5.04 Å) and Val144 (5.28 Å), showing π-alkyl interactions. However, in 1, the phenyl ring bearing the 4-methoxy functionality forms a cation•••π interaction with Lys545 at a distance of 4.32 Å, while the phenyl ring bearing the propyloxy chain forms T-shaped π•••π interactions. The protein-ligand complex was further stabilized by a weaker hydrogen bond between Lys545 and the O-atom of the hydrazone. In addition, some weaker interactions were observed between the propyloxy chain of compound 1 and Leu187 and Val539, forming π-alkyl interactions. The analysis showed that compound 3 forms stronger hydrogen bonding interactions with the target than compound 1 (see Fig. 10).
Figure 11. Docking pose of ligand in 3D and 2D display; (left) compound 2 (red) shown in stick mode while the receptor is shown in cyan colored ribbons and key residues are shown in stick mode. (Right) 2D interaction diagram of compound 2.
Three new hydrazide-based Schiff bases (1-3) have been synthesized and structurally characterized. The crystal structures of compounds 1 and 2 were solved by X-ray diffraction methods. In this work, we have performed a detailed quantitative analysis of the intermolecular interactions present in the crystal structures of both compounds. The crystal packing of 1 and 2 is stabilized by strong N-H•••O hydrogen bonds. In addition, the crystal structures of the compounds show several relatively weak interactions such as C-H•••O and C-H•••N hydrogen bonds and C-H•••π interactions. In the case of the structure of 2, weak π•••π stacking interactions were observed. This study clearly shows that the energetic distribution in the crystal packing is anisotropic, as is clearly evident from the energy framework diagrams. The analysis of the energy associated with the intermolecular interactions has also been conducted by using DFT calculations and corroborated by NCI plots and the QTAIM approach, confirming their importance in the supramolecular assembly of both compounds. In brief, the substituents and the substitution positions play an important role in the packing mode, intermolecular hydrogen bonding and structural conformation of the studied compounds.
Table 1: Crystallographic data and details of refinements for compounds (
According to Veber, the number of rotatable bonds (NOR) of a drug-like molecule should be fewer than or equal to 10 [55]. Molecules which violate more than one of these criteria may have problems with their bioavailability. Detailed results of the druglikeness of
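A minimal sketch of how such rule-based druglikeness screening can be expressed is given below; the thresholds encode the standard Lipinski and Veber criteria, and the property values passed in are hypothetical, not the data of Table 6.

```python
# Hedged sketch of rule-based druglikeness screening (Lipinski + Veber criteria).
# Threshold values are the standard literature cut-offs; the example property
# values are hypothetical and do not reproduce Table 6.
def druglikeness_violations(mw, logp, hbd, hba, rot_bonds, tpsa):
    rules = {
        "MW <= 500": mw <= 500,
        "LogP <= 5": logp <= 5,
        "H-bond donors <= 5": hbd <= 5,
        "H-bond acceptors <= 10": hba <= 10,
        "Rotatable bonds <= 10 (Veber)": rot_bonds <= 10,
        "TPSA <= 140 (Veber)": tpsa <= 140,
    }
    return [name for name, passed in rules.items() if not passed]

violations = druglikeness_violations(mw=312.4, logp=3.1, hbd=1, hba=4,
                                     rot_bonds=7, tpsa=50.0)
print("drug-like" if len(violations) <= 1 else f"violations: {violations}")
```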
Table 7. Inhibitory activity against LOX and binding affinity of all ligands. | 2020-12-10T09:04:52.423Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "d4cdb64ae0c0920b8dee7a1647742c06afbccb34",
"oa_license": "CCBYNC",
"oa_url": "https://ri.conicet.gov.ar/bitstream/11336/148303/5/CONICET_Digital_Nro.ba3fdd46-62ee-43e0-8e37-2275d8eecd78_D.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4b89df77a20ebc332bd95d3abcb2c38ca38cab33",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
54746311 | pes2o/s2orc | v3-fos-license | Stochastic dressed wavefunction: a numerically exact solver for bosonic impurity model dynamics within wide time interval
In the dynamics of driven impurity models, there is a fundamental asymmetry between the processes of emission and absorption of environment excitations: most of the emitted excitations are rapidly and irreversibly scattered away, and only a small amount of them is reabsorbed back. We propose to use a stochastic simulation of the irreversible quantum emission processes in real-time dynamics, while taking into account the reabsorbed virtual excitations by the bath discretization. The resulting method delivers a fast convergence with respect to the number of bath sites, on a wide time interval, without the sign problem.
I. INTRODUCTION
The quantum impurity model has always been a cornerstone of condensed matter and quantum optics. Introduced in order to describe the interaction of magnetic impurities with a metallic host [1], this model is used to describe low-temperature properties of single-electron solid-state devices [2,3], tunnelling spectroscopy experiments [4,5], the mobility of defects [6,7] and of interstitials [8][9][10] in solids. In the fields of quantum optics and quantum information processing, a driven impurity model in a bosonic environment is often called an open quantum system. It is used to describe two-level atoms in optical fibers [11], Cooper pair boxes coupled to an electromagnetic environment [12][13][14][15][16] and solid-state qubits [17]. In physical chemistry the quantum impurity model is employed in the theoretical analysis of electron transfer processes between donor and acceptor molecules [18,19].
Lately there has been renewed interest in numerically exact solvers of the impurity model in the situation when its coupling to the bath is not small. Initially it was connected to the development of dynamical mean-field theory (DMFT) calculations [20][21][22][23]. Within DMFT and its cluster extensions [24], lattice models for strongly correlated fermions are mapped onto quantum impurity problems which are embedded into an environment whose spectral properties are determined self-consistently. These equilibrium fermionic problems required solvers of the Anderson impurity working in the imaginary-time Matsubara domain. A continuous-time quantum Monte Carlo (CT-QMC) family of algorithms [25] was constructed to deliver results which are free of any systematic errors and obey a reasonably small stochastic noise. Experiments with ultracold atomic systems have driven the efforts to construct impurity solvers for real-time dynamics away from equilibrium [26][27][28]. In this case, both fermionic and bosonic systems are of importance. For bosonic ones, an additional interest is related to cavity-QED and similar problems, where one deals with a (driven) two-level system strongly coupled to phonons.
A generic problem of real-time impurity solvers is that the computational complexity scales exponentially with the increasing time argument. The physical origin of the problem is that as the time passes, the quantum impurity scatters environmental excitations with a (roughly) constant rate. As a consequence, the number of mutually entangled excitations increases at least linearly with time, and thus the dimension of the relevant entangled subspace of the total Hilbert space increases exponentially. In different simulation techniques, this basic issue manifests itself in distinct ways. In the basis truncation methods, we need to include an exponentially large number of basis elements as the simulation time is increased. The density matrix renormalization group (DMRG) [29] and numerical renormalization group (NRG) [30] methods also entail the truncation of the Hilbert space, and this limits the range of parameters where results of sufficient accuracy can be obtained. In the quantum Monte Carlo (QMC) simulation techniques [24,[31][32][33], the complexity comes out as the sign problem due to the oscillating phase factors of trajectories (diagrams). The quasi-adiabatic path integral (QUAPI) approach [34][35][36][37] has convergence problems at low temperatures and when the environment memory is long [38,39]. The hierarchical equations-of-motion (HEOM) method [40][41][42] employs a Matsubara expansion for the bath density matrix. HEOM is accurate at high temperatures and for near-Debye spectral densities [38], but displays exponential complexity as we move outside these cases. The multilayer multi-configuration time-dependent Hartree (ML-MCTDH) approach [43][44][45] has problems in the strongly correlated regimes [38,46]. Probably the most promising of the existing real-time solvers is the so-called inchworm QMC algorithm [38,47,48], in which the Keldysh contour is split into a number of intervals and the diagrams are hierarchically summed up on them. The method alleviates the sign problem to a large degree, but is of high technical complexity and suffers from a fast growth of memory requirements as the time scale increases.
In this paper we propose a technically simple and physically transparent real-time bosonic impurity solver, which is free from the sign problem and does not show signs of an exponential slow-down for a number of benchmark problems. In our approach, virtual bath excitations and really emitted bosons are treated in different ways: real (observable) excitations are accounted for within a sign-free QMC (stochastic) procedure, whereas virtual ones are described by the ED treatment. With an increase of the time argument, only the number of real excitations grows, which allows one to escape an exponential increase of the Fock space for the ED part.
In section II we introduce the general impurity model in bosonic bath. Then in section II A we recall the Keldysh contour path integral formalism and discuss the physical interpretation of influence functional which describes the effect of the bath on the impurity. Using the acquired intuition, in section II B we identify the major factors leading to the exponential complexity of real-time quantum simulation. We formulate the algorithm enabling us to alleviate these factors in II C and II D. The results of test calculations for the spin-boson model are presented in section III. Finally, we conclude in section IV.
II. DESCRIPTION OF THE METHOD
In this section, we present our approach to the simulation of open quantum system dynamics. We consider the following impurity system Hamiltonian where H i and H b are the Hamiltonians of the impurity and of the bath, respectively, and V is a systemenvironment interaction. The environment is supposed to have a quadratic Hamiltonian with the bilinear interaction where s is a certain impurity operator, and b is the bath In our representation, the frequency dependence of the density-of-states is transfered to the coupling coefficient c (ω). We are interested in the calculation of the timedependent impurity observable mean values: Here ρ 0 is the initial state of the total system. The trace operation Tr {·} is taken over all states of the full system. Let us make the conventional assumption that the initial state is factorized, where ψ i (0) is arbitrary state in the impurity's Hilbert space, and ρ b (0) is assumed to be a Gaussian bath state with certain mode occupations Here, Tr b {·} denotes the trace over the bath degrees of freedom.
A. Influence functional and its physical interpretation
In order to understand the physical structure of the driven impurity problem, it will be helpfull to express the observable mean value Eq. (5) in terms of the Keldysh functional integral [49], and employ the notion of influence functional of the environment [50,51].
Let us consider the following general real-time quantum problem where O is the impurity observable. With the choice we obtain the problem Eq. (5) we are aiming at. However, in order to derive our method, we also will need to consider an auxiliary problem with The Keldysh contour technique allows one to map the quantum problem (8) onto the functional integral [49] over the configurational space of the system, Fig . 1: Here, q + (τ ), q − (τ ) are the configurational variables of the system on the forward and on the backward banches of the contour. S i [q + , q − ] is the action functional of the impurity. I [q + , q − ] is the influence functional of the bath [50,51], Here the observable O is inserted. Then the system evolves backwards in time (the upper branch whose quantities are labeled by subscript "-") up to the initial time τ = 0. Here is a forward/backward-branch path integral representation of the impurity operator s. The 2-by-2 matrix K (τ − τ ) is the Keldysh correlation function of the bath, The contour ordering C places the operators (as functions of contour parameter) in the descending order, from left to right. The contour order is defined as For the usual Keldysh contour with factorized inital condition, Eqs. (9) -(10), we have the following Keldysh correlation function: Each of these terms has distinct physical interpretation, as will become evident below. The first term describes the effect of the virtual (unobservable) bath excitations. The second term describes the irreversible spontaneous emission of observable excitations, where the bath memory function is The last term represents the effects of the quantum excitations of bath due to a finite initial occupation n (ω) of the frequency modes. Here the excitation noise memory function is In most practical situations, n (ω) is the finite temperature Bose-Einstein distribution, The physical interpretation of K exc (τ − τ ) is evident from the fact that this term vanishes when the bath is initially in the vacuum state.
In order to illustrate the physical meaning of K virt (τ − τ ) and K emit (τ − τ ), let us assume that the bath is in vacuum state (there is no K exc (τ − τ )). We perform the perturbative expansion of the average Eq. (13) with respect to K virt (τ − τ ) and K emit (τ − τ ). This expansion is represented by a series of diagrams, where each factor K emit (τ − τ ) is represented by a bold line crossing different branches, and each factor −K virt (τ − τ ) is represented by a dashed line crossing the same branch, Fig. 2.
The whole perturbation expansion consists of the diagrams obtained by all the possible insertions of bold and dashed lines, at arbitrary time points. Then, we observe the following. The time moment of measurement (where the impurity observable O is placed) is the turning point of the Keldysh contour, Fig. 1. Therefore, all the cross-branch lines (with factors K emit (τ − τ )) correspond to the excitations which exist at the measurement time, and make a contribution to it, i.e. they are observable, Fig. 2, a). Whereas all the intrabranch lines (with factors −K virt (τ − τ )) represent the excitations which are created and annihilated before the measurement time moment, i.e. they represent the unobservable virtual excitations, Fig. 2, b). According to the aforementioned observation, we divide all the diagrams into two classes, Fig. 3. The first class, containing the diagrams with only the cross-branch lines, Fig. 3, a), we call the "cross-branch diagrams". They describe the effect of (unread) measurement at time t of the irreversibly emitted bath-excitation quantum field. The second class of diagrams, containing at least one virtual intrabranch line, Fig. 3, b), which we call the "intra-branch diagrams", describe the dynamical effect of the unobservable cloud of virtual excitations, which always surrounds any impurity system.
For the Keldysh contour in which the bath evolves from vacuum to vacuum, Eqs. (11) and (12), the Keldysh correlation function in the influence functional Eq. (13) consists of only the virtual part, K (τ − τ ) = K virt (τ − τ ), which again supports our physical interpretation: if there is no emitted field and no bath excitations, the virtual excitations are still present.
B. An idea of how to eliminate the complexity of real-time simulation Suppose we have an impurity, and we want to compute its real-time evolution. The source of the complexity of this problem lies in the fact that the impurity becomes entangled to the bath excitations, and as the time goes on, the number of entangled excitations grows (in most cases) asymptotically linearly with time. As a consequence, the dimension of the entangled Hilbert subspace grows combinatorially (exponentially) with time. In the previous section, we have identified the three parts of the influence functional, which correspond to the three types of processes: the virtual processes, the irreversible emission, and the excitations by the bath. Let us analyze the contribution of each of these parts to the complexity of real-time simulation, Fig. 4. The last two processes, the irreversibe emission and the excitations by the bath, lead to the growth of the number of entangled excitations. However, the first process, the creation/annihilation of virtual excitations, is expected to reach a stationary number of excitations, so that this is not the factor of complexity.
Then, were it possible to simulate efficiently and in a numerically exact way the emission and the excitation processes, the complexity of the real-time simulation would be greatly reduced. Luckily, we have found at least one way of doing it: the stochastic wavefunction method [52][53][54][55].
C. The stochastic dressed wavefunction method
In the spirit of stochastic wavefunction method [52][53][54][55], we stochastically unavel the emission and excitation parts of the influence functional by applying the Hubbard-Stratonovich transform. Denoting s (τ ) = Figure 4. Suppose that we couple our impurity to the bath at the time moment t = 0. Then, the following three processes start to develop. First, the impurity begin to scatter and to entangle to the bath excitations. Second, the driven impurity begins to emit excitations, which also remain entangled to the impurity. Evidently, the number of excitations involved into these processes will grow without bound as the time passes. However the process of the third kind, the emission and the ultimate absorption of virtual excitations, is expected to saturate on a certain level, so that only a limited amount of virtual excitations is present.
[−s + (τ ) , s − (τ )] T , we have for the emissive part: where the following c-number stochastic field was introduced and ξ (ω) is a complex white noise, For the excitation part of influence functional, we have: where the following c-number stochastic field was introduced and η (ω) is another (independent from ξ (ω)) complex white noise, Now, if we substitute the stochastically unraveled parts Eq. (29) and (32) into the expression for the mean value of the impurity observable Eq. (13), we obtain where Now we are almost done. In order to find the numerical algorithm which follows from Eqs.
In order to interpret the fourth line of Eq. (36), we recall the discussion at the end of section II A: the influence functional with K virt (τ − τ ) corresponds to the full impurity-bath quantum problem Eq. (8) with the vacuum-vacuum boundary conditions for the bath, Eqs. (11)-(12). Therefore, the stochastically unraveled average Eq. (35) can be written in the operator language as where ψ dress (t) is the impurity wavefunction "dressed" by virtual excitations; here T is the usual time ordering, and the stochastic Hamiltonian H stoch (τ ) is The dressed wavefunction ψ dress (t) is the solution of the non-Markovian stochastic Schrödinger equation: with initial conditions Observe that formally we still have the full quantum problem for the bath. However, since most of the quantum entanglement is eliminated by performing the averaging over the classical noises ξ and η, and only the projection onto the bath vacuum is required, we expect much faster convergence when applying numerical discretizations to Eq. (41).
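Since the display equations of this section were lost in extraction, the following is only a schematic reconstruction of the step being described, namely the Gaussian (Hubbard-Stratonovich) identity that underlies the stochastic unraveling; the precise prefactors, matrix structure and sign conventions of Eqs. (29)-(41) should be taken from the original paper.

```latex
% Schematic only -- not the paper's exact equations. For a symmetric kernel K
% and a classical Gaussian noise \nu, the unraveling rests on the identity
\exp\!\left(\tfrac{1}{2}\iint d\tau\, d\tau'\; s(\tau)\,K(\tau,\tau')\,s(\tau')\right)
 \;=\; \left\langle \exp\!\left(\int d\tau\; \nu(\tau)\,s(\tau)\right)\right\rangle_{\nu},
 \qquad \big\langle \nu(\tau)\,\nu(\tau')\big\rangle_{\nu} \;=\; K(\tau,\tau').
```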
D. Numerical solution of the stochastic dressed wavefunction equation
The dressed wavefunction ψ dress (t) is calculated in a truncated Fock space by keeping all the relevant states of the impurity and all the bath states with at most N excitations, for a certain fixed N. Note that when a truncation of the Hilbert space is applied, the norm of the reduced impurity density matrix is not conserved: We compensate for this by normalizing the computed observable averages:
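To make the bookkeeping concrete, here is a minimal, self-contained sketch of the workflow described above for a generic two-level system coupled to a single truncated bosonic mode and driven by a classical Gaussian noise. It is not the authors' implementation: the Hamiltonian, coupling strengths and noise statistics are placeholder assumptions, and only the structure (truncated Fock space, trajectory averaging, normalization of the averaged observable) mirrors the text.

```python
# Illustrative sketch (not the authors' code): propagate a two-level system
# coupled to ONE bosonic mode truncated at n <= N_MAX, with a classical noise
# drive standing in for the unraveled emission/excitation terms. All parameter
# values and the noise statistics are placeholder assumptions.
import numpy as np
from scipy.linalg import expm

N_MAX, DT, N_STEPS, N_TRAJ = 3, 0.05, 200, 200
rng = np.random.default_rng(0)

sz = np.diag([1.0, -1.0]); sx = np.array([[0.0, 1.0], [1.0, 0.0]])
a = np.diag(np.sqrt(np.arange(1, N_MAX + 1)), k=1)          # mode annihilation op.
Iq, Ib = np.eye(2), np.eye(N_MAX + 1)
kron = np.kron
H0 = 0.5 * kron(sz, Ib) + 1.0 * kron(Iq, a.conj().T @ a) \
     + 0.1 * kron(sx, a + a.conj().T)                        # qubit + mode + coupling

def trajectory():
    """One noise realization, propagated with stepwise matrix exponentials."""
    psi = np.zeros(2 * (N_MAX + 1), dtype=complex)
    psi[0] = 1.0                                             # |up> x |vacuum>
    zs, norms = [], []
    for _ in range(N_STEPS):
        xi = (rng.normal() + 1j * rng.normal()) / np.sqrt(2 * DT)  # discretized noise
        H = H0 + 0.2 * xi * kron(sx, Ib)                     # stochastic drive term
        psi = expm(-1j * H * DT) @ psi                       # non-unitary in general
        norms.append(np.vdot(psi, psi).real)                 # truncated-space norm
        zs.append(np.vdot(psi, kron(sz, Ib) @ psi).real)
    return np.array(zs), np.array(norms)

num = np.zeros(N_STEPS); den = np.zeros(N_STEPS)
for _ in range(N_TRAJ):
    z, w = trajectory()
    num += z; den += w
# Normalize the trajectory-averaged observable by the averaged norm, in the
# spirit of the normalization of computed observable averages described above.
print("normalized <sigma_z>(t_final) =", num[-1] / den[-1])
```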
III. RESULTS
We test the proposed stochastic dressed wavefunction approach on the driven spin-boson model, coupled through the spin impurity operator to a bath with a semicircle density of states, which corresponds to a chain of bosonic sites with on-site energy ε 0 and hopping h between the sites. For the calculations, we use the following values of the bath parameters: ε 0 = 1, h = 0.05. The driving field is defined as We consider two cases, with the bath initially at zero temperature. The first case is when the impurity energy level is placed at the center of the bath's energy band: We calculated the occupation of the equivalent qubit. In Fig. 5, we present the convergence of the stochastic dressed wavefunction results with N = 0 (only the impurity Hilbert space, no virtual excitations), N = 1, and N = 2, to the exact results in the truncated Fock space. The second case we considered is when the impurity energy level is placed at the edge of the bath energy band: In Fig. 6 we show the convergence of the results for the occupation of the equivalent qubit. In both cases the virtual excitations in ψ dress (t) were taken into account by including the first 20 sites of the bosonic chain. From the presented results we see that convergence on the whole time interval is achieved with only two virtual excitations, whereas ED required including states with 8 excitations of the bath. This result confirms our idea that the stochastic dressed wavefunction method is capable of alleviating the exponential complexity of the real-time simulation.
Our approach is related to the conventional non-Markovian quantum state diffusion (NMQSD) methods [52][53][54][55], but there is an important difference between them. NMQSD includes only the impurity degrees of freedom, and the influence of the virtual cloud is represented through the functional derivative of the stochastic trajectory with respect to the noise. Since the functional derivative is a computationally complex object, a hierarchy of approximations has been developed [55]. However, it is difficult to judge a priori how fast such a hierarchy would converge in the strong coupling regime. At the same time, the stochastic dressed wavefunction method takes into account the fact that the physical state of any open system is not restricted to the open system's degrees of freedom, but is surrounded by a cloud of virtual excitations. This way we obtain a clear physical picture of the major convergence factor: the dimension of the part of the virtual cloud which is entangled with the impurity and which is statistically significant.
IV. CONCLUSION
In this work we present a novel numerically-exact simulation approach for the dynamics of quantum impurity models: the stochastic dressed wavefunction method. In this method, all the observable effects of the environment (irreversibly emitted excitations and the excitations due to finite occupation of the bath modes) are calculated by a Monte Carlo procedure without the sign problem. At the same time the unobservable virtual excitations are calculated by an exact diagonalization. We illustrate our method by providing the results of test calculations for the driven spin-boson model: only two virtual excitations are enough to achieve the uniform convergence on a large time interval. | 2017-12-12T13:21:33.000Z | 2017-12-12T00:00:00.000 | {
"year": 2017,
"sha1": "915560af3e61d253cef5fc2a079d25a7b5bc9048",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "915560af3e61d253cef5fc2a079d25a7b5bc9048",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255614886 | pes2o/s2orc | v3-fos-license | Determining the optimal number of yard trucks in smaller container terminals
In 2017, smaller container ports handled approximately 22% of total containerized cargo. Nowadays liner operators are calling at those ports with larger ships and demanding a fast and efficient turnaround of the ships in port. This is possible only if the berth has the right capacities, is working properly and achieves a good productivity level. The productivity level does not depend only on the quay crane capacities but also on the transfer mechanisation, the main function of which is to serve quay cranes on one side and yard cranes on the other side. Choosing the correct type and number of vehicles to transfer container units from berth to yard has become a very important decision in every container terminal. In small container terminals yard trucks represent the most common type of transfer mechanization. That is why this research is based on the allocation of the right number of yard trucks to quay cranes in order to assure better productivity levels in the berth and yard subsystems. For this purpose, a discrete-event simulation modelling approach is used. The approach is applied to a hypothetical small container terminal, which includes operations on the berth-yard-berth relation.
Introduction
A seaport container terminal is a transit point for containerized goods between sea vessels and land transportation modes, such as trucks and trains [26]. Although seaport container terminals differ considerably in size, function, and geometric layout, they principally consist of the same subsystems [19]. These are the berth, yard and gate subsystems [2,17]. The coordination among the three subsystems, the assignment of handling resources to the necessary activities and the scheduling of the different movement tasks are the main aspects to be managed and optimized in a seaport container terminal [26]. In container terminals, there are three types of handling equipment involved in the loading/discharging process: quay cranes (QCs), transfer mechanisation (the choice is between yard trucks (YTs), straddle carriers (SCs), shuttle carriers (ShCs) and automated guided vehicles (AGVs)), and yard cranes (YCs) (the choice is between SCs, rubber-tired gantry cranes (RTGs), rail-mounted gantry cranes (RMGs), and reach stackers) [6,13,22,41]. QCs handle quayside operations, YCs are deployed in yard stacking operations, while transfer mechanization is used for transportation between the quayside and the yard. As the berth subsystem is the one that actually defines the size of the ship that can be accepted at the terminal and mostly affects the turnaround time of the ship in the port, the focus is first placed on the operational time and utilisation of that area, while the operations in the yard area have an indirect influence on the time that the ship will spend in the port [34]. According to Carlo et al. [8], transport operations in both subsystems should be designed so that bottlenecks in container terminals are avoided. In this context a correct choice of transfer mechanisation is crucial. A very commonly used type of equipment in small ports, and the most common one in the small northern Adriatic (NA) ports, is the YT. YTs are manned vehicles that pull chassis carrying the containers. They represent a technologically modest means of container transport, as they are unable to lift containers; thus a crane is required for loading and unloading operations. This means that if there is not good synchronization between QCs and YTs and between YTs and YCs, congestion can occur. The problem of smaller container terminals is that, on the one hand, they have space problems and, on the other, they have already established workflows for container handling. It is therefore financially more advantageous for a container terminal already in operation to purchase additional YTs than to restructure the terminal and purchase handling equipment of a different type. According to He et al. [14], the efficiency of container terminals depends to a large extent on the effectiveness of the allocation of terminal resources in the various phases of container handling.
However, according to Meisel [24] YTs are economically attractive and can provide high flexibility regarding the workload of a terminal. In our research, simulation processes will therefore be based on different numbers of YTs serving QCs. The scope of this research is to define the correct number of YTs that will reduce the turnaround time of the ship in a small container port. To achieve that, numerous computer simulations using Flexsim CT 3.3 software were performed on a hypothetical container terminal (CT). During the simulations, terminal characteristics like berth length, number of QCs, and type of ship were changed (Table 2) in order to gradually increase the annual traffic and capacity from 630,000 TEU to approximately 1 million TEU and evaluate how such an increase of traffic would influence the productivity of the berth in the first place and of the yard in the second place.
Since our research was not focused on a specific port, we combined the elements of various small size ports with similar properties. The basic criteria for the simulation terminal layout were represented by the NA ports of Koper, Trieste, and Rijeka, from which the most common factors were taken. The largest share of data was obtained from the port of Koper.
We note that there is a knowledge gap in the literature when it comes to defining the type and number of transfer mechanization units for transferring container units from berth to yard in small CTs. The research is based on allocating the right number of YTs to the QCs to ensure better productivity levels in the berth and yard subsystems. We develop a model for berth-yard operation with emphasis on the proper allocation of transfer mechanization. The objective was to provide an optimal berth function and thus a faster transfer of the vessel through the port. The developed model is easy to use and allows a quick identification of the required mechanization in relation to the capacity of the CT.
The structure of the paper is as follows. Section 2 provides a literature overview, Section 3 provides the development of the methodology with four subsections explaining the framework and characteristics of the basic hypothetical CT, followed by the analysis of the basic model and the further methodology of the simulations performed to determine the optimal number of YTs in a small CT. The simulation results can be found in Section 4, while Section 5 provides a discussion and conclusion with the suggestion for further research.
Literature overview
Nowadays it is extremely important that a ship leaves the port as soon as possible, which means that high efficiency is required from the CT. This is usually measured in terms of operational productivity, such as ship turnaround time or yard utilization. According to Gharehgozli et al. [10], it is important to enable efficient work processes on the sea side, but the yard stacking system also has a significant impact on the overall capacity of the terminal. The right choice of transfer mechanisation between the sea and land sides of the terminal therefore plays a very important role. In this context, a lot of research has been done in recent years on the use of YTs, as they have achieved great efficiency when properly planned [36].
Terminal operating system (TOS) and terminal planning
The basis for further research on container terminal operation systems was provided in the early 2000s by Steenken et al. [33], Stahlbock and Voß [32], Vis and De Koster [37] and Kim and Günther [19]. Later Böse [5], published a handbook of terminal planning, while Singgih et al. [30] have proposed a TOS design that includes integrated scheduling and various decision-making modules related to different aspects of operations in a new conceptual rail-based CT. Kourounioti et al. [21], on the other hand, focused on the container dwell time in the terminals. In 2019 Hervás-Peralta et al. [15], were the first to use the AHP (Analytic Hierarchy Process) to identify and hierarchize the TOS functionalities. A special role in measuring the efficiency of TOS in CTs is also played by the KPIs, which are decisive for the planning and further optimization of the CTs. An interesting study was conducted in this respect by Hinkka et al. [16], who analysed the indicators required for terminal planning and compared them with existing KPIs to measure the performance of ports and terminals.
Although CT problems are related to each other, authors often address the specific issues of an individual segment of the terminal. That is why some interesting studies on berth and storage productivity, reflecting the interrelationships in ports and pertaining to our research, are presented below.
Berth productivity
The berth subsystem represents the most important part of the CT, since all subsequent processes in the other subsystems of the terminal are connected to it. The arrival of ever larger container ships has led to a reduction in berth productivity, which is reflected in longer ship times in the port and higher costs for the ship operator. Seyedalizadeh Ganji et al. [29] pointed out that the length of time that the ship waits for berth availability and the ship's handling time are considered to be the most important measures of effectiveness for a CT, so reducing each of these times enhances the productivity of the terminal. Most researchers have therefore focused on the optimal berth allocation problem (BAP) and the quay crane assignment problem (QCAP) by using genetic algorithms [3,4,11,18,23].
An important factor that affects ship handling time is QC productivity. This is measured by the number of moves per hour. Bartošek and Marek [1] pointed out that terminals are able to achieve at most about 80% of the computed maximum productivity, due to productivity losses caused by operational disturbances. QC efficiency and its impact on possible bottlenecks have been explained in detail by Goodchild and Daganzo [12]. Beškovnik et al. [2] also investigated QC productivity and its influence on berth utilisation. For this purpose, a comparison model of twenty selected global CTs was used to compare productivity in the process of subsystem productivity analysis. However, the productivity of QCs largely depends also on the type and number of the horizontal handling equipment used; this can be allocated to a specific QC or to several QCs simultaneously. The types and operational characteristics of the horizontal equipment were analysed by Carlo et al. [8]. In the classification according to the size of the ship, QCs are divided into several categories. In this study two of those categories are used [1]: Panamax QCs, able to serve Panamax ships, 11-13 containers wide (rows), and Post-Panamax QCs, able to serve Post-Panamax ships, 17-19 containers wide.
Our aim is to select the number of YTs so that, despite the disorder in the storage area, the QCs are able to achieve 20 moves/hour (Panamax QC) and 25 moves/hour (Post-Panamax QC), respectively.
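A back-of-the-envelope way to see how many YTs a single QC can keep busy is to compare the crane cycle time with the truck round-trip time. The sketch below only illustrates that reasoning; the round-trip time and the resulting counts are hypothetical and are not the simulation parameters of this study.

```python
# Rough cycle-matching estimate (hypothetical numbers, not the study's inputs):
# a QC doing m moves/hour frees a truck every 60/m minutes, so a truck whose
# berth-yard-berth round trip takes T_rt minutes ties up roughly T_rt/(60/m)
# truck "slots"; that ratio is a lower bound on the YTs needed per QC.
import math

def yard_trucks_per_qc(crane_moves_per_hour, truck_round_trip_min):
    crane_cycle_min = 60.0 / crane_moves_per_hour
    return math.ceil(truck_round_trip_min / crane_cycle_min)

for moves, label in [(20, "Panamax QC"), (25, "Post-Panamax QC")]:
    print(label, yard_trucks_per_qc(moves, truck_round_trip_min=12.0), "YTs (minimum)")
```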
Storage problems
In the published works that discuss storage problems, most of the researchers focus on optimizing transhipment operations and improving the utilization of storage space. An insight into the area's current research, as well as a precise description of the yard layout and the transhipment operations there, was provided by Carlo et al. [7]. Zhang et al. [40] and Woo and Kim [38] dealt with the problem of allocating storage space for containers, focusing mainly on the allocation of storage space to export containers. On the other hand, Zhao and Goodchild [42] analysed the impact of transfer mechanization on the transhipment effects of the terminal.
Computer simulations
Sometimes it is easier and more feasible to use built-in simulation programmes like ARENA, Micro-Port, AnyLogic, FlexSim CT, etc., which enable the detailed simulation of one terminal subsystem or of the interoperability of two subsystems. A detailed review of the available research literature on the application of simulation models in port development over the last 54 years was presented by Dragović et al. [9].
Sislioglu et al. [31] used Data envelopment analysis (DEA) in combination with ARENA software to develop a DST (decision support tool), the aim of which is to find the optimum investment to improve the CT productivity. They focused on minimizing the average turnaround time of the ship in the port while maximizing the container throughput. The same software has also been used by Kotachi et al. [20] who have modelled generic port operations to study how various different inputs can influence the outputs that include throughput, resource utilization and waiting times. AnyLogic software was used by Yang et al. [39] to create a simulation model of the AGV transport system that would effectively increase the utilization rate of QCs and YCs and reduce the time to task completion. On the other hand, Gamal Abd El-Nasser A. Said and El-Horbaty [27], have chosen FlexSim CT software to optimize solutions for the storage space allocation problem, taking into account all various interrelated CT handling activities. The proposed approach is applied on a real case study data of CT at Alexandria port. Stojaković and Twrdy [35] have used FlexSim CT software to determine how a different number of YTs assigned to a single QC can affect the productivity of that crane and the productivity of the entire berth subsystem in a small CT. In this study we chose to take a step forward and build a hypothetical CT in FlexSim CT software and monitor only the interrelations of the berth and yard subsystems, focusing primarily on the berth productivity by coordinating the number of YTs. In that way it was possible to obtain original data on the necessary number of YTs per QC according to the volume of annual throughput in order to achieve maximum productivity at the berth and thus the more rapid departure of large ships.
Methodology development
The methodology can be divided into four main steps:
Presentation of the Framework. In the first step the framework of the simulations is presented. That section explains which data have been selected and included in the simulation software to build a basic CT simulation model. Furthermore, it contains the sequence of all the operations performed during the simulation.
Terminal layout. After explaining the framework of the simulations, a terminal layout of the basic CT model is presented. In this step the physical characteristics that have been chosen to build the CT (berth and yard subsystems) are provided. As the performed simulations are based on real port data, the ship arrival schedule that has been used in the simulation is given.
Analyses of the simulation performed on the basic CT. In the third step the results that were obtained with the simulation of the basic CT model are provided and discussed. The results have shown that YTs were crucial for the achievement of higher berth productivity and they also significantly influenced the yard operations. Those findings therefore formed the basis for further simulations that enable us to determine the optimal number of YTs at a terminal of smaller size.
Simulation running. The final step explains how the simulations have been run. To be able to determine the optimal number of YTs for a small CT, the simulations were performed in three sets with a different number of YTs per QC. In each set seven different scenarios have been performed, changing several terminal characteristics and increasing the annual throughput of the CT up to 1 million TEU.
Framework
For the purpose of this research, a hypothetical CT, which is a Discrete Event Simulation (DES) model based on real data from the ports of Koper, Rijeka and Trieste, was built with the software FlexSim CT. FlexSim CT is a powerful "what-if" analysis tool used to model systems that change their state at discrete times as a result of specific events. Our research is based on the models developed by Gamal Abd El-Nasser A. Said and El-Horbaty [27] and Gamal Abd El-Nasser A. Said et al. [28]. Therefore, stochastic, dynamic and discrete problems were addressed in this simulation study.
The performed simulation covers all the berth-yard-berth operations. The model was entered with all the actual data regarding the containers that arrived at the terminal with the ship (import) and the data about the containers that had to be loaded on the ship to leave the port (export), while the number of containers entering and exiting the terminal by land was generated virtually (by the software) according to the input parameters. All the simulations included the ship's arrival at the terminal, the unloading of the inbound containers, container transfer to the yard and their positioning in the final slot, and vice versa. The sequence of the performed operations in the CT is shown in Fig. 1.
For vessel simulation a real vessel arrival schedule has been used in which each ship service has a fixed planned arrival time that is the same each week, making realistic simulations possible (the window is shown in Fig. 3). The data regarding the transfer area, storage capacities and yard equipment were taken from the available data of the selected ports and combined for the purpose of the research.
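FlexSim CT is a commercial package, so as an illustration of the same discrete-event logic (QCs requesting YTs, and YTs cycling between berth and yard before becoming free again) a minimal open-source analogue is sketched below with SimPy. Every number and distribution in it is a placeholder assumption; it does not reproduce the FlexSim CT model used in this study.

```python
# Minimal SimPy analogue of the berth-yard-berth logic (illustrative only):
# each QC move needs a free yard truck; the truck is then busy for the whole
# berth-yard-berth round trip before it can serve the next move.
import random
import simpy

QCS, YTS_PER_QC, MOVES_PER_QC = 2, 5, 300
CRANE_CYCLE_MIN, TRUCK_TRIP_MIN = 2.4, (8.0, 14.0)     # hypothetical times (minutes)

def quay_crane(env, name, trucks, stats):
    for _ in range(MOVES_PER_QC):
        t_request = env.now
        req = trucks.request()                          # wait for a free YT
        yield req
        stats["wait"] += env.now - t_request            # QC idle time waiting for YTs
        yield env.timeout(CRANE_CYCLE_MIN)              # lift a container onto the YT
        env.process(round_trip(env, trucks, req))       # truck departs, QC continues
    stats["done"][name] = env.now

def round_trip(env, trucks, req):
    yield env.timeout(random.uniform(*TRUCK_TRIP_MIN))  # berth-yard-berth travel
    trucks.release(req)                                 # truck is free again

random.seed(1)
env = simpy.Environment()
trucks = simpy.Resource(env, capacity=QCS * YTS_PER_QC)
stats = {"wait": 0.0, "done": {}}
for i in range(QCS):
    env.process(quay_crane(env, f"QC{i + 1}", trucks, stats))
env.run()
print("total QC waiting time [min]:", round(stats["wait"], 1), "finish times:", stats["done"])
```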
Terminal layout (input data)
The basic simulation model was entered with the following input data: berth length, QCs, ships, incoming and outgoing containers, YTs, yard area, and YC. The basic CT model captures one continuous quay of 600 m divided into two berths (Table 1, Fig. 2). In that way, the large vessels may occupy more than one berth, while small vessels may share a berth.
All QCs perform single cycling operations, which is usual in medium-sized and smaller ports. Transfer operations between the sea and storage area are done by YTs. Initially 5 YTs are assigned to every QC, meaning that 40 YTs are placed at the terminal. The capacities of the variables changed during the seven scenario simulations are presented in Table 2.
The yard area has a storage capacity of 20,160 TEU and is placed parallel to the quay. It is divided into three stacking zones: import, export and a zone for empty containers. Every block for full containers is served by an RTG, while a block with empty containers is served by a reach stacker. All the resources of the same type have the same specifications. No replacing of containers has been done at the storage once they have been placed in their final slot. The terminal's traffic in the basic model consists of thirteen services that cover the following types of ships: 46.15% feeder ships (up to 1500 TEU), 23.08% Panamax ships (1500-5000 TEU) and 30.77% Post Panamax ships (over 5000 TEU). Despite the fact that Post Panamax ships represent only 30.77% of all ships that arrive at the terminal, they account for 66.20% of the terminal throughput. The initial annual throughput amounts to 630,000 TEU per year. Simulations were conducted for a period of 1 week or until the completion of the transhipment operations on the last scheduled ship. The ship schedule for every day of the week and the placement at the individual berth is illustrated with grey squares in Fig. 3.
SCENARIO 1
Basic model with 630,000 TEU of annual throughput.
SCENARIO 2
The quay of the basic model was extended by 100 m and ships were served by 6 PP and 2 P QCs. The annual throughput remained unchanged.
SCENARIO 3
To the previous model a weekly service with a PP ship was added. The annual throughput rose to 689,000 TEU.
SCENARIO 4
Another weekly service with a PP ship was added to the previous model, while one feeder service was eliminated. The annual throughput changed to 768,000 TEU.
SCENARIO 5
The QC layout was changed to 7 PP and 1 P. The ship schedule changed with one new PP service instead of a small one. The annual throughput rose to 844,000 TEU.
SCENARIO 6
The weekly schedule changed with the purpose of increasing the annual throughput to 899,000 TEU.
SCENARIO 7
In the last scenario only PP QCs were placed on the quay. The weekly schedule changed in order to increase the annual throughput to 990,000 TEU.
Analyses
The chosen schedule foresaw daily berth occupation over a period of 1 week (shown in Fig. 3). At the first berth, ships of up to 2300 TEU and lengths of up to 220 m called, while the second berth is primarily intended for the larger Post Panamax ships, whose length is approximately 300 m.
During the analysed week, P QCs handled 3306 TEU, accounting for 27.25% of transhipped TEU, while the remaining three quarters of the total weekly throughput were handled by PP QCs. The average occupancy of all QCs in the analysed week was 43.47%, while the average working time of PP QCs reached 62.88%. A very significant output for the assigned operations was the QC waiting time for YTs to arrive. On average, all eight QCs spent 11.44% of their time waiting, which is not much, but for the PP QCs alone the waiting time rose to 19.56%, meaning that they required more YTs or faster operations in the yard area. This greatly affects the total time of the ships in the port, as the QC cannot simply drop the container on the ground and move to another operation, but must wait. On the other hand, P QCs accounted for only 3.32% of waiting time, meaning that five YTs were sufficient for them.
On average, QCs transhipped slightly fewer than 20 moves per hour, which is acceptable for the small QCs and in line with actual achievements in nearby ports, but such a low rate is unacceptable for the large QCs. This affected the berth occupancy, which amounted to 62.70%. As the maximum limit for still achieving optimal results on the berth is set at 65%, this is still within an acceptable range. Nevertheless, at the second berth, where the large ships moor, the actual occupancy (74.06%) exceeded the allowed limit. With such results, the terminal operators should consider optimization of the quay. The simulation has shown that for the current annual traffic the storage area of the terminal is still sufficient or even underutilised. The average utilization of the yard space amounted to 31.21%, while the maximum recommended is up to 60%. The RTGs placed on the import blocks waited much longer for the arrival of YTs than those placed on the export blocks. The average utilization rate of the YCs therefore showed that there is currently a surplus of YCs at the terminal. Nevertheless, reducing their number would cause higher congestion, as there is already a lack of YTs for transporting containers from berth to yard and vice versa.
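To make the reported occupancy figures easier to interpret, the following is a minimal sketch of how berth occupancy can be computed as occupied berth-hours over available berth-hours. The ship list and service times are hypothetical stand-ins, not output from the Flexsim model.

```python
# Minimal sketch: berth occupancy = occupied berth-hours / available berth-hours.
# The ship list below is illustrative only; it does not reproduce the simulated schedule.
ships = [
    {"berth": 1, "service_h": 14.0},
    {"berth": 1, "service_h": 10.5},
    {"berth": 2, "service_h": 30.0},
    {"berth": 2, "service_h": 26.5},
]
hours_per_week = 168.0

def berth_occupancy(ships, berth, hours=hours_per_week):
    occupied = sum(s["service_h"] for s in ships if s["berth"] == berth)
    return occupied / hours

for b in (1, 2):
    print(f"berth {b}: occupancy = {berth_occupancy(ships, b):.1%}")
```

Under this convention, an occupancy above the 65% threshold mentioned in the text simply means that the summed service times exceed 65% of the weekly window for that berth.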
With the selected inputs, the working time of the simulated terminal lasted 210.65 h. If only the berth subsystem were considered in the simulation (excluding all the yard operations), that time would be reduced to 140 h, as the QCs would achieve higher productivity. The simulation showed that in our case the biggest problem lies in the transfer area where the YTs operate.
Simulations
The results of the basic model showed that the insufficient number of YTs was the principal cause of the poorer efficiency of the QCs and the longer ship stays at the terminal. For that reason, further simulations were conducted with an increased number of YTs. Initially, we increased the transfer mechanization by one YT per QC, but relevant results were not reached until we allocated ten YTs to every QC. The most suitable results (comparable to those achieved in ports) were obtained only by placing sixteen YTs on a QC. Even though that is not common in small terminals, it allowed us to obtain the desired productivity of the QCs when yard productivity was not at its highest.
Further simulations were therefore divided into the following three sets:
Set 1: each QC served by 5 YTs.
Set 2: each QC served by 10 YTs.
Set 3: each QC served by 16 YTs.
Thus it was possible to determine how a change in the number of YTs affected the productivity of QCs and ship serving times.
In each of the three sets, the same simulations were performed, covering seven scenarios in which container throughput gradually rose to nearly 1 million TEU. At the same time, berth and storage capacities were optimized in order to enable the reception of such traffic. The scenarios are described in Table 2.
Due to the use of probability distributions, different simulation runs of the same scenario did not give identical results. Consequently, several simulation runs were performed for each scenario, and the reported results are the averages over these runs.
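A minimal sketch of how replicated runs of one scenario can be averaged and given an approximate confidence interval is shown below; the run outputs are made-up numbers, not simulation data, and the t-multiplier for a 95% interval with four degrees of freedom is an assumption of this illustration.

```python
# Minimal sketch: average several stochastic replications of one scenario and
# attach an approximate 95% confidence interval to the mean.
import statistics

runs_qc_moves_per_hour = [19.4, 20.1, 19.8, 20.5, 19.6]  # hypothetical replications

mean = statistics.mean(runs_qc_moves_per_hour)
stdev = statistics.stdev(runs_qc_moves_per_hour)
n = len(runs_qc_moves_per_hour)
t_95 = 2.776                       # t-value for n - 1 = 4 degrees of freedom
half_width = t_95 * stdev / n ** 0.5
print(f"mean = {mean:.2f} moves/h, 95% CI = +/- {half_width:.2f}")
```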
Results
The results of each scenario were compared across different criteria and different numbers of YTs. The results are shown in Table 3 and in Figs. 4 to 9. The best results obtained are marked with a green dot, while the worst have red dots. A yellow dot denotes cases where the results obtained were not much different from the best and were therefore also very favourable.
The study included P and PP QCs, as this is the equipment of most medium and small ports that are included in both feeder and deep sea services. The simulations focused primarily on the berth subsystem, as that area, in connection with the transfer mechanization, was identified as the one that most affects the time of the ships in port.
The results in all scenarios showed that when the number of YTs per QC was increased to the maximum, the working time of QCs and the QC waiting time for YTs decreased (Figs. 4, 5). This had a positive effect on QC productivity and on the reduction of the berth occupancy ratio, which is crucial if the port wants to increase annual traffic, attract larger ships and assure them fast operations (Figs. 6, 7). The exception was scenario 6, where the best results in all criteria were achieved by increasing the number of YTs to 10 (Figs. 4, 5, 6 and 7). It is also clearly shown that in scenarios 3, 4 and 5 the PP QCs achieved very favourable results with 10 YTs on each QC (Figs. 4, 5 and 6). These are very similar to the results of the 3rd set of simulations, which indicates that 10 YTs would be sufficient to achieve good transhipment performance on ships if traffic increased to 689,000 TEU or more. In almost all scenarios the 3rd set of simulations provided the lowest berth occupancy rates (Fig. 7). Nevertheless, in the 2nd set very favourable results that did not exceed the recommended critical point of 65% were achieved (Fig. 7), which means that a financial investment in 16 YTs would not make sense. In addition, under such conditions, storage problems would be exacerbated: they were significantly worse with 16 YTs than with 10 YTs per QC (Figs. 8, 9). After numerous simulations, it was clear that the best results in the yard were achieved in the 1st set of simulations with 5 YTs on each QC (Figs. 8, 9).
We are therefore dealing with an extremely complex problem that requires important decisions. For ports it is essential to determine the point of balance where the berth productivity required by the ship-owners can be achieved with the least negative effects on the storage operations. Storage problems decrease the efficiency of the berth subsystem, which is why ports cannot ignore berth requirements and choose the number of YTs that corresponds primarily to the yard. The survey showed that with a terminal of such dimensions it is possible to achieve optimum results while reaching a throughput of 850,000 TEU. With a million TEU the productivity is significantly reduced, and the overall system becomes overloaded. A graphic overview of the changes in the results obtained during the staged simulations is given in the following figures.
Discussion and conclusion
The article presents the results of a study conducted on a hypothetical small CT using Flexsim CT 3.3 software. The analysis covered all berth-yard-berth operations, with the emphasis on the correct allocation of transfer mechanization to allow the optimal functioning of the berth and, consequently, a faster turnaround of the ship in the port. As ship-owners nowadays deploy ever larger ships even in smaller ports, such ports are faced with increasing volumes of containers and requirements for faster ship operations in order to reduce costs.
We based our study on the work of Gamal Abd El-Nasser A. Said and El-Horbaty [27], in which a 54% reduction in container turnaround time at the port was achieved by applying the optimization model for storage space allocation at the Alexandria container terminal. In our case, we used the data of a hypothetical terminal with the capacity of 630,000 TEU and the simulations were divided into seven scenarios that allowed a gradual increase in traffic up to about one million TEU. The simulations were run in three sets, each using a different number of YTs per QC.
The results showed that, on average, the QCs in all seven scenarios achieved the most favourable results with 16 YTs, but the simulation set with 10 YTs serving a single QC is not significantly different. Therefore, the investment in 16 YTs per QC is, according to the results obtained, not worthwhile, and without an exceptionally good strategy for allocating operations to YTs, congestion would occur in both of the subsystems considered. However, the situation was reversed in the yard area: in our case, increasing the amount of internal transfer mechanization has a negative impact on operations in the yard. Regardless, due to the poor results obtained at the berth with 5 YTs per QC, the number of YTs had to be increased, even at the risk of a slight deterioration of productivity in the storage area. Our aim was to achieve a QC productivity and berth occupancy that meet the requirements that ship-owners have today in smaller ports, which is why we opted for the higher number of YTs per crane. In addition, choosing the wrong allocation strategy can lead to congestion, especially in the storage area, which in turn has a negative impact on operations at the quay.
In summary, the competitiveness and operating level of CTs currently depends on the global demand for container transport and, consequently, on the decisions of ship-owners to use larger vessels. In this context, medium and small ports are in a very difficult position as they are subject to a cascade effect, which according to Merk et al. [25] means that ships that have become redundant due to very large new vessels are used in direct services that include medium and small ports. The results of this study are important for operators of smaller CTs, as they can use this model to find the right number of handling machines to increase annual throughput. The main limitation stems from the fact that not all CTs were included in the analysis, but only one smaller terminal. Therefore, the conclusions presented here are limited to answering the main question of how to ensure fast and efficient handling of ships that now reach 6000 to 8000 TEU and how to meet the main demand of ship-owners: fast handling in ports to reduce costs. The focus was on performance metrics and operating times, which depend on the mechanization in each subsystem. Since the 2nd set of simulations gave quite good results for both subsystems in the scenarios with increased annual traffic, it would be most beneficial for a small container port to choose 10 YTs per individual QC. The potential of many models is limited by data availability. The lack of detailed data remains a major challenge, also for determining the right number of YTs per QC to ensure better productivity in the berth and yard subsystems. However, the evaluated results show that the presented concept would lead to a significant improvement in the overall productivity of a small CT. The results obtained proved to be realistic; however, we wonder whether a terminal of such capacities could achieve even better results in both subsystems using an alternative type of transfer mechanization. Our further research will thus focus on upgrading the presented model so that transfer operations are carried out by another type of transfer mechanization. In that way it will be possible to compare the results and create useful guidelines for the efficient optimization of operations in small container ports.
"year": 2021,
"sha1": "51d94e998f0ff41e79cd317cfafb7220e5aac00b",
"oa_license": "CCBY",
"oa_url": "https://etrr.springeropen.com/counter/pdf/10.1186/s12544-021-00482-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "51d94e998f0ff41e79cd317cfafb7220e5aac00b",
"s2fieldsofstudy": [
"Engineering",
"Business"
],
"extfieldsofstudy": []
} |
Application of seismic tomography for detecting structural faults in a Tertiary Formation
Seismic refraction is an increasingly useful geophysical tool for geotechnical studies in civil engineering work, including the mapping of the different soil formations of the subsoil and the detection of the bedrock. Additionally, wave velocity is a key parameter that correlates directly with significant geotechnical parameters of soils and rocks. Today, the evolution of field measurement techniques and data processing makes it possible to obtain tomographic images, which increases the potential of the method for evaluating the structure of a rock mass. This work describes the basic principles of seismic tomography and a case history of an application in civil works used to detect hidden faults in the sedimentary Gatun Formation in the north of Panama. The correlation between the seismic profile and the geologic profile obtained from boreholes showed very good agreement. Subsequent directed boreholes performed at the site confirmed the position and nature of the faults detected.
Introduction
The seismic refraction method is based on the travel of elastic compression waves through geological media, recorded by sensors deployed at equal spacing along a survey line. Elastic waves are generated at the ends and at intermediate points by means of hammers, falling masses, guns, or even explosives for deep prospection. These waves are detected by sensors called geophones, which measure the vibration velocity when the elastic wave reaches the point where they are located. In general, 12, 24 or 48 geophones are deployed along the survey line. The spacing between geophones and the length of the survey line depend on the required depth of exploration; as a rough rule, the exploration depth ranges between 1/4 and 1/3 of the total length of the line. The signal captured by the geophones is conditioned (e.g. amplified and filtered) by a seismograph that also allows the signals of all the geophones arranged along the lines to be displayed and recorded. The arrival time of the compression waves at each geophone is determined from the records, and the space-time curves are then obtained. The analysis of these curves allows the seismic profile to be determined. The most common methods of interpretation of space-time curves are: intercept times, apparent velocities, wavefronts, delay times and the general reciprocal method [1][2][3]. Today, advances in sophisticated computational methods have allowed the development of tomographic processing algorithms for refraction data. These algorithms can resolve velocity variations or gradients with depth as well as lateral changes in highly variable sites, for example due to the presence of voids, faults or karst. Tomographic images generally show gradual variations of wave velocity with depth, in contrast to the traditional interpretation methods, in which layers with different constant velocities are identified. The limitations of gradual velocity variations are discussed in [4]. In sedimentary environments where the propagation velocity increases with depth due to the change in confinement, the gradual variation of velocity is more realistic than that obtained with a multi-layer model. The opposite holds in environments with sharp contrasts, such as sediments overlying the bedrock or interlayers that occur due to the presence of water or stiff cemented soils.
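The following is a minimal sketch of the travel-time relations behind the space-time curves for the simplest two-layer case (direct wave, head wave and crossover distance). The velocities and interface depth are purely illustrative, not values from the survey described later.

```python
# Minimal sketch of two-layer refraction travel times and the crossover distance.
import numpy as np

v1, v2 = 800.0, 2200.0   # m/s, upper and lower layer (illustrative)
h = 15.0                 # m, depth to the interface (illustrative)

x = np.linspace(1.0, 200.0, 200)                  # source-geophone offsets, m
t_direct = x / v1                                 # direct wave
t_head = x / v2 + 2.0 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)  # refracted head wave
t_first = np.minimum(t_direct, t_head)            # first arrivals actually picked

# Offset beyond which the head wave arrives before the direct wave.
x_cross = 2.0 * h * np.sqrt((v2 + v1) / (v2 - v1))
print(f"crossover distance ~ {x_cross:.1f} m")
```

The intercept of the head-wave branch and the crossover distance are the quantities that the intercept-time interpretation method inverts for layer depth and velocities.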
The seismic refraction method has numerous applications in the geotechnical engineering field, including: mapping the stratigraphy of a site; determining the location of the bedrock and the water table; evaluating the degree of fracturing and weathering of the rock; detecting geological faults and the degree of compactness and cementation of sedimentary layers; evaluating dynamic soil parameters for use in seismic design; and, more recently, determining geotechnical parameters for foundation design. Probably the biggest limitation of seismic refraction is that it requires the stiffness of the layers to increase with depth. The presence of any inclusion with lower stiffness can be misinterpreted. This work describes an application of the tomographic technique used to evaluate the faulting of the bedrock in the Tertiary Gatun Formation, along a section performed across the Rio Chagres on the north side of Panama. The results were thereafter validated by directed boreholes, which also allowed the applicability of the method to be evaluated and its limitations to be recognized.
Localization and geology
Figure 1 shows the location of the Rio Chagres close to the town of Colon, northwest of Panama, on the Atlantic coast side. The Isthmus of Panama is formed by a rigid block (often referred to as the Panama microplate), located between the Cocos and Nazca tectonic plates to the south, the Caribbean plate to the north and the South America plate to the east. It is part of the volcanic arc of Central America that originated in the Miocene, about 17 million years ago. The continuous movement of North America and South America, at about 25 mm/year, is responsible for the internal deformation of the isthmus, which translates into folds and interior faulting [5]. The predominant directions of the faults are parallel to the Panama Canal, in the northwest-southeast direction, with some in the northeast-southwest direction [5], with a high degree of dip [6]. The most recognized faults among the first are Limon and Pedro Miguel, and among the second is the Rio Gatun fault. Figure 2 shows the faulting study from Pratt [6]. Notice the possible structural control of the faults on the course of the Rio Chagres. Despite the complexity of the tectonism described, very low actual seismic activity is recognized, although such information creates a significant uncertainty in seismic risk assessment in Panama. The general stratigraphy of the sector of interest has been extensively described by Jones [7] and Woodring [8]; it mainly consists of Miocene and Holocene deposits overlying the Pre-Tertiary volcanic formations. The deepest sedimentary deposit is the Gatun Formation, composed of alternating sequences of sandstone, siltstone, conglomerate and marine peats. It is gray-green in color, with a gentle dip to the northwest, and contains macro- and micro-fossils in a very good state of preservation. It has an estimated maximum thickness of about 500 m, with an approximate elevation of +100 m in southern Limon Bay, and its elevation decreases below sea level towards the Caribbean coast. The Chagres Formation, over the Gatun Formation, consists of massive sandstone, siltstone and some elements found at its base identified as the Toro limestone.
The Atlantic Muck comprises the most recent Quaternary sediments, of Pleistocene and Holocene age, which overlie the Gatun Formation, filling basins and channels created by erosion. The Atlantic Muck is widely distributed in the Gatun Lake area between the north shore of the lake and Gamboa. This includes the Chagres River and Gatun Trinidad valleys, associated with the interior and coastal wetland areas, with thicknesses that can reach 80 m. The Atlantic Muck consists of clays, silts and fine sands with abundant organic matter, and is highly saturated.
The topography of the area of interest has a smooth slope dipping towards the north, essentially the result of progressive sedimentation, in contrast to the erosion processes involved in adjacent areas. In a relatively recent geological period, the ground surface was higher than today and the most important rivers of the Atlantic side cut deep valleys. A subsequent period of subsidence caused a reduction in the speed of the currents, with the resulting deposition of silt and plant debris. Periodic invasions of the sea deposited marine sediments under saltwater conditions. The last movement in the coastal area of the Atlantic raised surfaces a few feet above sea level.
Works performed
The geophysical study across the Chagres River was carried out as part of a program required for evaluating the geotechnical conditions for the foundation of a bridge. Figure 2 shows the geographical location of the Chagres River and the location of the projected bridge crossing in the vicinity of the Atlantic coast. The geotechnical program included geophysical studies and drilling. A total of 33 rotary boreholes were drilled, to a maximum depth of 117 m, at the location of each of the bridge piers. The geophysical studies consisted of a seismic line using the refraction technique along the axis of the bridge; downhole (DH) tests and multichannel analysis of surface waves (MASW) were also conducted at the site.
In this work, 5 seismic lines of 115 m in length each were performed, covering a total of approximately 840 m. A Geode 24 device from Geometrics was used. Each of the survey lines was laid out with a total of 24 geophones at 5 m separations. The geophones were fixed to the soil by means of inserts of 8 cm in length. For each seismic line, 5 shots were made: at both ends, at the quarter points and at the center. In addition, measurements were taken with long-distance shots from the opposite bank of the river for lines 1, 2, 3 and 4, to obtain information on the soil layers below the river. The energization at each point was made using a pressurized gas cannon system. This system allowed the impact energy to be increased with respect to a manually driven mass and therefore increased the penetration depth (no explosives were allowed at the site). Figure 3 displays a picture of the gas cannon described. To cover the section of the river without using hydrophones, the lines were placed on the river margins and shots were fired alternately from the margin opposite each line.
Fig. 2. Map of Panama with the main directions of faults (modified from Pratt [5]); the red line shows the location of the seismic line.
The field records obtained were processed and interpreted using the computer programs Plotrefa and Pickwin from Geometrics. The following steps were used for the data processing (a sketch of a generic automatic picking criterion is given after the list):
A. Detailed study of the records: the records were examined signal by signal in order to assess their quality and consistency.
B. Processing of records: the raw signals were filtered to the bandwidth of the sensors in order to remove line noise and spurious measurements.
C. Detection of arrivals: from the processed records, the arrival times of the compression waves were determined. The detection of these points was initially performed by the computer program Pickwin and thereafter checked manually.
D. Plotting of profiles: from the first arrivals, the space-time curves were plotted, as shown in Figure 4.
E. Obtaining seismic profiles: from the analysis of the space-time curves, the cross sections were obtained. For this operation, the processing software Plotrefa from Geometrics was used. The program uses a least-squares minimization routine in order to match the calculated time model to the one measured in the field.
F. Interpretation of the seismic profile: the seismic profile was interpreted using a generic image-editing program that allows dividing lines and annotations to be traced.
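The sketch below illustrates one common automatic first-arrival picking criterion, the short-term/long-term average (STA/LTA) ratio, applied to a synthetic trace. This is a generic criterion used for illustration only; it is not meant to reproduce the algorithm implemented in the Pickwin software, and the window lengths and threshold are assumptions of the example.

```python
# Minimal sketch of an STA/LTA first-arrival picker on a synthetic geophone trace.
import numpy as np

def sta_lta_pick(trace, dt, sta_win=0.005, lta_win=0.05, threshold=5.0):
    """Return the time of the first sample where the STA/LTA energy ratio exceeds threshold."""
    e = trace ** 2
    n_sta = max(int(sta_win / dt), 1)
    n_lta = max(int(lta_win / dt), 1)
    c = np.cumsum(np.concatenate(([0.0], e)))       # cumulative energy for fast window sums
    for i in range(n_lta, len(trace) - n_sta):
        sta = (c[i + n_sta] - c[i]) / n_sta         # short window ahead of sample i
        lta = (c[i] - c[i - n_lta]) / n_lta         # long window behind sample i
        if lta > 0 and sta / lta > threshold:
            return i * dt
    return None

# Synthetic trace: low-level noise followed by an arrival at 0.08 s.
dt = 0.001
rng = np.random.default_rng(0)
trace = 0.05 * rng.standard_normal(400)
t_sig = np.arange(320) * dt
trace[80:] += np.sin(2 * np.pi * 60 * t_sig) * np.exp(-t_sig / 0.05)

print("picked first arrival at", sta_lta_pick(trace, dt), "s")
```

In practice, as stated in step C above, such automatic picks are always reviewed and corrected manually.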
Results
The tomography resulting from composing and processing the five seismic lines together is shown in Figure 5. The seismic tomography obtained presented a root mean square error RMS = 17.64%, higher than the desirable limit of 5%. However, the boundaries and transitions between layers were consistent with those obtained from boreholes. The tomography shows a profile of wave velocity increasing with depth. Unconsolidated to slightly consolidated, saturated sediments are related to propagation velocities of 500 m/s and up to 1000 m/s. Notice that even though this layer is saturated, the measured maximum velocity is lower than that of water (1500 m/s). This effect may be attributed to the presence of a large amount of gas bubbles in the soil arising from the decomposition of organic matter; gas bubbles can be observed directly escaping from drilled boreholes. The upper soft layer increases in thickness away from the margins of the river floodplain. Materials with a larger sand content and a higher degree of structuration yield propagation velocities between 1000 m/s and 1800 m/s. Propagation velocities between 1800 m/s and 2500 m/s are associated with the Gatun Formation; the lower limit is consistent with weathered and fractured rock. Propagation velocities greater than 2500 m/s are attributed to rock in good structural condition. An advantage of this method is that it allows the visualization of stratigraphic variations in both the vertical and horizontal directions. The tomographic image of Figure 5 also shows vertical discontinuities in the Gatun Formation, indicated by means of shear arrows and attributed here to multiple detected faults; the direction of the arrows shows the direction of displacement of the blocks. The existence of the faults was thereafter confirmed by directed boreholes. The discontinuities are filled with brecciated sandstone, medium to coarse grained, with a high degree of weathering, completely fractured and with a high content of shells. In some sectors the discontinuities are composed of alternating layers of calcareous tufa and fine sand and silt. Seven zones of normal faulting were identified, corresponding to a graben-type structure due to regional extensional stresses. In the river zone, no high-velocity reflector was observed down to the depth reached by this study, approximately 90 m. It is estimated that this lack of a reflector may be due to the presence of faults or lateral discontinuities on the banks of the river that did not allow the passage of the wave train, or to a high degree of rock fracturing. This zone could be described as a sector where a possible fault of significant magnitude develops.
Conclusions
This work presented the results of a seismic survey in a sedimentary formation using tomographic acquisition and data processing. The main result of the work is the tomographic image of the cross section presented in Figure 5 (axes: progressive distance in m, elevation in m, and Vp in m/s). From these results, the following conclusions can be highlighted:
a) The geophysical methodology proposed to evaluate the site performed adequately, even though the desirable accuracy of 5% could not be reached due to the many difficulties presented by the surface conditions at the site.
b) The tomographic cross section of the site allows the variation with depth of the various layers, associated with different compression wave velocities, to be evaluated. The most relevant interface was that of the bedrock corresponding to the Gatun Formation.
c) The tomographic image also allows vertical discontinuities, related here to faults, to be distinguished. These vertical faults were then verified using directed boreholes.
d) An interesting observation is the wave propagation velocity obtained for the Atlantic Muck, which was much lower than that of water. This lower value was attributed here to the presence of gas bubbles due to the high organic matter content.
We thank Louis Berger Group Inc. for allowing the publication of the data presented here.
"year": 2019,
"sha1": "2c564c3e27ae71548c174a8bd40ee75d2b32ff74",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/18/e3sconf_isg2019_18008.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7f6a3cf464c012ab46e19338710f799dd20f3d87",
"s2fieldsofstudy": [
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
Campylobacter jejuni in commercial eggs.
This study evaluated the ability of Campylobacter jejuni to penetrate through the pores of the shells of commercial eggs and colonize the interior of these eggs, which may become a risk factor for human infection. Furthermore, this study assessed the survival and viability of the bacteria in commercial eggs. The eggs were placed in contact with wood shavings infected with C. jejuni to check the passage of the bacteria. In parallel, the bacteria were inoculated directly into the air chamber to assess their viability in the egg yolk. To determine whether the albumen and egg fertility interfere with the entry and survival of the bacteria, we used varying concentrations of albumen and both SPF and commercial eggs. C. jejuni was recovered in SPF eggs (fertile) after three hours in contact with contaminated wood shavings but not in infertile commercial eggs. The colonies isolated in the SPF eggs were identified by multiplex PCR and the similarity between strains verified by RAPD-PCR. The bacteria grew in different concentrations of albumen from commercial and SPF eggs. We did not find C. jejuni in commercial eggs inoculated directly into the air chamber, but the bacteria remained viable in the wood shavings during all periods tested. This study shows that consumption of commercial eggs infected with C. jejuni does not represent a potential risk to human health.
Introduction
Campylobacter jejuni is among the most common causes of bacterial diarrhea in humans in developed countries (EFSA, 2009). The prevalence of these bacteria in poultry is high, and the consumption of contaminated chicken is the main risk factor for consumers (Humphrey et al., 2007).
Gastroenteritis caused by infection with C. jejuni in humans is usually self-limiting, causing diarrhea, abdominal pain, and fever over the course of 5 to 7 days. In rare cases, it can trigger Guillain-Barre syndrome, a severe form of paralysis in which the patient has neurological disorders (Altekruse et al., 1999; Moore et al., 2005).
The epidemiology of campylobacteriosis in poultry is much discussed and poorly understood (Sahin et al., 2003). Despite the recognition that the consumption of contaminated chicken is responsible for a large number of cases of human campylobacteriosis, the involvement of other poultry products such as eggs is not yet fully understood.
A better understanding of the role of eggs in the spread of C. jejuni is necessary, because the physiological characteristics of the egg and of the bacteria may allow entry of the microorganism into the egg's interior. If these microorganisms penetrate and multiply inside commercial eggs, the consumption of this food could pose a risk to human health. However, it is known that the albumen has several defenses against microbial organisms that may invade the contents of the egg immediately after oviposition (Baker et al., 1987), thus blocking the passage and survival of such organisms inside the eggs.
The objective of this study was to evaluate the ability of C. jejuni to penetrate the pores of the shells of commercial eggs and colonize the interior of these eggs.
Materials and Methods
For this study, the C. jejuni subsp. jejuni (IAL 2383) strain was used. This strain was isolated from a human outbreak, characterized, and deposited in the Bank of Cultures of the Instituto Adolfo Lutz, São Paulo (IAL, Brazil). This strain expresses the key genes associated with the virulence of C. jejuni in humans and birds (data not shown).
To determine whether C. jejuni penetrated into the interior of eggs, we used wood shavings that had previously been sterilized and then artificially contaminated with 10⁵ cfu/g of C. jejuni (IAL 2383). The culture was diluted in Bolton broth supplemented with 5% bovine blood hemolysate and used to coat the shavings. For the control group, sterile shavings were coated with Bolton broth supplemented with 5% bovine blood hemolysate without bacteria. For the control group, we used 90 commercial eggs (infertile), and for the test group we used 300 commercial eggs (infertile). Before the start of the experiment, we analyzed three samples (each of 10 eggs) to ensure that each lot had not previously been infected with C. jejuni.
The eggs from the different groups were arranged in individual containers for each treatment. A layering arrangement was used, interspersing the eggs with the shavings, and the eggs were maintained at room temperature (about 25°C) until the analysis. Time points for sample processing were set at 3, 7, and 24 h of contact. For each time period, the egg yolks from 100 test eggs and 30 control eggs were collected for analysis. A sample of shavings from each group was collected at each time period to check the viability of the bacteria.
The egg yolks and shavings were analyzed by traditional cultivation on plates and real-time PCR.
To verify if the type of egg (fertile or infertile) influences the passage or survival of C. jejuni, we performed the same experiment described above with commercial eggs (infertile) on 180 SPF eggs (fertile) (a 90-test group and 90-control group).
To check the viability of the bacteria inside the infertile eggs, we used 120 commercial eggs: 60 for the test group and 60 for the control group. Treated eggs were inoculated with 0.1 mL of saline (8.5 g NaCl/L) containing 10⁵ cfu of C. jejuni IAL 2383. The inoculum was introduced into the air space at the superior pole using a sterile hypodermic needle (0.3 mm x 13 mm), without disrupting the chorioallantoic membrane. The control eggs were inoculated in the same manner and with the same solution but without the bacteria. After periods of 3, 7, and 24 h of inoculation, the egg yolks were collected for analysis by traditional cultivation on plates and real-time PCR.
To check whether the albumen inhibits the growth of C. jejuni, 10⁵ cfu g⁻¹ of the strain (IAL 2383), diluted in saline (8.5 g NaCl/L), was inoculated into different concentrations of commercial and SPF egg albumen diluted in Bolton broth (Oxoid) supplemented with 5% equine blood hemolysate (Table 1). The samples were cultivated on solid mCCDA medium (Oxoid) under microaerobic conditions (Probac microaerobic generator) in anaerobic jars at 37°C for 48 h prior to being counted.
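For readers less familiar with plate counting, the sketch below shows how colony counts from a dilution series are converted to cfu per gram; the colony count, dilution factor and plated volume are hypothetical numbers for illustration, not data from this study.

```python
# Minimal sketch: convert a plate count from a dilution series into cfu per gram.
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml, sample_mass_g=1.0):
    """cfu/g = colonies / (plated volume in mL * dilution factor), per gram of sample."""
    return colonies / (plated_volume_ml * dilution_factor) / sample_mass_g

# Example: 145 colonies on a plate spread with 0.1 mL of a 10^-3 dilution of a 1 g sample.
print(f"{cfu_per_gram(145, 1e-3, 0.1):.2e} cfu/g")
```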
In the Applied Animal Biotechnology Laboratory of the Federal University of Uberlândia (LABIO-UFU), 1 mL of egg yolk (vitellus) from each sample taken from the commercial and SPF eggs was added to 9 mL of Bolton broth (Oxoid) and immediately analyzed for the presence of C. jejuni by real-time PCR assay. In parallel, the samples were pre-enriched under microaerobic conditions (Probac microaerobic generator) at 37°C for 24 h for subsequent cultivation on solid mCCDA medium (Oxoid).
The real-time PCR assay used the BAX System (DuPont, Wilmington, DE) according to the manufacturer's recommended procedures (User's Guide BAX® System, 2007). In the laboratory, 5 µL samples of the collected medium (Bolton broth) were transferred to microtubes containing 200 µL of lysis solution. The mixture was heated at 37°C for 20 min and 95°C for 10 min, and then it was transferred to a cooling block (2°C to 8°C) for 5 min. After cooling, 30 µL of lysate were transferred to PCR tubes containing the primers for C. jejuni, C. coli, and C. lari and the other reagents needed for the PCR assay. The tubes were transferred to the thermocycler/detector and the pre-established program for the hardware was followed. At the end of the amplification and detection cycles, the equipment automatically issued the results, identifying the species and the number as cfu/g. The ATCC 33291 strain of C. jejuni was processed in parallel as a positive control.
The Bolton broth (Oxoid) pre-enriched samples were inoculated onto mCCDA agar (Oxoid) with antibiotic supplement (Oxoid; 16.0 mg of cefoperazone and 5.0 mg of amphotericin B) and 5% bovine blood hemolysate, and the plates were incubated under microaerobic conditions at 37°C for 48 h. DNA from the colonies was extracted using a thermo-extraction procedure and amplified using primer sets I and II, as described by Gillespie et al. (2002). For the reaction, 20 ng of DNA and the other reagents (Invitrogen Brasil Ltda, São Paulo) were used, as described by Harmon et al. (1997). C. jejuni (ATCC 33291) was used as a positive control in all the amplification reactions, and the negative control consisted of sterile ultrapure water. The reaction components were amplified in the thermocycler using the conditions described by Harmon et al. (1997). At the end of the reaction, the amplified products were analyzed using electrophoresis on a 1.5% agarose gel, stained with SYBR Safe solution (Invitrogen Brasil Ltda), using 0.5x TBE running buffer at 8 V/cm. The gel was visualized under UV light in a transilluminator system with photodocumentation capability.
To confirm that the C. jejuni colonies recovered from the samples were those previously inoculated experimentally, the recovered colonies, the strain originally inoculated, a positive control of C. jejuni ATCC 33291 and a negative control (natural isolate of C. coli) were analyzed using the random amplification of polymorphic DNA (RAPD) technique.
The DNA was extracted by thermal treatment and quantified using a spectrophotometer. The primer used, 1290 (5'-GTGGATGCGA-3'; Akopyanz et al., 1992), was synthesized by Invitrogen Brasil Ltda. The final amplification reaction volume was 20 µL: 10 ng of bacterial DNA, 1x amplification buffer, 2.0 mM MgCl₂, 1 U Taq DNA polymerase, 200 µM of each deoxynucleotide triphosphate (dNTP) and 30 pmol of primer (Invitrogen Brasil Ltda). The amplification was performed in a thermocycler (Eppendorf) under the following conditions: one initial cycle at 92°C for 2 min; then 35 cycles, each consisting of the steps 92°C for 15 s, 36°C for 1 min, and 72°C for 1 min; and one final cycle at 72°C for 5 min. The amplified products and a 100 bp molecular weight marker (DNA Ladder, Invitrogen Brasil Ltda) were analyzed by electrophoresis on a 1.5% agarose gel, stained with SYBR Safe solution (Invitrogen Brasil Ltda) using a 0.5x TBE running buffer. The agarose gel was visualized under UV light in the transilluminator with a photodocumentation system.
We used descriptive statistics and expressed the samples positive for Campylobacter by PCR and by plate cultivation as percentages. The kappa coefficient (p < 0.05) was calculated with BioStat 5.0 to compare the results obtained from traditional culture plating to those obtained from the PCR assays.
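As a reminder of how the kappa agreement statistic between two detection methods is computed, the sketch below works through Cohen's kappa for a 2x2 agreement table; the counts are invented for illustration and are not the study's data.

```python
# Minimal sketch of Cohen's kappa for agreement between plate culture and PCR.
def cohens_kappa(both_pos, culture_only, pcr_only, both_neg):
    n = both_pos + culture_only + pcr_only + both_neg
    p_observed = (both_pos + both_neg) / n                      # observed agreement
    p_culture_pos = (both_pos + culture_only) / n
    p_pcr_pos = (both_pos + pcr_only) / n
    p_expected = (p_culture_pos * p_pcr_pos
                  + (1 - p_culture_pos) * (1 - p_pcr_pos))      # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical table: 5 samples positive by both methods, 85 negative by both.
print(f"kappa = {cohens_kappa(5, 0, 0, 85):.3f}")
```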
Results and Discussion
All positive and negative results found by plate culture were reproduced 100% by real-time PCR. No bacteria were recovered from the egg yolk of commercial eggs kept in contact with the wood shavings, nor from eggs inoculated with C. jejuni on the outer shell membrane.
In order to check whether the albumen had an inhibitory effect on the viability of C. jejuni, we used several albumen concentrations; C. jejuni was recovered from all concentrations of both SPF and commercial egg albumen. However, the number of CFU recovered was inversely proportional to the amount of albumen (Figure 1).
Unlike what occurred with commercial eggs, C. jejuni was able to pass through the pores of SPF eggs and remain viable up to the 3-h time point. Of the 30 eggs examined at 3 h, 20% (5/30) were positive for C. jejuni. The results of RAPD-PCR showed 100% similarity between the inoculated strain and the strains isolated from the SPF eggs. This result demonstrates the possibility of penetration of Campylobacter into eggs placed in contact with the bacteria and its viability inside the eggs for up to 3 h.
There was no recovery of bacteria in egg yolk from commercial eggs. These findings are in agreement with Paula et al. (2009), who also found no C. jejuni in commercial eggs that remained in contact with the bacteria culture when inoculated on the membrane of the shell.
The albumen is not a favorable environment for growth of microorganisms due to the presence of enzymes such as lysozyme, avidin, ovoflavoprotein, and ovotransferrin, which have antimicrobial activity directly or indirectly (Cogan et al., 2001). In this study, we found that the components of albumen are not sufficient to totally inhibit the growth of C. jejuni. However, to verify the effect of albumen on C. jejuni, we used the optimal temperature for growth. It may be that in practical situations the albumen is associated with non-ideal temperature for bacteria, thus inhibiting their growth. These findings suggest that the fertility of the egg is a decisive factor in the positive samples, because only in SPF fertile eggs were bacteria recovered, a finding similar to that found by Fonseca et al. (2007).
Despite the low percentage of positives in SPF eggs, the evidence of the passage of bacteria from the wood shavings into the interior of the SPF eggs may be related to the characteristics of the egg and of this microorganism. The motility of the bacteria and its size, equivalent to 0.2 to 0.8 µm in width by 0.5 to 5 µm in length (Vandamme et al., 1992), indicate that it can pass through an egg shell's pores, which have an average diameter of 11 µm to 12 µm.
A factor to consider regarding the absence of the pathogen at the later time points (7 and 24 h) in SPF eggs and at all time points in commercial eggs is immunity of maternal origin. This hypothesis is based on the work of Sahin et al. (2002), which stresses the importance of maternal immunity and the fragile nature of the bacteria, making it uncultivable during a relatively short period after infection. It may be that maternal immunity is crucial to the negativity of the commercial eggs, and that in SPF eggs this immunity interferes less or acts later; however, this study did not collect enough data to confirm this speculation. It is possible that temperature, associated with other factors such as the fertility and characteristics of the eggs, has an important role in inhibiting the presence of the bacteria.
Some studies have reported the difficulty of isolating bacteria of the genus Campylobacter in eggs. Doyle (1984) did not find Campylobacter spp. in the internal contents of eggs after they remained in contact with a bacterial culture at three different temperatures. Baker et al. (1987), Rabie (1992), and Zaki and Redda (1995) also failed to isolate Campylobacter in egg yolk. Sahin et al. (2002) demonstrated that C. jejuni has limited ability to penetrate the egg shell, even in eggs from the same breeders that eliminated the organism in their feces.
Birds are the main reservoirs of C. jejuni, and consumption of chicken meat is implicated as the main disseminator of campylobacteriosis in humans. However, this study shows that although the bacteria are able to penetrate the SPF egg shell and survive in the albumen environment, they are not able to penetrate and remain viable within commercial eggs at a temperature of 25°C. Some associated factors seem to prevent the bacteria from penetrating and surviving in commercial eggs, and thus the consumption of infertile eggs is not a risk for campylobacteriosis in humans.
"year": 2014,
"sha1": "a7ce8389b415ae4879055537ec866115f7be87f3",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/bjm/v45n1/v45n1a11.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd391d063619f80daf648b5970bdb90cf81f96d9",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Cosmological Parameters from Galaxy Clustering in the SDSS
We present estimates of cosmological parameters from the application of the Karhunen-Loève transform to the analysis of the 3D power spectrum of density fluctuations using Sloan Digital Sky Survey galaxy redshifts. We use Ω_m h and f_b = Ω_b/Ω_m to describe the shape of the power spectrum, σ_8g for the (linearly extrapolated) normalization, and β to parametrize linear theory redshift space distortions. On scales k ≤ 0.16 h Mpc⁻¹, our maximum likelihood values are Ω_m h = 0.264 ± 0.043, f_b = 0.286 ± 0.065, σ_8g = 0.966 ± 0.048, and β = 0.45 ± 0.12. When we take a prior on Ω_b from WMAP, we find Ω_m h = 0.207 ± 0.030, which is in excellent agreement with WMAP and 2dF. This indicates that we have reasonably measured the gross shape of the power spectrum but we have difficulty breaking the degeneracy between Ω_m h and f_b because the baryon oscillations are not resolved in the current spectroscopic survey window function.
Introduction
Redshift surveys are an extremely useful tool to study the large scale distribution of galaxies. Of the many possible statistical estimators, the power spectrum of the density fluctuations has emerged as one of the easiest to connect to theories of structure formation in the Universe, especially in the limit of Gaussian fluctuations, where the power spectrum is the complete statistical description. There are several ways to measure the power spectrum [for a comparison of techniques see 26]. Over the last few years, the Karhunen-Loève method [30, hereafter VS96] has been recognized as the optimal way to build an orthogonal basis set for likelihood analysis, even if the underlying survey has a very irregular footprint on the sky. A variant of the same technique is used for the analysis of CMB fluctuations [5].
The shape of the power spectrum is well described by a small set of parameters [6]. For redshift surveys, it is of particular importance to consider the large-scale anisotropies caused by infall [12]. Using a forward technique that compares models directly to the data, like the KL transform, enables us to easily consider these anisotropies in full detail. Here we present results of a parametric analysis of the shape of the fluctuation spectrum for the SDSS galaxy catalog.
Data
Sloan Digital Sky Survey
The Sloan Digital Sky Survey [SDSS; 31, 22] plans to map nearly one quarter of the sky using a dedicated 2.5 meter telescope at Apache Point Observatory in New Mexico. A drift-scanning CCD camera [10] is used to image the sky in a custom set of 5 filters (ugriz) [9, 20] to a limiting Petrosian [17] magnitude of m_r = 22.5. Observations are calibrated using a 0.5 meter photometric telescope [11]. After a stripe of sky has been imaged, reduced, and astrometrically calibrated [18], additional automated software selects potential targets for spectroscopy. These targets are assigned to 3 deg diameter (possibly overlapping) circles on the sky called tiles [2]. Aluminum plates drilled from the tile patterns hold optical fibers that feed into the SDSS spectrographs [29]. The SDSS Main Galaxy Sample [MGS; 23] will consist of spectra of nearly one million low redshift (⟨z⟩ ∼ 0.1) galaxies, creating a three dimensional map of local large scale structure.
Large Scale Structure Sample
Considerable effort has been invested in preparing SDSS MGS redshift data for large scale structure studies. The first task is to correct for fiber collisions. The minimum separation between optical fibers is 55″, which causes a correlated loss of redshifts in areas covered by a single plate. Galaxy targets that were not observed due to collisions are assigned the redshift of their nearest neighbor. Next the sky is divided into unique regions of overlapping spectroscopic plates called sectors. The angular completeness is calculated for each sector as if the collided galaxies had been successfully measured. Galaxy magnitudes are extinction-corrected with the Schlegel, Finkbeiner, & Davis [19] dust maps, then k-corrections are applied and rest frame colors and luminosities are calculated [3]. Subsamples are created by making appropriate cuts in luminosity, color, and/or flux. A luminosity function is then calculated for each subsample [4] and used to create a radial selection function, assuming an Ω_m = 0.3 and Ω_Λ = 0.7 cosmology; the characteristic magnitude is M* = -20.44 [4]. Rest frame quantities (i.e. absolute magnitudes) are given for the SDSS filters at z = 0.1, the median depth of the MGS. In a study of the two point correlation function of SDSS galaxy redshifts, Zehavi et al. [32] found that the bias relative to M* galaxies varies from 0.8 for galaxies with M = M* + 1.5 to 1.2 for galaxies with M = M* - 1.5. Norberg et al. [15] found similar results for the 2dF, with the trend becoming more pronounced at luminosities significantly greater than L*. The dependence of clustering strength on luminosity could induce an extra tilt in the power spectrum because more luminous galaxies contribute more at large scales and less luminous galaxies contribute more at small scales due to the number of available baselines. We minimize this effect by staying within M = M* ± 1.5. A uniform flux limit of m_r < 17.5 was applied, leaving 110,345 redshifts for sample 10 and 134,141 for sample 12. Although there are luminosity limits for this sample, it is essentially a flux limited sample with a (slowly) varying selection function. We used galaxies in the redshift range 0.05 < z < 0.17.
The Karhunen-Loève Eigenbasis
Following the strategy described in VS96, the first step in a Karhunen-Loève (KL) eigenmode analysis of a redshift survey is to divide the survey volume into cells and use the vector of galaxy counts within the cells as our data. This allows a large compression in the size of the dataset without a loss of information on large scales. Our data vector of fluctuations d is defined as d_i = (c_i - n_i)/n_i, where c_i is the observed number of galaxies in the i-th cell and n_i = ⟨c_i⟩ is its expected value, calculated from the angular completeness and radial selection function. The data is "whitened" by the factor 1/n_i to control shot noise properties in the transform (VS96). We call this the "overdensity" convention.
The KL modes are the solutions to the eigenvalue problem R Ψ_n = λ_n Ψ_n, with the correlation matrix of the data given by R_ij = ⟨d_i d_j⟩ = ξ̄_ij + δ_ij/n_i + η_ij/√(n_i n_j), where ξ̄_ij is the cell-averaged correlation matrix, δ_ij/n_i is the shot noise term, and η_ij/√(n_i n_j) can be used to account for correlated noise (not used in this analysis). The most obvious source of correlated noise in the MGS would be differences in photometric zero points between different
SDSS imaging runs, which would result in "zebra stripe" patterns of density fluctuations. The MGS selection has a magnitude limit, but no color selection terms, so the variation in target density depends only linearly on the photometric calibration. The r band zero point variation is 0.02 mag rms [1], indicating that the density variation should be < 2%. The transformed data vector B is the expansion of d over the KL modes Ψ_n: B_n = Ψ_n · d. The KL basis is defined by two properties: orthonormality of the basis vectors, Ψ_m · Ψ_n = δ_mn, and statistical orthogonality of the transformed data, ⟨B_m B_n⟩ = λ_n δ_mn.
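The following is a minimal end-to-end sketch of the KL construction just described: form the overdensity vector from counts-in-cells, build a signal-plus-shot-noise correlation matrix, solve the eigenproblem, and project the data onto the modes. The cell counts, expected counts and the toy correlation model are all invented inputs, not SDSS quantities.

```python
# Minimal sketch of the counts-in-cells KL transform on toy data.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 200
n_exp = rng.uniform(20.0, 60.0, n_cells)            # expected counts per cell
counts = rng.poisson(n_exp)                          # observed counts (toy data)
d = (counts - n_exp) / n_exp                         # overdensity vector d_i

# Toy cell-averaged correlation: exponential decay with cell index separation.
idx = np.arange(n_cells)
xi = 0.05 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)
R = xi + np.diag(1.0 / n_exp)                        # signal + shot noise

eigval, eigvec = np.linalg.eigh(R)                   # R Psi_n = lambda_n Psi_n
order = np.argsort(eigval)[::-1]                     # sort by decreasing eigenvalue
eigval, eigvec = eigval[order], eigvec[:, order]

B = eigvec.T @ d                                     # KL coefficients B_n = Psi_n . d
print("largest eigenvalues:", np.round(eigval[:3], 4))
```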
The Correlation Function in Redshift Space
In order to directly compare cosmological models to our redshift data using a two point statistic, we must calculate the redshift space correlation function ξ^(s)(r_i, r_j), where r_i and r_j describe positions in the observable angles and redshift. The infall onto large scale structures affects the velocities of galaxies, leading to an anisotropy in redshift space for a power spectrum that is isotropic in real space [12]. Szalay, Matsubara, & Landy [24] derived an expansion of the correlation function that accounts for this anisotropy in linear theory for arbitrary angles. This expansion (Eq. 3.4) writes ξ^(s) as a sum whose coefficients c_nL are polynomials in β and functions of the relative geometry of the two points. The quantity β relates infall velocity to matter density and is well approximated by the fitting formula β ≈ Ω_m^0.6 / b, where b is the bias parameter. Further terms in Eq. (3.4) are negligible as long as 2 + ∂ln φ(r)/∂ln r (where r is the distance to the cell and φ(r) is the radial selection function) does not significantly differ (i.e. by orders of magnitude) from unity. For the redshift range considered in this analysis, |2 + ∂ln φ(r)/∂ln r| < 4. When using counts-in-cells, we must calculate the cell-averaged correlation matrix, obtained by integrating ξ^(s) over the window functions of the two cells (Eq. 3.6), where W_i(y) is the cell window function and x_i is the position of the i-th cell. To be precise, W_i(y) should describe the shape of the cell in redshift space. Numerical calculation of this multi-dimensional integral can be computationally expensive. However, for the case of spherically symmetric cells we can change the order of integration and perform the redshift space integrals in Eq. (3.6) analytically before the k-space integral in Eq. (3.5). If both cells have the same window function, we can use Eq. (3.4) as our cell-averaged correlation function (with r_i and r_j indicating the cell positions) if we replace P(k) with P(k) W²(k) in Eq. (3.5), where W(k) is the Fourier transform of the cell window function. This results in a one dimensional numerical integral. The full technical details of our method will be presented in Matsubara, Szalay, & Pope [14].
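As an illustration of the one-dimensional k-integral just described, the sketch below computes an isotropic cell-averaged correlation between two spherical cells by replacing P(k) with P(k) W(k)², using the standard Fourier transform of a top-hat sphere. The toy power-law P(k) is an assumption of the example, the redshift-space (β-dependent) terms are ignored, and the integration is a simple Riemann sum rather than the scheme used by the authors.

```python
# Minimal sketch: cell-averaged correlation via a 1D k-integral with a sphere window.
import numpy as np

def sphere_window(k, R):
    """Fourier transform of a top-hat sphere of radius R."""
    kR = k * R
    return 3.0 * (np.sin(kR) - kR * np.cos(kR)) / kR**3

def cell_averaged_xi(r, R=6.0, kmax=2.0, nk=4000):
    """Isotropic cell-averaged correlation at cell-centre separation r (h^-1 Mpc)."""
    k = np.linspace(1e-4, kmax, nk)
    dk = k[1] - k[0]
    pk = 1.0e4 * (k / 0.1) ** -1.5            # toy power spectrum, (h^-1 Mpc)^3
    j0 = np.sinc(k * r / np.pi)               # spherical Bessel j0(kr) = sin(kr)/(kr)
    integrand = k**2 * pk * sphere_window(k, R) ** 2 * j0
    return np.sum(integrand) * dk / (2.0 * np.pi**2)

for r in (12.0, 24.0, 48.0):                  # cell-centre separations in h^-1 Mpc
    print(f"xi_bar({r:5.1f}) = {cell_averaged_xi(r):.4f}")
```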
We used hard spheres as our cell shape and placed them in a hexagonal closest packed (the most efficient 3D packing, with a 74% space-filling factor) arrangement.The current slice-like
survey geometry and packing arrangement causes some spheres to partially protrude outside the survey. The effective fraction of the sphere that is sampled is also affected by the angular completeness of our survey (which averages 97%). We calculate our expected counts as if the sphere were entirely filled and multiply the observed galaxy counts by 1/f_i, where f_i is the fraction of the i-th sphere's volume that was effectively sampled. This sparser sampling also increases the shot noise by a factor of 1/f_i. Cells with f_i < 0.65 were rejected as too incomplete. We found that a 6 h⁻¹ Mpc sphere radius allowed us to fill the survey volume with a computationally feasible number of cells without the spheres protruding too much out of the survey, while smoothing on sufficiently small length scales so that we do not lose information in the linear regime (2π/k ≈ 40 h⁻¹ Mpc). We used 14,194 cells for sample 10 and 16,924 for sample 12.
The calculation of the sampling fraction for each cell is difficult due to the complicated shapes of the sectors (see Section 2.2). We created a high resolution angular completeness map in a SQL Server database using 10⁷ random angular points over the entire sky. Each point was assigned a completeness weighting by finding which sector contained the point, or setting the completeness to zero for points outside the survey area. We used a Hierarchical Triangular Mesh [HTM; 13] spatial indexing scheme to find all points in the completeness map that pierce a cell and to calculate the volume weighted completeness for that cell.
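The sketch below shows the basic idea of estimating the effective sampled fraction of a spherical cell from random points weighted by a completeness mask. The mask function used here is a made-up toy, not the SDSS sector map, and no HTM indexing is used.

```python
# Minimal sketch: Monte Carlo estimate of the volume-weighted completeness of a cell.
import numpy as np

def toy_mask_completeness(x, y, z):
    """Toy survey mask: 97% completeness for y > 0, zero otherwise (illustrative)."""
    return np.where(y > 0.0, 0.97, 0.0)

def cell_completeness(center, radius, n_random=20000, seed=2):
    rng = np.random.default_rng(seed)
    # Uniform points inside the sphere via rejection sampling in the bounding cube.
    pts = rng.uniform(-radius, radius, size=(n_random, 3))
    pts = pts[np.sum(pts**2, axis=1) <= radius**2] + np.asarray(center)
    w = toy_mask_completeness(pts[:, 0], pts[:, 1], pts[:, 2])
    return w.mean()

f = cell_completeness(center=(0.0, 1.0, 0.0), radius=6.0)
print(f"effective sampled fraction f = {f:.3f}")
```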
Eigenmode Selection
The KL transform is linear, so there is no loss of information if we use all of the eigenmodes. However, if we perform a truncated expansion we can use the KL transform for compression and filtering. The difference between the original data vector and a truncated reconstruction d̂ = Σ_{n=1}^{M} B_n Ψ_n, where we use only M out of a possible N modes, can be related to the eigenvalues of the excluded modes by ⟨|d - d̂|²⟩ = Σ_{i=M+1}^{N} λ_i. The error is minimized (in a squared sense) when we retain modes with larger eigenvalues and drop modes with smaller eigenvalues, which is sometimes called optimal subspace filtering [28].
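A minimal numerical sketch of this truncation rule is given below: sort the eigenvalues, keep the M largest, and read off the expected mean squared truncation error as the sum of the excluded eigenvalues. The covariance matrix is a random toy matrix, not one built from survey data.

```python
# Minimal sketch of optimal subspace filtering on a toy covariance matrix.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((120, 120))
R = A @ A.T / 120.0                         # toy symmetric positive-definite matrix

eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

M = 30
expected_error = eigval[M:].sum()           # <|d - d_hat|^2> = sum over excluded modes
print(f"keeping {M} of {len(eigval)} modes, expected truncation error = {expected_error:.3f}")
```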
The eigenvalue of a KL mode is also related to the range in k-space sampled by that mode. Our models assume that linear theory is a good approximation, which is only valid on larger scales. Consequently we only wish to use KL modes that fall inside a "Fermi sphere" whose radius is set by our cutoff wavenumber k_f. If we sort modes by decreasing eigenvalue, they will densely pack k-space starting from the origin. The modes resist overlapping in k-space due to orthogonality. The shape of a KL mode in k-space resembles the Fourier transform of the survey window function. This means that the number of KL modes within the "Fermi sphere" depends mostly on the survey window function and does not drastically change if we change the size of our cells, as long as we have significantly more cells than modes (which means that our cells must be smaller than the cutoff wavelength). In a fully three dimensional survey the modes would fill k-space roughly spherically and M ∝ k_f³. However, the current SDSS geometry resembles several two dimensional slices, resulting in KL modes that resemble cigars in k-space. These modes pack layer-by-layer into spherical shells whose diameters are integer multiples of the long axis of the mode. See Fig. 5 in Szalay et al. [25] for a visualization. This results in a scaling more like M ∝ k_f². In choosing the number of KL modes to use in our analysis we try to keep as many modes as possible for better constraints on our parameter values while requiring that our modes are consistent with linear theory. We have developed a convenient method for determining the range in k-space
probed by each KL mode. We separate the integral in Eq. (3.5) into bandpowers in k. This allows us to determine how strongly each mode couples to each bandpower, which shows a coarse picture of the spherically averaged position of the mode in k-space. Fig. 1 illustrates this concept and Fig. 2 shows a grayscale image of how the modes couple to the bandpowers for the current analysis.
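The sketch below illustrates the idea: given the spherically averaged k-space window of a single KL mode, the integral over k is split into bands, yielding the fraction of that mode's variance contributed by each bandpower. The inputs (the mode window, a fiducial power spectrum, and the band edges) are assumed arrays; Eq. (3.5) itself is not reproduced here.

```python
# Sketch of coupling a single KL mode to k-bandpowers by splitting the variance integral.
import numpy as np

def band_couplings(k, window2, pk, band_edges):
    """Fraction of the mode's variance in each band [band_edges[j], band_edges[j+1])."""
    integrand = window2 * pk * k ** 2          # spherically averaged contribution per k
    total = np.trapz(integrand, k)
    fractions = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sel = (k >= lo) & (k < hi)
        fractions.append(np.trapz(integrand[sel], k[sel]) / total)
    return np.array(fractions)
```

A mode can then be kept if most of its variance comes from bands below the cutoff wavenumber.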
Once we choose a value for the cutoff wavenumber k_f, we truncate our expansion at the mode where wavenumbers larger than k_f start to dominate. We can use the statistical properties of the transformed data to check that we are avoiding nonlinearities. A rescaled version of the KL coefficients, b_n = B_n/Ψ_n, should be normally distributed. Non-linear effects would cause skewness and kurtosis in the distribution of b_n. We do not see evidence of non-linear effects when we use k_f = 0.16 h Mpc⁻¹ (corresponding to length scales 2π/k_f ≈ 40 h⁻¹ Mpc). This value for the cutoff wavenumber leaves us with 1500 modes for sample 10 and 1850 modes for sample 12.
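A simple version of this Gaussianity check is sketched below; interpreting Ψ_n as the expected variance of mode n (so that the rescaled coefficients have unit variance) is an assumption made for the illustration.

```python
# Sketch of the non-linearity check: rescaled KL coefficients should look Gaussian,
# so their sample skewness and excess kurtosis should both be close to zero.
import numpy as np
from scipy import stats

def gaussianity_check(B, psi):
    """Skewness and excess kurtosis of b_n = B_n / sqrt(Psi_n)."""
    b = np.asarray(B, dtype=float) / np.sqrt(np.asarray(psi, dtype=float))
    return stats.skew(b), stats.kurtosis(b)
```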
Model Testing
We estimate cosmological parameters by performing maximum likelihood analysis in KL space. The likelihood of the observed data vector d given a model m takes the standard zero-mean multivariate Gaussian form, L(d|m) = (2π)^{-N/2} |C_m|^{-1/2} exp(−d^T C_m^{-1} d / 2), where C_m is the covariance matrix, calculated by projecting the model correlation matrix into the KL basis. Our method is based upon a linear comparison of models to data, thus the R_m (and C_m) model matrices only contain second moments of the density field. This linear estimator is computationally more expensive than quadratic or higher-order estimators, but the results are less sensitive to nonlinearities. For a comparison of different estimation methods, see Tegmark et al. [26].
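For a single model covariance, evaluating this likelihood over the retained modes is straightforward; the sketch below uses a Cholesky factorization for the determinant and the quadratic form, with assumed input arrays rather than the actual survey data.

```python
# Minimal sketch of the Gaussian log-likelihood of the KL coefficients for one model.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_likelihood(B, C_m):
    """ln L for zero-mean Gaussian KL coefficients B with model covariance C_m."""
    n = B.size
    factor = cho_factor(C_m)
    log_det = 2.0 * np.sum(np.log(np.diag(factor[0])))     # ln |C_m| from the Cholesky factor
    chi2 = B @ cho_solve(factor, B)                         # B^T C_m^{-1} B
    return -0.5 * (chi2 + log_det + n * np.log(2.0 * np.pi))
```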
In practice we must decide on an explicit parametrization. We construct a power spectrum assuming a primordial spectrum of fluctuations with a spectral index n_s = 1. We use a fitting formula from Eisenstein & Hu [6] to characterize the transfer function, including the baryon oscillations. We fit for Ω_m h and f_b = Ω_b/Ω_m while taking a prior of H_0 = 72 ± 8 km s⁻¹ Mpc⁻¹ from the Hubble Key Project [8] and fixing T_CMB = 2.728 K [7]. We fit the linearly extrapolated σ_8g^L for normalization, where σ_8g^L = bσ_8m and b is the bias. Linear-theory redshift-space distortions are characterized by β (see Section 3.2).
In order to search an appreciable portion of parameter space we have developed efficient methods to calculate the model covariance matrices C_m. The straightforward approach would be to calculate the model correlation matrix for a set of parameters and then project into the KL basis and calculate a likelihood, but this is computationally expensive. The covariance matrix can easily be written as a linear combination of matrices and powers of σ_8g^L and β (see Section 3.2)
, so we can project pieces of the correlation matrix and add them in the appropriate proportions for those parameters. However, the shape of the power spectrum depends on Ω_m, f_b, and H_0 in a non-trivial way. We project each bandpower of the correlation matrix (see Section 3.3) separately and add the pieces of the covariance matrix together with appropriate weighting to represent different power spectrum shapes. This alleviates the need for further projections. We must be careful when choosing our bandpowers so that we retain sufficient resolution to accurately mimic power spectrum shapes (especially the baryon oscillations), but we must also be careful that our k ranges are large enough that the integrals converge correctly. Note that a non-optimal choice of fiducial parameters does not bias our results, but it can result in non-minimal error bars. This procedure can be iterated if necessary.
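A schematic of this fast covariance construction is shown below: the correlation matrix is projected band by band once, and a trial covariance is then assembled as a weighted sum of the precomputed pieces. The specific split into density-density, density-velocity and velocity-velocity terms scaled by σ_8g^L, β and β², and all of the names used, are illustrative assumptions about the bookkeeping rather than the exact expressions used in the analysis.

```python
# Schematic assembly of a model covariance from precomputed projected bandpower matrices,
# avoiding any new projections when the trial parameters change.
import numpy as np

def model_covariance(C_gg, C_gv, C_vv, band_weights, sigma8L, beta, noise):
    """C_gg, C_gv, C_vv: lists of projected matrices, one per bandpower, computed once for
    a unit-amplitude fiducial spectrum; band_weights: relative power of the trial spectrum
    shape in each band; noise: the projected noise matrix."""
    C = np.zeros_like(C_gg[0])
    for w, gg, gv, vv in zip(band_weights, C_gg, C_gv, C_vv):
        C += w * (gg + 2.0 * beta * gv + beta ** 2 * vv)
    return sigma8L ** 2 * C + noise
```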
Results and Discussion
Our best-fit maximum-likelihood parameter values for samples 10 and 12 are presented in Table 1. Results are given for the priors described in the "Model Testing" section and also when using the additional prior Ω_b = 0.047 ± 0.006 from WMAP [21]. We show the results of samples 10 and 12 to give some indication of sample variance, although sample 10 is a subset of sample 12.
The middle column of Fig. 3 shows the marginalized one-dimensional and two-dimensional confidence regions for the power spectrum shape parameters Ω_m h and f_b for sample 10 without the additional prior on Ω_b. There is a strong correlation between Ω_m h and f_b. The gross shape of the power spectrum (i.e., ignoring the baryon oscillations and concentrating on the position of the peak and the slope of the tail) is nearly constant along the ridge of this correlation, due to a degeneracy between shifting the position of the peak with Ω_m h and adding power to the peak with f_b. However, the strength of the baryon oscillations varies significantly over this range. Table 1 shows that our estimates of Ω_m h agree well with the WMAP value of 0.194 ± 0.04 [21] and the 2dF value of 0.20 ± 0.03 [16] when we use the additional prior on Ω_b, and the associated confidence regions are shown in the left column of Fig. 3. The results with the Ω_b prior indicate that the gross shape of the power spectrum we measure is consistent with WMAP and 2dF, as can be seen in Fig. 4, which shows the (isotropic) real-space power spectra inferred from the cosmological parameter estimates from the three surveys. However, the results without the Ω_b prior show that we have difficulty breaking the degeneracy between Ω_m h and f_b because the baryon oscillations are not resolved given the current state of the SDSS window function. The right column of Fig. 3 shows the marginalized one-dimensional and two-dimensional confidence regions for σ_8g^L (normalization) and β (distortions) for sample 10. Again there is a strong
correlation between these parameters, which is expected from their dependence on b. Our constraint on σ_8g^L is strong, but we can only measure β to 20%, which limits our ability to perform an independent estimate of b. We can compare our results to WMAP by examining the combination of parameters σ_8g^L β = σ_8m^L Ω_m^0.6, for which we obtain the value 0.44 ± 0.12, in excellent agreement with the WMAP result of 0.44 ± 0.10 [21]. By combining our measurements with WMAP results we find b = 1.07 ± 0.13 for our galaxy sample, but this compares information dominated by galaxies with redshifts 0.1 < z < 0.15 to present-day matter. If we use a ΛCDM model to extrapolate to the present, we would find b ≈ 1.16. Our galaxies cover a range of luminosities but our signal is dominated by the more luminous galaxies (brighter than L*) because there are more long baselines available for the more distant galaxies. This must be kept in mind when comparing our measurement of σ_8g^L with other estimates using SDSS data which focus on L* galaxies [25,27].

This analysis used less than one third of the data that will comprise the completed SDSS survey. Our ability to measure cosmological parameters will increase as the survey area increases, but we should also gain leverage in resolving features in the power spectrum as our survey window function becomes cleaner. The thickest slice of data from the samples used was roughly 10°, implying a thickness of about 50 h⁻¹ Mpc at z ≈ 0.1. As the slices become thicker, the KL modes will become much more compact in that direction in k-space. Thus we will benefit from the change in the survey aspect ratio in addition to the increase in survey area.
Figure 1: Illustrations of the coupling between modes and bandpowers. Part a) shows how a bandpower can be thought of as a shell in k-space. Part b) illustrates how a survey geometry that is slice-like in real space results in a cigar-shaped window function in k-space. Finally, part c) shows how the modes couple to the bandpowers as the modes pack in k-space.
Figure 2: Grayscale image of wave number vs. mode number. The horizontal red line indicates k_f = 0.16 h Mpc⁻¹. The vertical black line indicates the truncated number of modes used for likelihood analysis.
Figure 3: Likelihoods for parameters using sample 10. The left column shows the power spectrum shape parameters with an Ω_b prior. The middle column shows the power spectrum shape parameters without an Ω_b prior. The right column shows normalization and distortion parameters. The contours in the joint parameter plots are the two-dimensional 1, 2, and 3 σ contours. The points in the f_b vs. Ω_m h plots are MCMC points from WMAP (alone). Parameter combinations not plotted are nearly uncorrelated.
This analysis considers two samples of SDSS data, which we will label sample 10 and sample 12. Both samples were prepared in similar manners, although using different versions of software. Sample 12 represents a later state of the survey and the sample 10 area is contained in sample 12. Sample 10 represents 1983.39 completeness-weighted square degrees of spectroscopically observed SDSS data and 165,812 MGS redshifts. Sample 12 has 205,484 redshifts over 2406.74 square degrees. Both samples are larger than the 1360 square degrees of spectroscopy in data release 1 [DR1; 1] of the SDSS. The geometry of the samples and DR1 are qualitatively similar, consisting of two thick slices in the northern cap of the survey and three thin stripes in the south.
Table 1: Maximum likelihood parameter values and 68% confidences (marginalized over all other parameters). Ω_b indicates that a WMAP prior was used.
"year": 2006,
"sha1": "37af425e1ef0b4d4f0f11f5b655a091e5084ac69",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/016/007/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "37af425e1ef0b4d4f0f11f5b655a091e5084ac69",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Repurposing drugs in oncology (ReDO)—cimetidine as an anti-cancer agent
Cimetidine, the first H2 receptor antagonist in widespread clinical use, has anti-cancer properties that have been elucidated in a broad range of pre-clinical and clinical studies for a number of different cancer types. These data are summarised and discussed in relation to a number of distinct mechanisms of action. Based on the evidence presented, it is proposed that cimetidine would synergise with a range of other drugs, including existing chemotherapeutics, and that further exploration of the potential of cimetidine as an anti-cancer therapeutic is warranted. Furthermore, there is compelling evidence that cimetidine administration during the peri-operative period may provide a survival benefit in some cancers. A number of possible combinations with other drugs are discussed in the supplementary material accompanying this paper.
Subsequent in vivo results were also obtained with CIM in a number of rat and mouse models by different research groups, with attention focused in particular on immunomodulatory mechanisms [18][19][20]. Other early in vivo results indicated that CIM could enhance the cytotoxic effect of cyclophosphamide in male DBA2 mice injected with P-388 leukaemia cells, significantly increasing survival time [21]. However, there were issues with study replication, and it was reported by Hannant et al that in vivo results from other groups could not be reproduced [22]. Similarly, the potentiation of the cytotoxic effect of cyclophosphamide could not be replicated with a different mouse model, although a potentiation of the anti-tumour effect of razoxane was reported [23].
One possible explanation for these mixed results was suggested by the results in immunocompetent and immunosuppressed DBA2 mice, which showed that CIM reversed the accelerated growth of implanted tumours in immunosuppressed mice known to have higher levels of suppressor cell activity but had no effect on normal mice (though in one of three experiments CIM treatment increased tumour growth in normal mice, a result possibly due to atypically slow tumour growth in this experiment) [24].
The effect of CIM, often in combination with other agents, particularly immunomodulators such as interferon or IL2, has continued to be explored in numerous in vivo studies in a range of cancer types, including melanoma [25], ovarian cancer [26], colorectal cancer [27], gastric tumours [28], pancreatic cancer [29], lung cancer [30], and gliomas [31] in the years since the early 1980s. It is beyond the scope of this paper to fully summarise this wide range of studies, only a few of which have been referenced here, particularly as there is a comparable range of clinical work being carried out and which is summarised in the next section. However, it should be noted that it is the clinical evidence that is of primary interest from the point of view of drug repurposing [32].
Human data in cancer
One of the earliest references to an anti-cancer effect of CIM comes from a report of two cases published in Lancet in 1979 [33]. In both cases, patients with metastatic disease had displayed some tumour regression following treatment with CIM. This, and the subsequent flurry of animal results that focused primarily on the putative immunomodulatory role of CIM, initiated many clinical studies in the use of CIM in oncology. A full survey is beyond the scope of this paper, but the main results are summarised below, listed for those cancers for which there is the highest level of clinical evidence.

Colorectal cancer

Based on earlier in vitro and in vivo results [27], Adams and coworkers investigated the use of perioperative CIM in patients undergoing surgical resection of colorectal cancer. Control patients showed significant falls in lymphocyte proliferation and cell-mediated immunity. In contrast, patients treated with oral CIM at a dose of 400 mg twice a day for a minimum of 5 preoperative days, then intravenously for 2 post-operative days, showed no significant falls in either lymphocyte proliferation or cell-mediated immunity, indicating that CIM helped reduce post-operative immunosuppression following resection [34]. There was some indication that this difference could provide some clinical benefit: a follow-up that looked at survival at 3 years in two subsequent reports showed that, with a median follow-up of 30 months, the calculated 3-year survival was 93% for CIM-treated patients and 59% for controls [35,36].
In a randomised, blinded trial, Svendsen et al treated 192 patients with oral CIM, at a dose of 400 mg twice a day, following surgery (resection or exploratory) for colon (123 patients) or rectal (69 patients) cancer [37]. CIM treatment commenced in the first 3 weeks following surgery and continued for 2 years. The primary end point was cancer-specific mortality. In patients treated with curative intent (148 patients), there was no difference in this end point between the treatment and control arms. However, on stratification, the curatively treated patients with Dukes C disease tended towards lower cancer-specific mortality, though this did not reach statistical significance (29% reduction, 90% confidence interval 2-57%, p = 0.11 log-rank test). There were no differences between groups in the non-curatively treated patients.
In another double-blinded trial of preoperative CIM in Australia, 112 colorectal cancer patients were randomised to low-dose (400 mg twice a day), high-dose (800 mg twice a day), or placebo arms [38]. Treatment was given for five days prior to surgical excision. Kaplan-Meier survival analysis showed there were no significant differences between treatment groups, although there was a trend towards survival advantage in the high-dose CIM (800 mg) group (p = 0.20, log-rank test). However, stratification by replication error (RER) positive or negative tumours showed a statistically significant difference between the high-dose CIM group and the placebo group (p = 0.04, log-rank test) for patients with RER negative tumours.
An unblinded randomised multi-centre trial in Japan reported on long-term survival of colorectal cancer patients treated post-operatively with oral CIM, at a dose of 800 mg daily [39]. A total of 72 patients with colorectal cancer and a primary tumour of T2 or T3 were enrolled, after exclusion of patients who had previously been treated with chemotherapy, radiotherapy, or immunotherapy or who had multiple cancers or severe complications. Of these 72 patients, two did not undergo curative resection, three did not receive adequate drug administration, and three whose disease stage was considered inappropriate for the trial were further considered ineligible and were excluded from the analysis. The remaining 64 patients were randomly allocated, and none were lost to follow-up. All patients underwent curative resection and then received 200 mg per day of oral 5-FU for one year. The treatment group of 34 patients additionally received 800 mg per day of CIM for the 1-year treatment period. In both groups, treatment started 2 weeks post-surgery. Mean follow-up was 10.7 years and showed that the 10-year survival rate of the treatment group was 84.6%, whereas that of the control group was 49.8% (p < 0.0001).
A trial of oral CIM during the perioperative period in 49 patients suffering from gastrointestinal tumours investigated the effect on the immune response, including 19 patients suffering from colorectal cancer [40]. Patients were randomised to perioperative CIM (24 patients) or control (25 patients), with an equal number of colorectal cancer patients (19) in each arm. The patients in the treatment group received 400 mg of oral CIM three times a day for 7 days prior and up to the day of surgery, and then intravenous CIM at 600 mg twice a day during and post-surgery for 10 days. The control arm received standard of care without CIM. The primary end points were measurement of immune status, including peripheral blood lymphocytes, natural killer (NK) cells and tumour-infiltrating lymphocytes (TIL). No clinical outcomes were assessed. In comparison with blood counts from healthy individuals, both treatment and control arms showed decline in the proportion of total T cells, T helper cells, and NK cells. These changes were reversed in the patients in the CIM arm, who showed significantly higher counts on the 10th post-operative day than controls. Also significant was the difference in the number of patients showing increases in TIL response: 68% (17/25) of the patients in the treatment group had significant TIL responses, and only 25% (6/24) of the cases had discernible TIL responses (p < 0.01).
A 2012 Cochrane Review of H 2 RAs as adjuvant treatments for resected colorectal cancer pooled data from six randomised clinical trials, five of which utilised CIM and one used ranitidine. The review found a trend towards improved survival when H 2 RAs were utilised as adjuvant therapy in patients having curative-intent surgery for colorectal cancer (HR 0.70; 95% CI 0.48-1.03, P = 0.07). However, analysis of the five CIM trials (with pooled data for 421 patients) found a statistically significant improvement in overall survival (HR 0.53; 95% CI 0.32 to 0.87) [41]. Overall the authors concluded that: cimetidine appears to confer a survival benefit when given as an adjunct to curative surgical resection of colorectal cancers.
Melanoma
The earliest clinical evidence of an effect of the CIM on melanoma was in a series of three cases of recurrent malignant melanoma being treated with coumarin (at a dose of 100 mg per day). Oral CIM was started at a dose of 1000 mg per day when these patients were no longer responding to coumarin treatment. In these cases there was rapid regression of multiple lesions and a corresponding and long-lasting improvement in physical condition. In one further case, recurrent disease was treated with a lower dose of coumarin (25 mg) and CIM (1000 mg), but the disease progressed rapidly, and the patient died shortly after. This patient had not previously been treated with coumarin [42].
A similar pattern was recorded in a series of six melanoma patients, five of whom had disseminated disease, treated with human leukocyte interferon-alpha, with little evidence of effect. The addition of CIM after a period of 6-8 weeks led to a remarkable change, with complete remissions seen in two patients, a partial remission in one, and disease stabilisation in another [43]. The same authors subsequently reported on 20 patients who had also been treated with interferon-alpha with no objective responses; the subsequent introduction of CIM led to six objective responses, including five complete regressions and one extensive partial regression, and three cases of prolonged disease stabilisation [44].
At that time, the standard of care treatment for metastatic melanoma was treatment with dacarbazine or nitrosoureas, with an objective response rate of around 15% and median overall survival of 4 months [45].
A series of seven Phase II studies, in 191 patients, on the use of recombinant interferon-alpha 2a (rIFN-2a), alone and in combination with other agents, for the treatment of disseminated malignant melanoma was carried out by Creagan et al. One of these studies included oral CIM, at a dose of 1200 mg per day (300 mg QID), as an immunostimulant [46]. The response rate for these diverse studies ranged from 0% to 23%, with an aggregate response rate of 14% (27 of 191 patients), and median survival time was 6 months [47]. The best responses, at 23%, were for rIFN-2a monotherapy or rIFN-2a with CIM, suggesting no additional benefit for CIM, although there was a lower level of toxicity in the CIM group.
A later Phase II study, by a different group, looked at the combination of interferon-alpha 2b, IL-2, and cisplatin in metastatic melanoma in a group of 87 patients, with and without oral CIM. An overall response rate of 27% was achieved in the 82 patients evaluable for response, with median response duration of 7 months and median survival of 10.1 months. There were no significant differences between the CIM and non-CIM arms of the study [48].
It is possible that the difference in outcomes between the earlier trials and the more disappointing later trials may be related to the different formulations of interferon used. The form of interferon used by Flodgren et al was human leukocyte derived (HuIFN-alpha (Le)) [43,44], whereas the work of Creagan et al [47] and Schmidt et al [48] used recombinant interferon 2 alpha. Human-derived interferons contain a wider range of alpha interferon subtypes than recombinant interferons, which are normally restricted to alpha 2a or 2b, and there is some anecdotal evidence that recombinant interferons may not be as effective in stopping tumour development [49]. It may be hypothesised that CIM interacts with some additional alpha interferon subtypes to potentiate the effect and to improve response to treatment, as shown in the earlier trials.
Investigation of CIM as a monotherapy in metastatic melanoma was also investigated in two small Phase II trials, by different groups. In the first trial, 19 previously untreated patients were treated with oral CIM at 1200 mg (300 mg QID). Objective responses were observed in three (19%) patients, including one long-lasting (16+ months) complete response, one near-complete response (21+ months), and one partial response (7 months). Overall median time to progression was 1.4 months, and overall survival was 6 months [45]. A later trial of CIM monotherapy involved 15 treatment-naïve patients who were treated with high-dose CIM (600 mg QID), although three patients experienced stable disease for 2-4 months. There were no objective responses recorded, and median survival time was 5.3 months, suggesting little significant activity as a monotherapy in this group of patients, although toxicity was negligible even at this high dose [50].
Gastric
Early concerns were raised that the treatment of duodenal ulcers with CIM might alleviate the symptoms of gastric carcinoma, thereby masking disease progression [51], or that CIM itself may be carcinogenic and increase the risk of gastric cancers [52]. However, long-term post-marketing surveillance has shown no such association [53].

A report from Denmark assessed overall survival of gastric cancer patients treated with oral CIM at 800 mg per day (400 mg BID) for 2 years. In this double-blinded study, 181 patients were randomised to CIM or placebo immediately after surgery or the decision not to operate. Median survival in the CIM group was 450 days and 316 days in the placebo group, a statistically significant result (p = 0.02, log-rank test). Relative survival rates (CIM/placebo) were 45%/28% at 1 year, 22%/13% at 2 years, 13%/7% at 3 years, 9%/3% at 4 years, and 2%/0% at 5 years [54].
However, a larger randomised, double-blinded trial in the UK, involving 442 patients, did not find a positive effect of oral CIM [55]. Patients were randomised to either low-dose (400 mg, BID) or high-dose (800 mg, BID) CIM or placebo until tumour progression, recurrence, or death. The median survival for patients receiving CIM was 13 months (95% confidence interval, 9-16 months) and 11 months for the placebo arm (95% confidence interval, 9-14 months), a result that did not reach statistical significance. Within the CIM arms, median survival for the high-dose group was 13 months (95% CI 7-20 months), and 13 months (95% CI 8-18 months) for the low-dose. The 5-year survival was 21% for those randomised to CIM compared with 18% in the placebo arm, again a result that did not achieve statistical significance.
Renal cell carcinoma
The earliest clinical evidence that CIM might have some effect on renal cell carcinoma (RCC) was from a small trial that looked at the combination of CIM and coumarin in 45 patients suffering from metastatic RCC [56]. Patients were treated with coumarin, 100 mg orally daily; on day 15 of treatment, oral CIM was started at a dose of 1200 mg (300 mg QID), and treatment with both drugs was continued until disease progression. Of 42 evaluable patients, there were three complete responses (CR) and eleven partial responses (PR), giving an objective response rate of 33.3%. Twelve patients exhibited stable disease (SD). The median duration of the PR group was 5 months (in the range 4-21+ months), while the median duration of the SD group was 7.3 months (in the range 4-16.5+ months). There were no reported toxicities with the treatment. Subgroup analysis showed that there were no objective responses in the 14 patients who had not undergone nephrectomy, whereas the fourteen objective responses occurred in the 31 patients who had undergone nephrectomy.
A number of subsequent studies were unable to reproduce this encouraging result. A similar protocol was used in a three-centre Phase II trial that enrolled 31 patients, the majority of whom (84%) had been nephrectomised [57]. Whereas the original study used CIM at 300 mg four times a day, this trial used a dose of 400 mg three times a day, in all other respects the protocol was the same. Of the 31 patients treated, only two (6.5%) showed a PR of 63 weeks and 73 weeks. Both patients experienced regression of pulmonary metastases. Five patients experienced SD (in the range 28-45+ weeks). Similarly, another small study used a protocol identical to the original to treat 25 patients, 21 of whom had been nephrectomised. Here there were no objective responses recorded, although five patients experienced SD for more than 3 months. One possible explanation for the disparity may be explained by the better performance status and lower tumour burden of the patients in the original study [58].
CIM has also been investigated in combination with human lymphoblastoid interferon-alpha (LIFN-a) in RCC. A total of 37 patients with advanced RCC were treated between 1982 and 1995 in Japan, of whom 21 patients had metastatic disease at presentation, and 15 had recurrence after nephrectomy. LIFN-a was administered intramuscularly at 5 million units (MU) daily for 5 to 7 days a week for at least 8 weeks, and CIM was administered orally at 200 mg QID [59]. Treatment resulted in an objective response rate of 41%, with a CR in seven patients and a PR in eight. Additionally, 12 patients exhibited SD. Patients with lung metastases showed the best response to therapy. The 5-year survival rates for patients with and without response and overall were 74%, 20%, and 41%, respectively. Histopathologically, high-grade tumours had a better response to combined therapy than did low-grade tumours.
A subsequent Phase III trial by the same group compared treatment of LIFN-a alone with LIFN-a and CIM, with 36 patients recruited to LIFN-a alone and 35 patients to combined LIFN-a and CIM. Intention-to-treat analysis showed one CR, four patients with PR, 16 with SD, and 12 with progressive disease (PD) among the 36, with an overall response rate of 13.9%. Of the 35 patients in the LIFN-a and CIM arm, there were two cases of CR, 8 patients with PR, 13 with SD and 11 with PD, yielding a response rate of 28.6% (P = 0.13). Time to progression ranged from 9 to 845 days (median 112 days) in the LIFN-a group, and from 31 to 1,568 days (median 125 days) in the LIFN-a plus CIM group (P = 0.87) [60]. While tending towards improved response, the authors concluded that the addition of CIM to LIFN-a did not significantly improve the response rate compared to LIFN-a alone.

Despite this conclusion, interest in the combination of interferon and CIM for advanced RCC continues with the addition of other agents. For example, the combination of LIFN-a, CIM, the COX-2 inhibitor meloxicam and the angiotensin II receptor antagonist candesartan was investigated in the Phase II (I-CCA) trial involving 51 patients, of whom 37 (73%) had received prior nephrectomy [61]. Patients received 3-6 MU of LIFN-a, 400 mg CIM BID, 10 mg meloxicam daily and 4 mg candesartan daily. Initially the angiotensin converting enzyme (ACE) inhibitor perindopril erbumine was used, but as this caused a persistent cough in some patients, it was replaced with candesartan. CR was observed in four patients (8%) and PR in seven (14%), giving an overall response rate of 22%. None of the four CR patients relapsed during the 16-81 month follow-up. Of the remaining patients, 24 patients (45%) had SD for at least 6 months, yielding a clinical benefit in 67% of patients, with no grade 3/4 toxicities observed. The median progression-free survival and overall survival were 12 and 30 months, respectively. These results were sufficient for the authors of the study to conclude that the therapy was a potential first-line treatment for advanced RCC that needed to be confirmed in a large international Phase III trial.
The use of high-dose CIM as a single agent in metastatic RCC has also been the subject of a Phase II clinical trial involving 42 patients in the United States. Patients, of whom 38 were evaluable, were treated with 600 mg CIM QID. Two patients showed CR, one of 26 months and one of 33+ months, yielding an objective response rate of 5.3%. There were no cases of PR and four cases of SD (duration in the range 3-9+ months). Both patients with CR had experienced prior nephrectomy [62]. At this relatively high dose, toxicity was minimal, with one case of mild leukopenia reported.
Other cancers
In addition to the clinical investigations in colorectal and gastric cancer, melanoma, and RCC, there have been a few clinical studies in other cancer types. An investigation of a possible correlation between preoperative CIM and measures of tumour cell proliferation (Ki-67 staining) in breast cancer found no association [63].
In pancreatic cancer, there is a recently published case report of activity of the anti-angiogenic agent TL-118, which includes CIM as one of four drugs that make up the combination agent [64]. In this case report, a 75-year old woman with radiologically confirmed inoperable pancreatic cancer has been treated with TL-118 and gemcitabine and has shown a long-lasting (16 months) progression-free survival. Treatment interruption correlated with an increase of tumour marker CA 19-9, and resumption of treatment reduced levels of this marker.
In metastatic prostate cancer, an early trial looked at the combination of CIM and coumarin, using the same protocol as that for melanoma and RCC [65]. While no objective responses were reported in the fourteen patients in the trial, three patients experienced significant reduction in pain from bone metastases and decreased analgesic use that persisted until disease progression at 3, 5.5+, and 9 months.
A small trial in 28 advanced serous ovarian carcinoma patients found that standard platinum-based chemotherapies augmented with CIM, at a dose of 800 mg/day, commencing 2 weeks before surgery and continuing synchronously with chemotherapy, showed statistically significant improvements in overall survival compared to platinum-based chemotherapy alone [66].
The use of oral CIM in the treatment of Kaposi's Sarcoma in patients with AIDS was investigated in eight patients with progressive disease (PD) [67]. CIM was given orally at a dose of 300 mg QID, rising to 600 mg QID if there was no response within one month. Of the eight patients evaluated for response, one showed a complete remission of 7+ months, one patient had a partial response of 8 months and one showed a mixed response of initial regression followed by PD. The other five patients all showed PD. No patients reported toxicity, and several reported symptomatic improvements.
Clinical trials
TL-118 is a novel drug combination produced by Tiltan Pharma Ltd, Israel. Designed as a multi-targeted anti-angiogenic agent, the four drugs that make up the combination are: CIM, low-dose cyclophosphamide, diclofenac, and sulfasalazine. TL-118 is formulated as an oral suspension and is designed to be taken by patients at home rather than administered in a clinical setting. Currently, there are three Phase II clinical trials of TL-118:

NCT01509911 is an international multi-centre trial in metastatic pancreatic cancer for patients starting gemcitabine treatment. The primary outcome is the disease control rate after 16 weeks of treatment.
NCT01659502 is a single centre study in pancreatic cancer. The primary outcome is a clinical benefit measurement (a composite score based on pain, performance status, and weight) in a 2-year time frame.
NCT00684970 is a multi-centre, single-country trial designated as a Phase IIB trial for metastatic castration-resistant prostate cancer. The primary end point is progression-free survival from 24 weeks after commencement of treatment up to 3 years. Secondary end points include overall survival, time to prostate-specific antigen (PSA) progression, PSA response, and pain response in evaluable patients.
A randomised, double-blinded Phase II trial in Australia and New Zealand (ACTRN12609000769280) is investigating the perioperative use of CIM in patients with colorectal cancer treated with curative resection. The dose is 800 mg tablets twice daily for 5 weeks, starting a week before surgery. The primary outcome is 2-year disease-free survival, with additional subgroup analysis of patients with positive tumour staining for sialyl Lewis antigens. Secondary end points include longer-term disease-free survival, overall survival, and duration of postoperative inflammatory cytokine elevation, assessed as the time that plasma concentrations of each cytokine (TNF, IL-1B, IL-6, IL-8) are elevated above pre-treatment baseline. Recruitment to the trial is complete, and 45% of the cohort have rectal cancer [68].
Mechanism of action
The anti-tumour action of CIM has been shown to be due to four distinct mechanisms:
• Anti-proliferative action on cancer cells
• Immunomodulatory effects
• Effects on cell adhesion
• Anti-angiogenic action
Cancer cell proliferation
It has been shown, in vitro and in vivo, that multiple tumour types express the histamine-synthesising enzyme, L-histidine decarboxylase (HDC) and that tumours can secrete high levels of histamine in a paracrine and/or autocrine fashion. Histamine is highly pleiotropic, with multiple functions involving inflammatory immune response, gastric acid secretion, and action as a neurotransmitter. These diverse physiological actions are mediated by four histamine receptors, of which H 2 and H 4 are implicated in cancer cell proliferation, invasion, and angiogenesis [69,70].
A direct effect on cancer cell proliferation has been shown in two xenograft experiments in which exogenous histamine increased tumour growth in C170 and LIM2412 human colorectal cell lines implanted in Balb/c nu/nu mice, an effect that was reversed by oral CIM but not by the H 1 RA diphenhydramine [27]. Similar results have been shown in gastric cancer, with locally applied histamine increasing the proliferation of implanted MKN45G xenografts in nude mice, an effect abrogated by CIM [71]. In vitro analysis in both colorectal and gastric cancer cell lines showed that the dose-dependent increase in cellular proliferation induced by histamine was associated with an accumulation of cyclic adenosine monophosphate (cAMP) [27,71].
However, there is also some evidence to suggest that some anti-proliferative effects of CIM may not be entirely related to generic activity as an H 2 RA. A comparison of the anti-proliferative effect of different H 2 RAs in gastric cancer cell lines showed that CIM significantly reversed histamine-stimulated proliferation in a dose-dependent manner, ranitidine had a lesser effect and famotidine showed no effect [72]. This suggests either that there is something specific about the binding of CIM to the H 2 receptor or else there are additional off-target effects of CIM action. For example, there is also some evidence CIM can cause apoptosis in the Caco-2 human colorectal cancer cell line independent of its action as an H 2 RA [73]. Similarly, while CIM synergised with a novel phospho-valproic acid to inhibit pancreatic tumour growth in mouse models, another H 2 RA, ranitidine, showed no such activity [74].

Immunomodulation

Histamine has multiple effects on both innate and adaptive immune responses, mediated by the four histamine receptors (H 1 -H 4 ). In relation to cancer, histamine is associated with an immunosuppressive tumour microenvironment, including an increase in CD4 + CD25 + regulatory T cell (Treg) activity, reduced antigen-presenting activity of dendritic cells (DC), reduced NK-cell activity and increased myeloid-derived suppressor cell (MDSC) activity [75][76][77].
In particular, histamine binding to the H 2 receptor is associated with suppression of IL-12 and stimulation of IL-10 secretion and is implicated with a shift in Th1/Th2 balance toward Th2-dominance of the immune response. This effect was reversed by CIM in human PBMC [78]. Similarly, in an HDC knock-out mouse model, animals inoculated subcutaneously with the LM2 murine breast cancer cell line showed slower tumour growth than in HDC wild-type mice. The knock-out mice, lacking endogenous histamine, showed a predominance of Th1 cytokines and a lower level of Foxp3 (associated with CD4 + CD25 + Tregs) expression compared to wild-type tumour-bearing mice [79].
In addition to Treg cells, the other key drivers of the immunosuppressive tumour microenvironment are MDSC cells, involved in extensive cross-talk with Tregs in promoting T-cell dysfunction and in skewing the immune response towards Th2 [80]. MDSCs express H 1 -H 3 receptors, and there is in vitro and in vivo evidence that blockade of H 1 (using the H 1 RA cetirizine) or H 2 (using CIM), can reverse the immunosuppressive action of these cells [77]. For example, the addition of CIM reduced the tumour burden in a B16 melanoma mouse model [77] and in a mouse model of 3LL lung tumour [30].
CIM has also been shown to increase the in vitro antigen-presenting activity of monocyte-derived DC, in advanced colorectal cancer patients compared to controls [81]. An increase in NK activity compared to non-CIM-treated controls has also been noted in cardiopulmonary bypass surgery [82].
Additionally, perioperative CIM has been shown to reverse the inhibition of lymphocyte proliferation induced by histamine and to increase the number of TIL in colorectal and gastric cancer patients [36,40,76]. Increased TIL was associated with prognostic significance in these trials, and is also considered significant in a range of other cancer types, including breast, ovarian, brain, and head and neck cancers.
Cell adhesion
CIM has been shown to have an inhibitory effect on cancer cell adhesion to endothelial cells independent of its H 2 RA activity. Using a monolayer cell adhesion assay the adhesion of HT-29 colorectal cancer cells to human umbilical vein endothelial cells was investigated for CIM and two other H 2 RAs (famotidine and ranitidine). Where CIM inhibited adhesion in a dose-dependent manner, the other H 2 RAs had no effect. In a nude mouse model, CIM dose-dependently reduced the incidence of HT-29 liver metastases, suppressing it completely at the highest dose (200 mg/kg/day) [83]. The effect on cell adhesion was mediated by the interaction between tumour sialyl Lewis antigens and E-selectin expressed on the endothelium. Subsequent investigation has shown that there is a positive correlation between response to CIM treatment in colorectal cancer patients and high expression levels of sialyl Lewis-X and sialyl Lewis-A [39]. The reported 10-year cumulative survival rate of the CIM group with higher staining of sialyl Lewis-X in tumours was 95.5%, whereas that of a control group was 35.1% (P = 0.0001).
In addition to colorectal cancer, the inhibitory effect on cell adhesion has been demonstrated for other cancers, including breast [84], salivary gland tumours [85], gastric cancer [86] and glioblastoma [31].
Angiogenesis
The final mechanism of action that has been investigated in relation to the anti-cancer action of CIM is the effect it has on tumour neo-angiogenesis. Ghosh et al investigated the role of histamine in the production of vascular endothelial growth factor (VEGF) in carrageenin-induced granulation tissue in rats, and found that it was mediated by the H 2 receptor, and that the upregulation of VEGF induced by histamine was reversed by CIM [87]. A study comparing CIM and roxatidine (another H 2 RA) found that both drugs strongly reduced colon 38 tumour implants in syngeneic C57BL/6 mice, and that this inhibition was related to reduced expression of VEGF and reduced micro-vessel density in the implanted tumours [88]. Additionally, there is also evidence that the anti-angiogenic effect of CIM administration may be related to a reduced expression of platelet-derived endothelial cell growth factor (PDECGF), as well as VEGF, in mouse and rat models of bladder cancer [89].

Mechanistically, it has been suggested that VEGF expression is increased by histamine via the activation of the cyclooxygenase-2 (COX-2) pathway in colorectal cancer cell lines, a process mediated by the H 2 and H 4 receptors [90]. This process was disrupted by the H 2 RA zolantidine and the H 4 RA JNJ 7777120. In contrast, CIM showed no effect on VEGF expression in an in vitro endothelial cell model of angiogenesis [91].
Next steps
The abundance of clinical evidence shows that CIM has demonstrable therapeutic effects in a range of cancers, particularly cancers of the gastrointestinal tract, RCC, and melanoma (summarised in Table 1). There is also evidence, both in vitro and in vivo, that these effects are most likely related to well-documented immunomodulatory effects. Furthermore, the evidence indicates that these effects may extend beyond the direct effect on the H 2 histamine receptor and that CIM has off-target effects which are not shared by other H 2 RAs. We can hypothesise, therefore, that a portion of the variability of response to CIM reported in different clinical trials may be explained by the degree of variability of immune function in cancer patients. In common with other immunotherapeutic agents, this suggests that CIM may be more efficacious in patients with lower tumour burden and higher immune function, and in cancers with a greater antigenic potential. Indeed, an explanation proffered for the differences in response reported by different clinical trials in RCC was the better performance status (related to tumour burden) in patients in early trials compared to the poorer response reported in later trials [58].
This suggests that, in general, CIM should not be used as a single agent in an adjuvant setting, or in patients with large tumour burden or in cancer types which are known not to respond well to immunotherapeutic intervention. However, with that proviso, there is still considerable scope for clinical investigation of CIM as an immunostimulant, with a possible anti-metastatic action, in a range of cancer types.
In particular, one window of opportunity exists in using CIM to address the issue of post-operative immunosuppression. It is known that surgical resection, a mainstay of cancer treatment for many forms of the disease, causes a post-surgical immune suppression that may be associated with an increased risk of recurrence or metastatic spread [92,93]. There is already strong evidence, summarised in a Cochrane Review, that perioperative CIM is associated with reduced immunosuppression and a lower risk of disease recurrence in the curative resection of colorectal cancer [41]. Moreover, the increased risk of post-surgical recurrence exists in other forms of cancer, including breast, lung, head and neck, and osteosarcoma. In breast cancer, for example, the perioperative use of the non-steroidal anti-inflammatory drug (NSAID) ketorolac is being investigated as a potential agent to improve survival following mastectomy [94]. CIM is of potential benefit in these other cancers, in addition to the established benefit in colorectal cancer. Given the evidence that perioperative CIM reduces post-surgical immunosuppression, it is suggested that there is a need for clinical trials to establish whether it may be of benefit, in terms of overall survival, in these cancer types. It may be critical in these trials to start CIM treatment in the days immediately prior to surgery and continue for a number of weeks following; indeed, it is worth noting that the most impressive clinical trial data show a dramatically improved survival for colorectal patients treated with oral CIM (800 mg/day) and oral 5-FU (200 mg/day): the CIM-treated group had 10-year survival of 84.6% versus 49.8% for the 5-FU-only group [39]. Additionally, the investigation into the combined perioperative use of CIM and diclofenac/ketorolac warrants attention.
There is evidence that an immunological adjuvant may be of benefit in a wide range of cancers, including some of those in which CIM has already shown some clinical benefit. It is suggested that CIM be investigated as an adjuvant to the existing standard-of-care therapies in these diseases.
Given the primary putative mechanisms of action-the strongest evidence is for effects on immunity and cell adhesion-there are a number of additional agents that warrant investigation for synergy with CIM, some of which are listed in the supplementary material accompanying this paper.
New protocols
It is instructive to review the ongoing clinical trials of CIM as an anti-cancer agent as they serve as useful templates for future investigations. The Australian trial of perioperative CIM in colorectal cancer (ACTRN12609000769280) builds directly on a number of similar earlier trials, which are effectively summarised in a Cochrane Review [41]. It is to be hoped that a positive result in this trial will focus attention once more on the potential of CIM to positively affect overall survival. Of note, a recent analysis of the long-term results of the EORTC 22921 trial found that adjuvant fluorouracil-based chemotherapy after preoperative radiotherapy (with or without chemotherapy) does not affect disease-free or overall survival [95], suggesting that this is an indication where progress is urgently needed and where CIM already has shown strong clinical evidence of effect.
However, using CIM to address post-surgical immune suppression is not the only possible model of use. The other ongoing clinical trials combine CIM with a number of other low-cost agents to form the novel drug combination TL-118. In this model of use, it is the combination of multiple repurposed drugs, with similarly low toxicity and low costs, which together form effective and novel treatment options. In this manner, we can create multi-targeted protocols which pose minimal risks to patients and yet offer hope of therapeutic efficacy. A number of these protocols are described in the supplementary material. Of necessity, such combinations are speculative, and though the evidence for the individual agents may be strong, the evidence for these combinations is often mechanistic or based on pre-clinical data only. While there is a need for more pre-clinical studies, it can be argued that, given the urgency of patient need and the low toxicity of these proposed combinations, it is acceptable that small patient trials will begin in the near future.
Conclusion
The evidence for an anti-cancer effect of CIM treatment comes from in vitro, in vivo, and considerable amounts of human data. There are a number of well-described mechanisms of action, particularly of multiple immunomodulatory effects which have been assessed in data from clinical trials as well as from in vivo models. As an agent CIM has well-established pharmacokinetics and an excellent toxicity profile. Its use in clinical trials together with several chemotherapeutic agents has not shown any clinically relevant interactions, except with epirubicin, and showed evidence of a possible protective effect with vinorelbine and cisplatin. It is, therefore, a very strong candidate for repurposing as an oncological treatment, particularly as a perioperative treatment for surgical resection of solid tumours, in combination with existing standard treatments and alongside other repurposed drugs in a range of cancers.
A number of these multi-drug combinations have been outlined in the supplementary material in the hope that clinicians act upon this data to initiate clinical trials as a matter of some urgency.
Introduction
The following drugs warrant further investigation in combination with cimetidine (CIM), both in pre-clinical studies and potentially in clinical trials. These combinations, listed in Table A1, have been selected on the basis of existing pre-clinical and clinical experience in each of the indications. In some cases, these combinations replicate existing protocols currently being tested in clinical trials, but substitute known and repurposed drugs for the newer and/or more toxic agents currently being investigated. All these proposed combinations are expected to display relatively low toxicity and use low-cost and generally available agents.
Higher-priority agents
The agents listed below have a high degree of clinical evidence of efficacy and are currently either in clinical use in oncology or are currently being investigated in clinical trials. They have been selected as potential agents to be used in combination with CIM. Note that these drugs are not listed in order of priority.
• Metronomic chemotherapy – There is increasing clinical interest in using metronomic dosing schedules with a range of existing chemotherapeutic drugs, particularly cyclophosphamide, capecitabine, etoposide, temozolomide, and vinorelbine [1,2]. At the continuous low doses used in metronomic chemotherapy, there is little evidence of a direct cytotoxic effect on tumour cells, with increasing evidence that the therapeutic effect is primarily driven by anti-angiogenic and immunomodulatory actions [3,4]. Combining low-dose metronomic chemotherapy with other immunotherapeutic agents is an attractive proposition that is being actively investigated in clinical settings [5,6]. Given the strong evidence that CIM acts in multiple immunomodulatory ways, there is every reason to believe that it would synergise with existing metronomic chemotherapy regimens to reverse cancer-induced immune suppression, improve activity of cytotoxic T lymphocytes, and prime immune responses in a Th1 direction. There is no evidence to suggest that the addition of CIM to these regimens would increase the lower levels of toxicity associated with metronomic chemotherapy.

• Itraconazole – This broad-spectrum antifungal drug is being, or has been, clinically investigated as an anti-cancer agent in a number of trials, including for metastatic prostate cancer (NCT00887458), basal cell carcinoma [7], non-small cell lung cancer [8], refractory ovarian cancer [9], and triple-negative breast cancer [10]. The putative mechanisms of action are anti-angiogenic and inhibition of the Hedgehog signalling pathway [11,12]. There is pre-clinical evidence to suggest that the combination of Hedgehog pathway inhibition and the targeting of myeloid-derived suppressor cells (MDSC) may be a beneficial strategy in some hard-to-treat tumours such as pancreatic cancer [13]. It is suggested, therefore, that the combination of itraconazole and CIM be investigated in pancreatic and other solid tumours. It should be noted that there is some evidence in animal models of an interaction between itraconazole and CIM, such that the AUC of CIM was increased by 25%, an effect which may be of clinical benefit in extending the therapeutic effect of this drug combination [14].

• Diclofenac/Ketorolac – As with a number of other NSAIDs (particularly those with evidence of COX-2 inhibitory properties), diclofenac shows some evidence of anti-cancer activity. There is evidence that perioperative or intraoperative diclofenac or ketorolac may be associated with lower risk of cancer recurrence or metastatic spread following surgical resection of tumours [15]. Additionally, there is pre-clinical evidence of a direct anti-cancer role of diclofenac in a number of different malignancies. For example, there is in vivo evidence in ovarian cancer [16], in melanoma [17], and in glioblastoma [18]. Diclofenac has also been used in the treatment of desmoid tumours in adult and paediatric patients, including in combination with vinblastine [19]. Mechanisms of action of diclofenac include anti-inflammatory effects, inhibition of COX-2 and effects on tumour cell metabolism [17]. Of particular interest is the immunological effect of diclofenac, which has been shown, ex vivo, to potently reverse post-irradiation immunosuppression [20]. It is hypothesised that the effects of diclofenac or ketorolac, including anti-inflammatory and pro-immunity effects, would synergise with the immunotherapeutic effects of CIM and that this combination warrants clinical investigation.
• PSK/PSP – […] treated with curative resection followed by adjuvant PSK in these three cancers, and these are effectively summarised by Maehara et al [22]. There have been fewer clinical trials evaluating the use of PSP, though it is notable that a small trial in non-small cell lung cancer patients found slower disease progression than in non-treated controls [23]. Also notable is a double-blinded clinical trial in canine hemangiosarcoma which found that high-dose PSP significantly delayed the progression of metastases and afforded the longest survival times reported to date in this condition [24]. The anti-cancer activity of both PSK and PSP is understood to be primarily due to diverse effects on the immune system, including reversal of immune suppression, upregulation of Th1 cytokine production and direct effects on tumour cells. These mechanisms of action parallel those of CIM, suggestive of a possible synergy that should be investigated in particularly intractable malignancies, such as pancreatic cancer [25] or soft-tissue sarcomas.

• Mebendazole – This widely used anthelmintic drug has shown pre-clinical and clinical evidence of activity against a range of cancers. It is being investigated in two clinical trials, with temozolomide, in glioblastoma, and there are case reports in colorectal and adrenocortical carcinoma [26]. The primary mechanism of action as an antiparasitic is the inhibition of tubulin polymerisation in the gut of helminths, and microtubule disruption has also been assessed in relation to its potential anti-cancer activity, for example in melanoma and glioblastoma models [27,28]. There is some evidence that CIM increases the peak plasma levels of mebendazole [29], an effect that may prove therapeutically useful in combination protocols. In particular, both drugs show evidence of anti-cancer activity in colorectal cancers, and it is suggested that clinical or pre-clinical investigation for colorectal disease is warranted.

• Hydroxychloroquine – The anti-malarial drugs chloroquine and hydroxychloroquine are known inhibitors of autophagy currently being investigated with a range of standard-of-care treatments against cancer [30]. The rationale is that cellular stresses generated by cancer treatments induce an autophagic response from tumour cells and that this autophagic state confers resistance to treatment. Inhibition of autophagy in such cases blocks this resistance and increases radio- and chemo-sensitivity [31]. However, there is intriguing evidence that the effectiveness of autophagy inhibition as a strategy can be compromised by defective immune responses [32,33]. Possibly this is because autophagy inhibition leads to greater immunogenic cell death (ICD) in certain treatment modalities, such as radiotherapy, photodynamic therapy and other ROS-related cell death pathways (though not necessarily for certain chemotherapeutic agents) [32,34,35], but in an immunosuppressive environment ICD does not lead to a greater anti-tumour immune response. The use of CIM to reverse immunosuppression in parallel with autophagy inhibition (with chloroquine, hydroxychloroquine or another inhibitor, such as the antibiotic clarithromycin [36]) could therefore be a fruitful strategy to pursue in a clinical trial.
• Aspirin-There is abundant epidemiological evidence of a cancer-preventative effect of long-term aspirin usage, summarised for example by Thun et al [37]; in recent years there has also been increasing interest in the adjuvant effects of aspirin treatment [38][39][40]. In particular, the use of aspirin post-diagnosis has been shown to be associated with improved overall survival in colorectal, breast, prostate, and oesophago-gastric cancers [40,41]. There is also some evidence that aspirin use is associated with improved long-term survival in non-small cell lung cancer treated with curative resection [42]. It is proposed, therefore, that aspirin be combined with CIM for long-term post-operative protocols in a number of cancer indications, including colorectal and breast cancer. | 2018-04-03T03:20:01.989Z | 2014-11-26T00:00:00.000 | {
"year": 2014,
"sha1": "1a452908467d2cb0b04859461b158fe928425523",
"oa_license": "CCBY",
"oa_url": "https://ecancer.org/journal/8/pdf/485-repurposing-drugs-in-oncology-redo-cimetidine-as-an-anti-cancer-agent.php",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b2197e721634248c07803235a460888e3bd16c2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259485238 | pes2o/s2orc | v3-fos-license | Two‐Compartment Perfusion MR IVIM Model to Investigate Normal and Pathological Placental Tissue
Perfusion and diffusion coexist in the placenta and can be altered by pathologies. The two‐perfusion model, where f1 and f2 are the perfusion fractions of the fastest and slowest perfusion compartments, respectively, and D is the diffusion coefficient, may help differentiate between normal and impaired placentas.
surrounding the intervillous space, which constitutes the maternal side. The chorionic plate is characterized by the villous trees that are vascularized with fetal blood and exchange nutrients and substances with the surrounding maternal free blood that flows through the intervillous space.1 The placenta may incur structural and physiological abnormalities such as intrauterine growth restriction (IUGR) and placental accretism.1 The placenta accreta is characterized by the absence of the maternal decidua, so the villous trees can proliferate and anchor directly on the myometrium.1 There are three degrees of abnormal placental infiltration: the placenta accreta, where villi do not invade muscles; the increta, where villi invade the myometrium; and the percreta, where villi cross the serous membrane and can infiltrate the bladder or the rectum depending on the position of the accretion zone.1 The accretion zone is highly perfused, so this pathology could cause hemorrhages during the delivery and could also lead to a hysterectomy.2 As placental function is related to the health of the newborn and the future adult,3,4 the development of noninvasive diagnostic techniques to assess and monitor it is desirable.
IUGR is due to the anomalous trophoblastic invasion of the spiral arteries1: the arteries are narrower than in normal subjects, and this causes the maternal placenta's incoming blood to have higher pressure and velocity.1 Because the fetal-maternal blood exchange occurs at the interphase between the intravillous maternal placenta and the villous membrane (the trophoblast), the oxygenated blood could be more drained than in healthy placentas, causing a decrease of the trophoblastic functionality.1 IUGR fetuses are characterized by an estimated fetal weight (EFW) below the 10th percentile, so they are smaller than healthy normal fetuses and they could develop heart and neurological disease after birth.5,6 IUGR fetuses may be differentiated into fetal growth restriction (FGR) and small for gestational age (SGA) groups, according to the presence or absence of fetoplacental Doppler abnormalities detected in utero. However, while ultrasound is currently considered the primary diagnostic tool to predict perinatal outcome,6-8 its imaging and flowmetry cannot assess the micro-perfusive and microstructural placental qualities. MRI may be employed as an alternate method to investigate the placental tissues.9 In particular, diffusion weighted imaging (DWI) is a powerful technique that provides microstructural information without requiring contrast agents that could result in adverse effects on fetal development.10 Given the complexity of biological tissues in which perfusion and diffusion compartments coexist, different models have been developed to approximate the DWI signal.11 The most widely known and used is the intravoxel incoherent motion (IVIM) model, a bi-exponential model that considers two separate compartments: that of perfusion, quantified by the perfusion fraction f IVIM of perfusing molecules at a rate given by the pseudo-diffusion coefficient D*, and the diffusion compartment, quantified by the diffusion coefficient D12 of the 1 − f IVIM water molecules.13-16 In the placental tissue, f IVIM quantifies the perfusion fraction of water molecules perfused in microcapillaries with D* rate, whereas D, which quantifies the hindered diffusion of water molecules in the extracellular space, is related to tissue microstructure. Some authors have identified f IVIM as a biomarker to discriminate between IUGR and normal fetuses13-15 and to discriminate between normal and accreta placentas.16 In this study, we hypothesized that, in the placenta, two main perfusion compartments exist in addition to the diffusion compartment, and that the introduction of more parameters describing placental perfusion can provide more information to identify the placenta's physiological and microstructural characteristics and understand the mechanism involved in placental diseases.
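For reference, the conventional IVIM signal decay described above can be written (in one widely used form; some formulations also include D in the perfusion exponent) as

\[ \frac{S(b)}{S_0} = f_{\mathrm{IVIM}}\, e^{-b D^{*}} + \bigl(1 - f_{\mathrm{IVIM}}\bigr)\, e^{-b D}, \]

where b is the diffusion weighting, D* >> D is the pseudo-diffusion coefficient of the perfusion compartment, and D is the tissue diffusion coefficient.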
Thus, the aim of this study was to investigate the potential of a three-compartment (two perfusion and one diffusion) model, based on the two-perfusion model developed by Fournet et al.,17 to discriminate between normal, SGA, FGR, and accreta placentas.
Materials and Methods
Fig. 1 shows a schematic representation of the pipeline used in this study.
Study Cohort
Following the study's approval by the ethical committee of the Policlinico Umberto I, Sapienza, Rome, Italy, 85 singleton pregnancies with an average gestational age (GA) of 20 ± 2.5 weeks (15-27 weeks) were enrolled from January 2018 to March 2022. All patients completed written consent forms prior to the study. One patient was excluded because it was not a singleton pregnancy, and 19 other patients were excluded because of motion artifacts and a signal-to-noise ratio (SNR) lower than 3 a.u. for high b-values. The final study cohort comprised 65 patients who were divided into four groups: normal pregnancy (Normal), n = 43; FGR, n = 9; SGA, n = 6; and accreta, n = 7. The accreta group consisted of n = 4 accreta, n = 1 increta, and n = 2 percreta (Table 1).
Model
Placenta physiology is characterized by at least two main perfusion compartments: the compartment given by the exchange of substances through the trophoblastic cells between the maternal and fetal sides, and the perfusion compartment related to blood pumped inside the villous trees. It is reasonable to think that the perfusion rates of the two compartments differ by at least one order of magnitude.18 Moreover, the diffusion compartment in the placenta is mainly due to the blood flowing in the intravillous space.
The model described by Fournet et al.17 considers the contribution of two vascular pools (capillaries and larger vessels) in rat brains: the two-perfusion model foresees the presence of diffusion in each compartment and divides the perfusion compartments into slow and fast perfusion. We adapted the Fournet et al. model17 to study the placental tissue, adding the slow perfusion contribution to the first, fast perfusion compartment (a sketch of the model equations is given below). Here, D is the diffusion coefficient relating to the maternal blood flowing inside the intravillous space, D*1 is the fastest pseudo-perfusion coefficient given by the blood flowing inside microvessels and villous trees, D*2 is the slower pseudo-perfusion coefficient given by the exchange of nutrients between the maternal and the fetal compartments, f1 is the fastest perfusion fraction, and f2 is the slowest perfusion fraction related to the trophoblastic cells. The choice to add D*2 to the fast perfusion compartment was made because it could not be distinguished from the fastest D*1 relating to the villi's vasculature; thus D*2 should contribute to the actual fast perfusion compartment.
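A plausible form of the model equations, consistent with the compartment definitions given above (our assumption about the notation, not necessarily the authors' exact form), is

\[ \frac{S(b)}{S_0} = f_1\, e^{-b (D + D_1^{*})} + f_2\, e^{-b (D + D_2^{*})} + (1 - f_1 - f_2)\, e^{-b D} \]

for the two-perfusion model with diffusion present in every compartment, and, once the slow perfusion contribution is added to the fast compartment as described,

\[ \frac{S(b)}{S_0} = f_1\, e^{-b (D + D_1^{*} + D_2^{*})} + f_2\, e^{-b (D + D_2^{*})} + (1 - f_1 - f_2)\, e^{-b D}. \]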
To reduce the number of free parameters, the diffusion coefficient D was estimated by fitting a mono-exponential model at high b-values (b ≥ 200 s/mm²) to the DWI data. Since the placenta has two well-defined perfusion processes contributing to the global perfusion, the IVIM model was used to obtain the total perfusion fraction. Therefore, in the resulting model used in this work, f IVIM and D mono (the D estimated by the mono-exponential model) are fixed. All the models were fitted to the DWI data with an in-house Python script using a nonlinear least-squares algorithm.
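As an illustration of this three-step procedure, a minimal sketch is given below. It is our own code, not the authors' script: the helper names, starting values, and bounds are assumptions, and the reduced model follows the reconstruction above, with D (from the mono-exponential fit) and f IVIM (from the IVIM fit) held fixed while f1, D1*, and D2* are free.

import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 10, 30, 50, 75, 100, 200, 400, 700, 1000], dtype=float)  # s/mm^2, as acquired

def mono_exp(b, s0, d):
    return s0 * np.exp(-b * d)

def two_perfusion(b, f1, d1_star, d2_star, f_ivim, d):
    # f2 is constrained so that f1 + f2 equals the total IVIM perfusion fraction
    f2 = f_ivim - f1
    return (f1 * np.exp(-b * (d + d1_star + d2_star))
            + f2 * np.exp(-b * (d + d2_star))
            + (1.0 - f_ivim) * np.exp(-b * d))

def fit_two_perfusion(signal, b=b):
    s = signal / signal[0]                                   # normalise to S(b = 0)
    # Step 1: D from a mono-exponential fit restricted to b >= 200 s/mm^2
    high = b >= 200
    (_, d_mono), _ = curve_fit(mono_exp, b[high], s[high], p0=(s[high][0], 1.5e-3))
    # Step 2: total perfusion fraction from a conventional IVIM fit with D fixed
    ivim = lambda bb, f, d_star: f * np.exp(-bb * d_star) + (1 - f) * np.exp(-bb * d_mono)
    (f_ivim, _), _ = curve_fit(ivim, b, s, p0=(0.3, 0.05), bounds=([0, 1e-3], [1, 1]))
    # Step 3: two-perfusion model with f_IVIM and D fixed; free parameters f1, D1*, D2*
    reduced = lambda bb, f1, d1, d2: two_perfusion(bb, f1, d1, d2, f_ivim, d_mono)
    (f1, d1_star, d2_star), _ = curve_fit(
        reduced, b, s, p0=(0.5 * f_ivim, 0.1, 0.01),
        bounds=([0, 1e-3, 1e-4], [f_ivim, 1.0, 0.1]))
    return dict(D=d_mono, f_ivim=f_ivim, f1=f1, f2=f_ivim - f1,
                D1_star=d1_star, D2_star=d2_star)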
Data Acquisition
DWIs were acquired using a 1.5 T Siemens Avanto (Erlangen, Germany) clinical scanner with an eight-channel body coil, without any parallel MRI reconstruction techniques. The acquisition protocol consisted of a diffusion-weighted echo-planar imaging spin-echo sequence with TR/TE = 3900/74.8 msec, bandwidth 1184 Hz/px, matrix size of 192 × 192, FOV = 220 × 220 mm², slice thickness of 5 mm, and 18 to 30 slices, depending on the placenta's size. The diffusion encoding gradients were applied along three non-coplanar directions using 10 different b-values (0, 10, 30, 50, 75, 100, 200, 400, 700, and 1000 sec/mm²), and the signal over the three directions was averaged. The number of averaged signals was NS = 4 for each b-value, thus the total duration of the protocol was ~15 minutes.
Preprocessing
The maternal and fetal sides of placentas were analyzed.
Given the complexity of the placental tissue, six regions of interest (ROI) were manually delineated on the fetal and maternal sides by two different radiology specialists, each with 5 years of experience (all results relating to these six regions are available in the Supplemental Material). In accreta placentas the ROI was delineated on the accretion zone.
In multiple-coil acquisition, the noise is known to follow a noncentral χ distribution, which collapses to a χ distribution in the background, with a number of degrees of freedom depending on the number of coils.19 The noise was estimated using an estimator based on local moments,19 in which x̄ is the local mean of the corrupted squared signal calculated using a local window of size 21 × 21, σ̂²L is the estimated squared noise of the image, and L is the number of coils. In clinical scanners, raw data are already filtered by default with a weak filter, which is of an unknown type because it is protected by manufacturing company licenses. Hence, the exact noise distribution is unknown. In this study, therefore, the "mode" has been approximated as the maximum of the signal intensity histogram, and the resulting corrected signal is obtained from the corrupted signal's second-order moment M2L(x̄), calculated over every single ROI for the analysis of the tissue.19 Diffusion and perfusion parametric maps were obtained by first performing a voxel-wise signal correction, where x̄ was estimated by implementing a 2 × 2 filter on each voxel (example shown in Fig. 2). Then, the two-perfusion IVIM model was fitted using a bagged trees algorithm provided by MATLAB's official machine learning toolbox (MATLAB R2021a). In particular, the bagged trees are built by the function TreeBagger(), and the function predict() was then used to obtain the maps.
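For illustration only, a minimal sketch of this kind of correction is shown below. It relies on the standard second-moment relation for noncentral-χ data, E[M²] = A² + 2Lσ²; the exact local-moments estimator of the cited reference may differ, and all function and variable names here are ours.

import numpy as np
from scipy.ndimage import uniform_filter

def estimate_sigma2_background(image, n_coils, window=21):
    # Local mean of the squared image; the mode of the resulting map is taken as
    # 2*L*sigma^2, since background (pure-noise) voxels dominate the histogram mode.
    local_m2 = uniform_filter(image.astype(float) ** 2, size=window)
    hist, edges = np.histogram(local_m2.ravel(), bins=256)
    mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    return mode / (2.0 * n_coils)

def correct_nc_chi(roi_signal, sigma2_hat, n_coils):
    # Bias correction of a noncentral-chi magnitude signal over an ROI:
    # E[M^2] = A^2 + 2*L*sigma^2  =>  A_hat = sqrt(max(<M^2> - 2*L*sigma^2, 0))
    m2 = np.mean(roi_signal.astype(float) ** 2)
    return np.sqrt(max(m2 - 2.0 * n_coils * sigma2_hat, 0.0))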
Fitting Controls
Due to the number of parameters that need to be estimated using the two-perfusion IVIM model, there is a danger of overfitting. To avoid this, and to compare the IVIM and two-perfusion IVIM models, the Akaike information criterion (AIC)17,20 corrected for small sample sizes was applied. In the AIC, Nb is the number of b-values, MSE is the mean squared error, and k is the number of the model's parameters. As shown by Riexinger et al.,21 the comparative goodness of a model can be evaluated by considering the difference between the two competitor models' AICs (eg, AIC2perf − AICIVIM). A negative value indicates that the first model (in the example, two-perfusion) is more suitable for describing that data. The following AIC differences were calculated: AIC2perf − AICIVIM = −5.53 and AIC2perf − AICmono = −23.85, the negative values indicating that the two-perfusion model is more suitable than the IVIM and mono-exponential models. The Akaike weights22 were calculated for each model; since each weight is proportional to the likelihood of the ith model given the data, L(Mi | Data), Akaike weights can be considered as the probability that the ith model is the best model given the data and the set of models. Indeed, as shown by Fournet et al.,17 a model's Akaike weight higher than the threshold 0.9 indicates that the model may be considered the best model of the set, and a robust inference may be possible. The Akaike weights22 were calculated for each model (mono-exponential, IVIM, and two-perfusion), with that of the two-perfusion model being the best (w2perf(AIC) = 0.94; Fig. 3).
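For illustration, the small-sample-corrected AIC and the Akaike weights described here can be computed as in the sketch below (one common least-squares form of the criterion; the authors' exact definition may differ slightly, and the function names are ours).

import numpy as np

def aicc(mse, n_b, k):
    # AIC from the mean squared error of a least-squares fit, with the
    # small-sample correction term 2k(k+1)/(N_b - k - 1)
    aic = n_b * np.log(mse) + 2 * k
    return aic + 2 * k * (k + 1) / (n_b - k - 1)

def akaike_weights(aic_values):
    # Relative likelihood of each model, normalised so the weights sum to one
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    rel = np.exp(-0.5 * delta)
    return rel / rel.sum()

# e.g. w_mono, w_ivim, w_2perf = akaike_weights([aic_mono, aic_ivim, aic_2perf])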
Statistics
Continuous variables were expressed as mean ± standard deviation. All the parameters' values obtained from each placenta group were analyzed by performing a Cohen's d test23,24 and an ANOVA test with Dunn-Šidák post hoc correction (MATLAB 2021a). Since the ANOVA requires homoscedasticity across groups, a Levene test was performed to confirm the null hypothesis of equal variances across the groups. Due to the small number of placentas with different degrees of accretism, the Cohen's d effect size was used to evaluate the results obtained in accreta, increta, and percreta placentas compared to healthy placentas.
Regarding the correlation analysis, Spearman's coefficient was evaluated. A P-value <0.05 indicated a statistically significant difference or correlation.
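For illustration only (the analyses were run in MATLAB; the Python equivalents below are our own sketch and omit the Dunn-Šidák post hoc step), the group comparisons and correlations described in this section correspond to:

import numpy as np
from scipy import stats

def cohens_d(x, y):
    # Cohen's d with a pooled standard deviation
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

def compare_groups(*groups):
    # Levene test for equal variances, then one-way ANOVA across the groups
    levene_p = stats.levene(*groups).pvalue
    anova_p = stats.f_oneway(*groups).pvalue
    return levene_p, anova_p

# Spearman correlation, e.g. between f2 and gestational age (illustrative variable names)
# rho, p = stats.spearmanr(f2_values, gestational_age)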
Examples of Two-Perfusion and IVIM Maps
Three slices from the parametric two-perfusion and IVIM maps of an example healthy placenta with GA = 22.4 weeks are shown in Fig. 4. In the first slice in Fig. 4a, the maternal side is outlined in blue, whereas the fetal side is in green; the red color outlines the umbilical cord and its insertion in the second and third slices. All the perfusion fraction f IVIM, f1, and f2 maps (Fig. 4d,b,c, respectively) had higher values in the region of the umbilical cord insertion and in the decidua (i.e., the maternal side), which is highly perfused by the spiral arteries. The perfusion coefficient D*2 in Fig. 4g showed patterns, especially in the third slice where the umbilical cord insertion is far away, which were not visible in conventional IVIM D* maps (Fig. 4h). Figure 5 shows the fetal brain of the same subject reported in Fig. 4 and shows that the two-perfusion model seems to better highlight perfusion differences in tissues than the conventional IVIM model. Indeed, in Fig. 5c the cerebral ventricle membranes clearly have higher f2 values, whereas they are not visible in the conventional f IVIM map.25 Moreover, D*2 (Fig. 5g) has a homogeneous value inside the ventricles.
In Fig. 6, the parametric maps of an FGR and a percreta placenta are displayed. In the DWIs shown in Fig. 6a, the placenta of the FGR subject is outlined in red, whereas the bladder of the percreta patient is outlined in yellow. In the maps of the FGR placenta, a placental lacuna is outlined in light blue, where perfusion and diffusion are higher than in the surrounding tissues. The conventional IVIM D* (Fig. 6h) does not show the placental lacuna, which is visible in all the other maps. The accretion zone of the percreta placenta is outlined in dark blue and shows high values of f1 (Fig. 6b), whereas the bladder on the top is characterized by the lowest f1 values (Fig. 6b). The accretion zone is also characterized by the lowest values of f2 (Fig. 6c). Even though the accreta placenta is heterogeneous in the f IVIM map (Fig. 6d), the accretion zone has less sharp edges compared to the f1 and f2 maps (shown in Fig. 6b,c, respectively). The bladder of the accreta placenta is characterized by high values of diffusion given by the presence of urine (Fig. 6e), whereas it is characterized by the slowest values of D*1 compared to the entire placenta, which is highly perfused (Fig. 6f). Finally, the D*2 parametric map in Fig. 6g shows the lobes' structures of the accreta placenta, which are only partially visible in the conventional IVIM D* map displayed in Fig. 6h.
Two-Perfusion and IVIM Metrics
ANOVA showed that both the f2 and f IVIM parameters were significantly higher in normal placentas than in the FGR placentas (Table 2). Moreover, f2 and f IVIM were significantly higher on the fetal compared to the maternal side in normal placentas (see Fig. 7a-c and Table 2).
In Fig. 6b,c, the accretion zone has a higher value of the fastest perfusion fraction f1 than that in the normal placenta. In particular, the entire percreta and increta placentas have the highest f1 values (see Fig. 7d,f) with a large effect size (Cohen's d = −2.66, Fig. 7f). Moreover, a large effect size was found considering the discriminant power of the f2 parameter between the normal fetal and the increta and percreta placenta groups (Cohen's d = 1.12, Fig. 7g). It was also found that f IVIM is higher in the normal fetal side placenta than in the accretion zone, with a small effect size (d = 0.26).
The perfusion fraction f1 was significantly different between the fetal sides of SGA and FGR placentas, whereas there was no significant difference (P-value = 0.26) in f IVIM (Fig. 7a,c).
Normal placentas had a significant negative correlation between the diffusion coefficient D and the perfusion fractions f IVIM and f2 (ρ = −0.36 and ρ = −0.38, respectively, in the fetal side and ρ = −0.56 and ρ = −0.50, respectively, in the maternal side) as shown in Fig. 8 (see also Fig. S2 and Tables S4-S6 in Supplemental Material). The accretion zone showed a significant positive correlation between the slowest perfusion fraction f2 and the GA (ρ = 0.90), whereas the negative correlations between f1 and GA (ρ = −0.77 and P-value = 0.05) and between f IVIM and GA (ρ = −0.63 and P-value = 0.14) did not achieve statistical significance. The normal placenta did not show any correlation between the IVIM perfusion fraction and the gestational age (ρ = −0.31 and P-value = 0.051 in the fetal side and ρ = −0.24 and P-value = 0.12 in the maternal side) or the f1 and the GA (ρ = −0.24 and P-value = 0.12 in the fetal side and ρ = −0.13 and P-value = 0.40 in the maternal side) or the f2
Discussion
Since the placental tissue is complex from a vascular point of view, and most placental pathologies are related to vascular dysfunction, in this work we have used the two-perfusion and IVIM metrics to better describe the complex perfusion of the placenta.
Apart from the number of subjects analyzed and the mean GA of the individual groups, our IVIM analysis of the placenta may differ from those reported in the literature due to the different types of image-denoising treatment. In the following, we discuss the results obtained with the two-perfusion model, highlighting the possible advantages of its use compared to the IVIM model.
Conventional IVIM Model
The IVIM model is currently widely used in placenta MRI studies to estimate perfusion without employing exogenous contrast agents,13,14,16,26-34 with the perfusion fraction f IVIM being sensitive to changes of perfusion inside the placental tissues. In agreement with previous IVIM studies, f IVIM was significantly higher on the fetal compared to the maternal side, reflecting the known physiology of the organ: the placenta's fetal compartment is characterized by the villous trees, so it is more perfused than the maternal side where the blood diffuses in the intravillous space.26,27,35 Some studies have shown a positive or quadratic correlation between f IVIM and GA in the normal placenta,14,26,34,36 whereas other studies have found a negative correlation between f IVIM and GA.28,30 In this study, no significant correlation was found between the perfusion fraction and GA in the normal placenta, as in the study of Moore et al.37 Indeed, Moore et al.37 suggest that the blood volume, but also the volume of the placenta, increases with GA, so that the perfusion fraction does not change. A possible explanation for the negative correlation between f IVIM and the diffusion coefficient D requires information provided by the parameter f2 of the two-perfusion model, and it will be discussed later, in the two-perfusion subsection.
In agreement with Hutter et al., 29 no significant correlations were found between the diffusion coefficient and the GA in normal placentas.
It has previously been suggested that f IVIM may be a potential marker for placental pathology such as FGR.4,35 In this current study, the fetal side of normal placentas had slightly (but not significantly) higher values of the perfusion fraction compared to those of the accretion zone in pathological placentas, in accordance with Bao et al.,16 who found lower values of the perfusion fraction in the placenta accreta compared to the values in a healthy pregnancy. Considering that in placenta accreta the trophoblastic invasion extends beyond the normal limit and the placental villi are not contained in the decidual uterine cells, as is normally the case, but extend into the myometrium, the perfusion fraction was expected to be a discriminatory parameter between normal and accreta placenta.
Two-Perfusion IVIM Model
Two-perfusion maps show the potential of the two-perfusion model to highlight particular placental areas, which may be useful for diagnostic purposes or to add information not obtainable from IVIM maps.
In this study, the perfusion fraction f2, which is related to the slowest perfusion compartment, was significantly higher on the fetal side of normal placentas than on the maternal side. This result may be interpreted by considering the trophoblastic cells: trophoblasts are responsible for nutrient exchange inside the villous trees; thus, they are concentrated on the organ's fetal side. This interpretation is corroborated by the FGR placenta results. In fact, the f2 parameter is lower in pathological subjects compared to the control group, showing a lack of exchange due to the trophoblastic infiltration of the uterine spiral arteries.38 Moreover, a negative correlation was found between the f2 parameter and the diffusion coefficient D on the fetal side: the trophoblastic infiltration causes an increase in the blood pressure inside the spiral arteries. This pressure increases the diffusion inside the intravillous space, promoting the dispersion of nutrients and decreasing the capability of exchanges between mother and fetus. In contrast to Antonelli et al.,33 no significant differences were found in the IVIM perfusion fraction f IVIM between healthy and SGA subjects. However, the f1 parameter was significantly different between the fetal sides of FGR and SGA subjects. This result may suggest an offsetting effect, whereby the SGA placenta tries to overcome the difficulty of exchanging nutrients by increasing the fast perfusion activity of the villous trees on the fetal side.
According to the Cohen's d, the f1 perfusion fraction discriminates between normal pregnancies and the accreta placenta: the accretion zone is characterized by a higher value of the fast perfusion fraction than in healthy subjects, especially for percreta placentas, where the accretism could involve surrounding organs and could cause hemorrhages during the delivery. The high values of the fastest perfusion fraction f1 may be due to the different vascular architecture of the accretion zone.39 Conversely, the f2 perfusion fraction is lower in the case of accretism, showing possible impediments to slow perfusion. Although f1 and f IVIM correlated negatively with the GA, a positive correlation was found between the f2 parameter and the GA. A possible explanation for these trends is given by the aging of the placenta. As the placenta ages, it may have less need to increase its vascularity in the anchoring area, which may lead to a decrease in the fastest perfusion activity and an increase in exchange between the infiltrating villi and the maternal blood.
The possible existence of multiple microvascular environments has also been found by Slator et al.40: they investigated placental tissues by simultaneously probing the diffusivity and the T2* relaxation time. The T2*-ADC spectra from the inverse Laplace transform of the signal from healthy subjects showed three separate peaks, reflecting the three diffusion compartments hypothesized in the two-perfusion model. Moreover, Slator et al.40 found the absence or reduction of one or two peaks in pathological subjects and lower values of T2*, suggesting a deficiency in these compartments. This trend is in accordance with our results, since we found that the slowest perfusion fraction f2 was lower in FGR than in healthy placentas.
A later study by Slator et al.11 suggested that anisotropic models would better describe placenta physiology. However, these need more gradient directions, which would result in an important increase in the total acquisition time of the experiment. In this work, the quantification of several perfusion and diffusion components at different placental sites attempts to capture the placenta's complex vascularization, so as to be potentially useful for the medical diagnosis of placental impairment.
Limitations
In general, the limitations of this work are related to the limitations of the IVIM technique. The most important disadvantage of the IVIM technique is the lack of standardization of the acquisition parameters and of the various algorithms used for the quantitative analysis of the images. Furthermore, the sensitivity of IVIM MRI depends on the number and the distribution of the b-values used. Therefore, due to the lack of standardization of the IVIM technique, significant variance in the calculated parameters has been observed between studies, and, to date, no reference values for normal organs have been well established. In particular, in this work the number of pathological subjects was low and no measure of inter-observer reproducibility was presented. Nevertheless, the results reported here suggest that the two-perfusion model may be useful for studying tissues characterized by two perfusion compartments: one slower, because it is modulated by the passage of fluids through a membrane, and a faster one, in general associated with perfusion of blood in microcapillaries.
Conclusions
The two-perfusion IVIM model, where f 1 is the fastest perfusion fraction related to perfusion in microcapillaries and villi and f 2 is the slowest perfusion fraction related to the trophoblastic cells' perfusion, may provide complementary information to IVIM parameters that may be useful in identifying placenta impairment.
FIGURE 1: Flow-chart of the study.
FIGURE 2: Denoising of DWI. The original DWI of a normal placenta at b = 50 sec/mm² is shown in (a); the same slice following noise correction in (b). A plot of SNR vs. b is shown in (c).
FIGURE 3: Example of a fit to DW data obtained in the fetal ROI. The error bars were evaluated by propagating the uncertainty on the noise σ̂²L over the voxels inside the ROI. The Akaike weights are w_mono(AIC) = 6.2e-06, w_IVIM(AIC) = 5.9e-2, w_2perf(AIC) = 0.94.
FIGURE 4: (a) The DWI section shows three different slices of the same normal placenta (GA = 22.4 weeks): the first (upper) slice shows the fetal brain, the umbilical cord (outlined in red), and a section of the placenta divided into maternal (blue) and fetal (green) sides. The second slice focuses on the umbilical cord insertion (in red), and the third slice is an overview of the entire central placenta (in red the umbilical cord). (b) f1 parametric maps: the maternal decidua shows the highest values, maybe due to the spiral arteries' insertion; (c) f2 parametric maps. (d) f IVIM parametric maps; (e) the diffusion D maps show high values of diffusion in the amniotic liquid as it is free-water-like. (f) D1* parametric maps; (g) D2* maps show interesting patterns that can be interpreted as the cotyledon structures in the placental surface. (h) D* IVIM maps.
FIGURE 5: (a) Zoomed images of the fetal brain visible in the upper placenta slice of Fig. 4. (b) The f1 map shows lower values in the fetal brain compared to those in the ventricular space. (c) The perfusion fraction f2 is higher around the ventricles' space, showing a probable exchange between the ventricles and the brain's tissues through the ventricular membrane. (d) f IVIM map; (e) the diffusion coefficient D map. (f) D1* perfusion coefficient map; (g) the D2* perfusion coefficient map shows perfusional activity inside the ventricles, whereas it shows the lowest values in the cerebral tissue. (h) D* IVIM perfusion coefficient.
FIGURE 6: (a) DWI of an FGR placenta (GA = 19.7 weeks, outlined in red) and a percreta placenta with bladder infiltration (GA = 28.6 weeks, the bladder is outlined in yellow). (b) f1 parametric maps: the FGR placenta shows a placental lacuna (outlined in light blue) where the perfusion fraction is higher; the accretion zone of the placenta accreta is outlined in dark blue and shows high values of f1. (c) f2 parametric maps: the FGR placenta has the highest values on the fetal side; the accretion zone (dark blue) is characterized by the slowest values of f2. (d) f IVIM parameter maps. (e) Diffusion D parameter maps: the placental lacuna of the FGR subject is clearly visible, and this pattern could be due to the trophoblastic invasion that increases the diffusivity inside the maternal side of FGR subjects, causing placental lacunae. (f) Perfusion coefficient D1* maps. (g) D2* parametric maps show the lobes' structures of the accreta subject. (h) IVIM D* maps.
FIGURE 7: Boxplots of the perfusion fractions for the fetal and maternal placenta ROIs: (a) f1 parameter; (b) f2 parameter; and (c) f IVIM. The group with placenta accretism was highly heterogeneous, as shown in the plot of f2 vs. f1 (d). The percreta and increta placenta group had the highest f1 values (d) and a large effect size (Cohen's d = −2.66) (f). A large effect size was also found for the f2 parameter between the normal fetal and the increta and percreta placenta groups (Cohen's d = 1.12) (g). (e) f IVIM has a small effect size between normal and increta and percreta groups (Cohen's d = 0.32).
TABLE 1. Pregnant Women Cohorts
TABLE 2. ANOVA with Dunn-Šidák Post Hoc Correction and Cohen's d Values. Note: Statistically significant P-values are in bold. FGR: fetal growth restriction; SGA: small for gestational age. | 2023-06-18T06:17:07.779Z | 2023-06-17T00:00:00.000 | {
"year": 2024,
"sha1": "cdea2ccffa411dc19babee19aea7eab343c7572c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jmri.28858",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "40b433cad7320ea565db1f10dcf28f0b5d9be571",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
88515895 | pes2o/s2orc | v3-fos-license | Analyzing Non-proportional Hazards: Use of the MRH Package
In this manuscript we demonstrate the analysis of right-censored survival outcomes using the MRH package in R. The MRH package implements the multi-resolution hazard (MRH) model, which is a Polya-tree based, Bayesian semi-parametric method for flexible estimation of the hazard rate and covariate effects. The package allows for covariates to be included under the proportional and non-proportional hazards assumptions, and for robust estimation of the hazard rate in periods of sparsely observed failures via a "pruning" tool.
Introduction
The hazard rate, defined as h(t) = lim ∆→0 P (t ≤ T < t + ∆ | T ≥ t)/∆ = f (t)/S(t) (where S(t) is the survival function for T and f (t) = −S ′ (t)), can be critical in assessing how the risk of a disease changes over time. However, it can be difficult to estimate reliably, particularly over the course of a study with few observed failures during the follow up period and when the effects of covariates change over time.
(For examples, see [2, 27, 21, 16].) To this end, the MRH package in R ([12, 13]) has three overarching features:
1. Estimation of the hazard rate and the associated credible intervals (as well as the corresponding survival function and cumulative hazard estimates and credible intervals).
2. Joint estimation of the effects of predictors incorporated under the proportional hazards (PH) and non-proportional hazards (NPH) assumptions.
3. A "pruning" tool that combines portions of the hazard rate that are similar, providing robust estimates through periods of sparse failures, and allowing for faster computation times.
The underlying statistical approach employed in the MRH package is the multiresolution hazard (MRH) model, a Bayesian semi-parametric hazard rate estimator previously presented and used in [4,3,7,6,5,11,10], and compared to other packages in [13]. This model for survival data is based on the Polya tree methodology, and is flexibly designed for multi-resolution inference capable of accommodating periods of sparse events and varying smoothness. The MRH model accommodates both proportional and non-proportional effects of predictors over time and also uses a pruning algorithm presented in [5] and [11], which performs data-driven "pre-smoothing" of the hazard rate by merging time intervals with similar hazard levels. Pruning has been shown to increase computational efficiency and reduce overall uncertainty in hazard rate estimation in the presence of periods with smooth hazard rate and low event counts ( [5]).
The following sections of this manuscript are organized as follows: In Section 2 we provide background on the multi-resolution hazard (MRH) model. In Section 3 we briefly discuss the tongue cancer data set we use to demonstrate the MRH package, and in Section 4 we cover use of the MRH package, including fitting and plotting fitted models, pruning the prior, assessing convergence of the MCMC chains, and the effects of adjusting the prior parameters. Lastly, we discuss our conclusions in Section 5.
Multi-resolution hazard modeling
The MRH model is a Bayesian, semi-parametric survival model that produces an estimate of the hazard rate and covariate effects. The MRH prior is closely related to the Polya tree prior ( [8,20]), which is an infinite, recursive, dyadic partitioning of a measurable space Ω. (In practice, this process is terminated at a finite level M .) The MRH prior is a type of Polya tree in that it uses a fixed, pre-specified partition and controls the hazard level within each bin through a multi-resolution parameterization.
To facilitate the recursive dyadic partition of the multiresolution tree, we assume that J = 2^M. Here, M is an integer, set to achieve the desired time resolution, or through model selection criteria or clinical input (for example, see [4, 6]). The hazard function is parametrized by a set of hazard increments d_j, j = 1, . . . , J, where d_j represents the aggregated hazard rate over the j-th time interval (t_{j-1}, t_j). In standard survival analysis notation, d_j = \int_{t_{j-1}}^{t_j} h(s) ds ≡ H(t_j) − H(t_{j−1}), where h(t) is the hazard rate at time t. The cumulative hazard, H, is equal to the sum of all 2^M hazard increments, which are denoted as d_j, j = 1, . . . , 2^M. The model then recursively splits H at different branches via the "split parameters" R_{m,p} = H_{m,2p}/H_{m−1,p}, m = 1, 2, . . . , M, p = 0, . . . , 2^{m−1} − 1. Here, H_{m,q} is recursively defined as H_{m,q} ≡ H_{m+1,2q} + H_{m+1,2q+1} (with H_{0,0} ≡ H, and q = 0, . . . , 2^m − 1). The R_{m,p} split parameters, each between 0 and 1, guide the shape of the a priori hazard rate over time. The complete hazard rate prior specification is obtained via priors placed on all tree parameters: a Gamma(a, λ) prior is placed on the cumulative hazard H, and a Beta prior on each split parameter R_{m,p}, Be(2γ_{m,p} k_m a, 2(1 − γ_{m,p}) k_m a). This parametrization ensures the self-consistency of the MRH prior at multiple resolutions ([4, 5]). The basic MRH model was extended in [7] into the hierarchical multiresolution hazard (HMRH) model, capable of modeling non-proportional hazard rates in different subgroups jointly with other proportional predictor effects. The pruning methodology for combining similar hazard bins was developed in [5] for individual hazard rates, and combined with the HMRH model in [11]. The pruning algorithm detects consecutive time intervals where failure patterns are statistically similar, increasing estimator efficiency and reducing computing time. The resulting method produces computationally stable and efficient inference, even in periods with sparse numbers of failures, as may be the case in studies with long follow-up periods.
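As a small worked illustration of this parameterization (our own example, not taken from the paper), for M = 2 the four hazard increments are recovered from the cumulative hazard H and the split parameters as

\[ \begin{aligned} d_1 &= R_{2,0}\, R_{1,0}\, H, & d_2 &= (1 - R_{2,0})\, R_{1,0}\, H,\\ d_3 &= R_{2,1}\, (1 - R_{1,0})\, H, & d_4 &= (1 - R_{2,1})\, (1 - R_{1,0})\, H, \end{aligned} \]

so the increments sum to H for any values of the split parameters.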
MRH likelihood function
We denote T_i as the minimum of the observed time to failure or the right-censoring time for subject i. Each subject belongs to one of the L covariate strata, and within each stratum we employ the proportional hazards assumption. Here, h_ℓ denotes the baseline hazard rate for treatment stratum ℓ, X represents the z × n_ℓ matrix of z covariates (other than those used for stratification) for the n_ℓ patients in stratum ℓ, while β denotes the z × 1 vector of the covariate effects. For subject i in stratum ℓ with failure time T_i ∈ [0, t_J), the likelihood contribution involves X_i, that subject's covariate vector, S_{base,ℓ}, the baseline survival function for stratum ℓ, and δ_i, the censoring indicator that equals 1 if subject i had an observed failure, and 0 otherwise. The log-likelihood for all n patients in all L strata together (n = Σ_{ℓ=1}^{L} n_ℓ) involves S_ℓ, the set of indices for subjects belonging to stratum ℓ, and H_{base,ℓ}(T) = − log S_{base,ℓ}(T). In this model, the L hazard rates are estimated jointly with all the covariate effects. The non-proportional covariate effect is then calculated as the log of the hazard ratio between the different covariate strata in each bin.
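One consistent form of the hazard relation, likelihood contribution, and log-likelihood implied by the definitions above (standard for proportional-hazards models; our reconstruction, not necessarily the authors' exact notation) is

\[ h(t \mid X_i, \text{stratum } \ell) = h_\ell(t)\, e^{X_i^\top \beta}, \qquad L_i = \Bigl[ h_\ell(T_i)\, e^{X_i^\top \beta} \Bigr]^{\delta_i}\, S_{\mathrm{base},\ell}(T_i)^{\exp(X_i^\top \beta)}, \]
\[ \log L \;=\; \sum_{\ell=1}^{L} \sum_{i \in S_\ell} \Bigl[ \delta_i \bigl( \log h_\ell(T_i) + X_i^\top \beta \bigr) \;-\; e^{X_i^\top \beta}\, H_{\mathrm{base},\ell}(T_i) \Bigr]. \]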
Pruning the MRH model
The MRH prior resolution is often chosen as a compromise between the desire for detail in the hazard rate and the number of observed (i.e. uncensored) failures. As the resolution increases, the number of observed failures within each bin decreases. While useful for revealing detailed patterns, a large number of intervals means a large number of model parameters, which will generally require longer computing times and may result in estimators with lower statistical efficiency ([5]). "Pruning" starts with the full MRH tree prior, and merges adjacent bins that are constructed via the same split parameter, R_{m,p}, when the hazard increments in these two bins (H_{m+1,2p} and H_{m+1,2p+1}) are statistically similar. This is inferred by testing the hypothesis H_0: R_{m,p} = 0.5 against the alternative H_a: R_{m,p} ≠ 0.5, with a pre-set type I error α, using Fisher's exact test. If the null hypothesis is not rejected, that split R_{m,p} is set to 0.5, the adjacent hazard increments are considered equal, and the time bins are declared "fused". The hypothesis testing can be applied to all M levels of the tree or just a higher-resolution subset of the tree. Because bins are fused a priori, this method can then reduce the number of parameters sampled in the MCMC routine, possibly decreasing computation time and increasing the robustness of the estimator.
Estimation in the MRH model
Estimation is performed in two steps: the pruning step and the MCMC steps. The pruning step is run only once for each of the L hazard rates at the beginning of the algorithm as a pre-processing step in order to finalize the MRH tree priors. The R m,p;ℓ parameters for which the null hypothesis is not rejected are set to 0.5 with probability 1, while the rest are estimated in the Markov chain Monte Carlo (MCMC) routine. Details on the MCMC routine and prior values can be found in [4,7] and [11].
Tongue cancer data
To demonstrate the different features of the package, we use the "tongue" data set available in the R survival package. The data set contains 80 subjects with tongue cancer who had a paraffin-embedded sample of the cancerous tissue taken at the time of surgery, with survival times recorded for each patient (in weeks), as well as the tumor DNA profile (aneuploid or diploid) (see [26] for details). The study went for 400 weeks, with a median survival time equal to 69.5 weeks (SD = 67.3), and 33.8% of the subjects were censored. Between 250 and 350 weeks there were zero failures (censored or uncensored), and after 200 weeks there were zero uncensored failures. This data set is also presented in [18] and analyzed in [22]. Table 1 summarizes the data in more detail.
In the code shown in Section 4, the tongue data is named "tongue", and has the following variable names:
• time: Survival time (in weeks) from time of surgery.
• type: Indicator denoting which tumor group the patient belongs to ('1' = aneuploid, '2' = diploid).
• delta: The censoring indicator, which equals '1' if the failure is observed, and '0' if right-censored.
Using MRH
In this section, we provide code and discussion on how to analyze survival data using the MRH package, demonstrating analyses using the tongue data to quantify survival times post-surgery.
Selecting the time resolution (specifying M)
Because the MRH methodology is based on a binary partition, it divides the total study time into J = 2^M time intervals (or "bins"). The first step to fitting an MRH model in R is to determine how many bins are required. (The current version of the MRH package assumes all bins are of equal length.) The choice of M can be determined through biological rationale or using penalized likelihood criteria (such as DIC, see [28]). In some cases the user might wish to explore how different choices of M affect the bin lengths and the implications for model interpretation. In these instances, the FindBinWidth() function can help. Below, the bin width is calculated for values of M ranging from 2 to 10 for different time units (seconds, minutes, hours, days, weeks, months, years). The user must provide the vector of survival times, and specify the original unit of the survival times (seconds ('s'), minutes ('m'), hours ('h'), days ('d'), weeks ('w'), months ('m'), and years ('y')).
data(tongue)
FindBinWidth(time = tongue$time, delta = tongue$delta, time.unit = 'w')
While there are many acceptable bin widths for an analysis, we generally aim to use one that will allow us to observe a maximum amount of detail in the hazard rate while still remaining computationally feasible and biologically plausible. In a typical analysis, we would be inclined to use a model with M = 6, creating bins that are 6.25 weeks long. However, for the purposes of this manuscript, we reduce the number of bins to 16 (i.e. M = 4) to reduce computing time while still allowing for adequate demonstration of the package.
Fitting the MRH model
Once M has been determined, the MRH model is fit using estimateMRH().
In the examples below, M is equal to 4, with each bin representing 25 weeks. We examine MRH models including the treatment covariate under the proportional and non-proportional hazards assumptions, as well as with various levels of pruning.
Proportional hazards model
In this example, we include the tumor type in the model under the proportional hazards assumption. The code to fit the model is below:

fit.PH = estimateMRH(Surv(time, delta) ~ type, data = tongue, M = 4,
    maxStudyTime = 400, outfolder = 'MRHresults_PH')

The user is notified of the approximate running time and after every 5,000 iterations completed by the routine. The output for each model is placed into a sub-folder created by the estimateMRH() routine within the working directory. The default folder is named "MRHresults"; however, in this example we have specified the output folder as "MRHresults_PH" through the "outfolder" option. (The folder name can also be added to a pathname, but the path must be accessible from the working directory.) Because parameters are estimated through MCMC sampling, convergence of the fitted model should be assessed after the routine has finished (see Section 4.6 for details). The fitted model can be examined using both the summary.MRH() function as well as the plot.MRH() function (see Figure 1 for plot results):

# Get the estimated model results
results = summary(fit.PH)
names(results)
[1] "hazardRate" "beta" "SurvivalCurve" "CumulativeHazard"
[5]
Non-proportional hazards model
In many studies with long-term follow-up, the effects of certain covariates change over time, violating the proportional hazards assumption. A figure of the Kaplan-Meier curves ([17]) (see Figure 2, left) shows evidence that the effects of treatment may not be proportional. In addition, we can examine whether the treatment effect seems to be proportional by creating a Cox model (using coxph() in the survival package) and then examining the Schoenfeld residuals and smoothed line (see [9]), which should be straight if the effects of tumor type were proportional (see Figure 2, right). Results from these figures provide evidence that the treatment effect is not proportional.
Because of this evidence of non-proportionality between the two treatment group hazard rates, we modify the model above using the nph() function in the formula portion used to fit the model. In creating a model with at least one NPH covariate, it is important to note that the non-proportional covariate must be a nominal categorical variable. More than one non-proportional covariate can be entered into the model, using repeated nph() functions in the formula, and the routine will merge the NPH variables into a single interaction variable, jointly estimating stratified hazard rates for each combination of levels. See vignette("MRH") for more information. Summaries for the NPH fitted model are displayed using summary.MRH(), with separate estimates provided for each hazard rate, as well as the estimated log-ratio between the hazard rates.

Figure 2: LEFT: The survival curves do not appear proportional to one another, particularly as they get closer in value towards the end of the study. In addition, each '+' sign indicates a censored observation, so many subjects do not have observed biochemical failure. RIGHT: A plot of the Schoenfeld residuals (based on work done by [9]) shows a smoothed line that does not appear to be straight, indicating non-proportionality between the hazards. The information provided by these two graphics warrants analyses including the treatment covariate under a non-proportional hazards assumption.

Plots of the estimated hazard rates and the log-hazard ratio of the NPH covariate can also be created using plot.MRH(). Example graphing code is below (see Figure 3 for plot results):

# Plot the default graph (the hazard rate of each treatment group)
plot(fit.NPH)
# Plot the log-hazard ratio of the treatment effect
plot(fit.NPH, plot.type = 'r')
# Plot the hazard rates each on a separate graph
plot(fit.NPH, combine.graphs = FALSE)

Note that the plotting options shown in Section 4.2.1 can also be used in the NPH plots. Details on convergence checking, output generation, and other specifications of the MCMC chains are found in Section 4.6.
Pruning the MRH model
As mentioned previously, in instances where the number of observed failures is small, estimates of the hazard rate can be difficult to obtain due to lack of information. In the examination of biochemical failure, the number of subjects who have observed biochemical failure is small, particularly towards the end of the study where a lot of censoring occurs. In this particular data set, only 50% of subjects have an observed biochemical failure, and only 13% of the observed failures occur after 5 years.
In the MRH package, bins can be combined using the "pruning" method ([5]), implemented using the "prune" option in the estimateMRH() function. It is possible to prune one to all levels of the prior tree based on biological rationale and the degree of smoothing the user desires. By default, all levels of the tree are pruned using a significance level α = 0.05. However, the number of levels pruned can be adjusted using the "prune.levels" option, and the significance level can be controlled using the "prune.alpha" option. (Note that smaller values of α will lead to smoother prior trees, as the null hypothesis that the two bins are similar will not be rejected as often.) Below, we show code for pruning the MRH prior tree for the NPH model, combining bins in the bottom 3 levels and combining bins in all levels. In the NPH model, each hazard rate is pruned separately. Results of the estimates for the three different models can be seen in Figure 4.
As expected, the model with no pruning (left graph) has the highest variability among the three models. The model with the bottom 3 levels pruned shows more variability than the model with all levels pruned; however, few bins were merged in the top 3 levels of the tree, so results for the two models (shown in the center and right graphs) are similar.
Prior parameter effects
In the estimateMRH() function, the user has the option to adjust the parameters of the prior distribution for the split parameters. The prior for the R_{m,p} parameters is the beta distribution Be(2γ_{m,p} k_m a, 2(1 − γ_{m,p}) k_m a) introduced in Section 2. It is possible to sample the parameters k and/or γ using the estimateMRH() routine. Alternatively, specific fixed values of k and γ can be defined a priori to influence the shape and smoothness of the hazard rate (the default is fixed values equal to 0.5 for all parameters). Details on the role of these hyperparameters can be found in [4, 3, 5, 11]. Below is a brief explanation of the assumptions that stem from using different fixed or sampled values of k and γ.
Comparison and selection of k
In the MRH package, by default k is fixed at 0.5, implying zero a priori correlation among the hazard increments. However, k can be fixed at values greater than 0.5, implying the increments are positively correlated a priori (typically leading to smoother estimated hazard functions). Alternatively, if k is fixed at a value less than 0.5, the hazard increments are negatively correlated a priori, generally yielding estimated hazard rates that are less smooth. It is also possible to sample the parameter k if the user desires. The user may change the default value of k = 0.5 by entering a value for "k.fixed", with k ∈ (0, ∞), in the estimateMRH() function. Below, we show example code for sampling k, and for fixing k at 0.1 and 10. Note that in the NPH models, a vector of k values can be entered, with one k for each subgroup hazard rate. However, if only one value of k is specified, that value will be used for all hazard rates. Alternatively, if "k.fixed" is set to FALSE, the routine will sample the k parameter(s), putting an exponential hyperprior on k. Estimated log-hazard ratios of the treatment covariate from the models coded above can be observed in Figure 5. As anticipated, the estimate from the model where k is sampled (upper right) has the highest variability. In contrast, the model with a very large k value (lower right) has the lowest variability and also a very smooth estimated function.
Comparison and selection of γ mp
The mean of the prior for each split parameter R m,p is E(R m,p ) = γ m,p , with γ m,p ∈ (0, 1). This allows the user to a priori "center" the baseline hazard increments in each bin at a desired value (please see [3] for details).
The default value for the γ parameters is 0.5, however the user may change the default values via the "gamma.fixed" option. In general, unless the user has specific information that would help in the adjustment of the γ parameters, we recommend keeping the value fixed at 0.5. In the case of user input, a γ_{m,p} value must be specified for each of the split parameters. In NPH models, a matrix of fixed γ values may be entered, with each column representing the desired values for each subgroup hazard rate. However, this is not required; if only a vector is specified, that vector will be used for all hazard rates. Below, we show code for sampling the γ parameters, putting a beta hyperprior on each γ_{m,p}. The estimated hazard rates and log-hazard ratio can be observed in Figure 6. The estimates do not differ that dramatically, but the variation in the model where the vector of γ parameters is sampled is greater. This is expected, since this model estimates 2^{M−1} × 2 more parameters than the default model.

Figure 6: Comparison of the estimated log-hazard ratio and hazard rates (solid lines), and 95% credible intervals (dashed lines) for both treatment groups for the default MRH model (left column) and the MRH model with sampled γ values (right column). While the estimated hazard rates and log-hazard ratios do not differ dramatically, the variation in the model with sampled γ values is greater. This is expected, as the second model estimates 2^{M−1} × 2 more parameters than the default model.
Model comparison
In comparing different models, it may be desirable to calculate and compare the Deviance Information Criterion (DIC, [28]), the Akaike Information Criterion (AIC, [1]), and the Bayesian Information Criterion (BIC, [25]). In the MRH package, there are two methods for obtaining these values. The fitted model contains the different information criteria values (under the "DIC", "AIC", and "BIC" labels), or the DIC() function that can be used on the fitted model or on
Settings and convergence diagnostics of the MCMC chains
The MRH package allows the user to control different properties of the MCMC chain (i.e. the burn-in, the thinning value, and the maximum number of iterations). In addition, the estimateMRH() routine checks for possible convergence, and can modify the properties of the MCMC chain. Plots are also provided in the output folder that allow the user to assess if the chains have converged. Below, we outline the different methods for fixing the properties of the MCMC chain and assessing convergence.
Setting parameters of the MCMC sampling
The user may use the default values for the burn-in ("burnIn"), thinning value ("thin"), and maximum number of iterations ("maxIter") of the MCMC chains, or they may specify and fix these values. The default maximum number of MCMC iterations is set at 500,000, with a burn-in value of 50,000 and a thinning value of 10. However, based on evidence of chain convergence, these numbers may be changed by the routine unless otherwise specified by the user.
Checking for evidence of convergence
After the first 100,000 MCMC iterations, the chains are checked for autocorrelation and evidence of convergence. (If the maximum number of iterations specified by the user is less than 100,000, then convergence is checked when the maximum number has been reached.) The convergence checking routine is performed via the Geweke diagnostic test ( [29]) and the Heidelberger-Welch diagnostic test ( [14]) using the geweke.diag() and heidel.diag() functions available in the coda package ( [23]). The convergence algorithm can be seen in the Convergence Algorithm table.
There may be instances in which the user wants to fix burnIn, thin, or maxIter so that the routine does not change these values in the process of checking for convergence, and so that the user is guaranteed the MCMC chain will run maxIter times. These values can be fixed (simultaneously or individually) by setting the "fix.burnIn", "fix.thin", or "fix.max" options to TRUE. (By default, these are set to FALSE.) In addition to the convergence checking performed by the algorithm, diagnostic figures containing trace, density, moving average (calculated in increments of 100), and autocorrelation plots for each parameter are included in the output folder for the model. An example of these plots (for three parameters) can be seen in Figure 7. This graphic also allows the user to assess whether convergence has been reached, or if certain parameters are more problematic than others.
Continuing chains
If the routine does not detect evidence of convergence and maxIter is reached, then a message to that effect is shown to the user.

Figure 7: Trace, density, moving average (calculated in increments of 100), and autocorrelation plots for three of the parameters of the NPH 3-level pruned model. (This graph can be found in the "MRHresults_NPH_prune3" output folder, and is titled "convergence-Graphs1.pdf".) This graphic also allows the user to assess whether convergence has been reached, or if certain parameters have a more robust estimate than others.
In this instance, the user may want to continue running the MCMC routine using the previously sampled chain values. This can be done using the same call as in the first model and by setting "continue.chain" equal to TRUE. (Note that this will only work if the output folder name is the same as for the previous model. In addition, if the user specifies new thinning or burn-in values, these will be ignored; the values from the previous chain will be used instead.) If the chain is continued, the routine reads in the chains from the previous set of iterations and initializes the parameters using the last line of retained MCMC values, appending new samples to the existing text files stored in the output folder. Example code and output for this are below. To shorten the run time, the model can be re-run with fewer iterations or a smaller number of bins.
Gelman-Rubin diagnostic testing
To provide robust estimates of the hazard rate and covariate effects, it is common to run the MCMC routine for the same model multiple times, using the results from all chains to produce estimates and to check for convergence. The MRH package provides a number of options that allow the user to do this more easily.
In the estimateMRH() routine, the user can set the "GR" (Gelman-Rubin) option equal to TRUE (by default this is FALSE). In doing this, the routine automatically fixes the chain parameters (thin, burnIn, and maxIter) to what has been specified by the user in the function call, and also adjusts the initial values of the parameters to cover the parameter space. (These are necessary qualifications for the use of the Gelman-Rubin convergence test.) After all MCMC chains have been sampled, AnalyzeMultiple() is available for analyzing multiple MCMC chains for the same model. The AnalyzeMultiple() function accepts multiple chains as an input parameter, and then returns to the user the estimated parameter values, the α-level credible intervals, and the Gelman-Rubin diagnostic information. Within each set of chains, the median, the α/2 percentile, and the 1 − α/2 percentile of the marginal posterior distribution are calculated for each parameter. Then, the median of these medians and percentiles across chains is calculated, and these are the numbers reported as the estimates and credible intervals for each parameter. Example code for this is below. An examination of the initial starting values shows that the initialized values cover the parameter space (which can be accessed with fit.NPH1$initialValues).
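As an illustration of the pooling rule just described (medians and percentiles computed per chain, then the median taken across chains), a minimal Python sketch is given below; the chain layout and the alpha value are assumptions made for the example and do not reflect the AnalyzeMultiple() implementation details.

```python
import numpy as np

def pooled_estimates(chains, alpha=0.05):
    """Pool several MCMC chains for one parameter.

    `chains` is a list of 1-D arrays (one retained chain per run).
    For each chain the median and the alpha/2, 1-alpha/2 percentiles are
    computed; the reported estimate/interval is the median of those values
    across chains, mirroring the pooling rule described in the text.
    """
    per_chain = np.array([
        [np.median(c),
         np.percentile(c, 100 * alpha / 2),
         np.percentile(c, 100 * (1 - alpha / 2))]
        for c in chains
    ])
    est, lo, hi = np.median(per_chain, axis=0)
    return {"estimate": est, "lower": lo, "upper": hi}

rng = np.random.default_rng(1)
chains = [rng.normal(loc=0.8, scale=0.1, size=5000) for _ in range(3)]
print(pooled_estimates(chains))
```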
Using the MCMC text files
The computation time for convergence of the MRH model may be longer than the user would like to spend waiting (particularly in cases where the number of bins and/or the number of subjects is high). In these instances, the user may want to run the model as a background job, in which case the fitted model results will not be available in the console. Under these circumstances, the user can read in the MCMC chains from the output folder, convert them to an MRH object, and then use the existing functions for plotting and summarizing the chains. The only difference in the usage of the functions is that the maximum study time ("maxStudyTime") must be entered for accurate estimates:

# Read in the file from the output folder
mcmc.NPH.prune3 = read.
Discussion
In this manuscript, we have highlighted the main features of the MRH package and demonstrated its use on the tongue cancer data set. Use of the pruning tool and the accommodation of non-proportional hazards makes this package ideal for estimation of right-censored survival outcomes when the number of observed failures is small and/or the follow-up period is long.
There are a few other packages that provide a non- or semi-parametric estimate of the hazard rate, and that accommodate non-proportional hazards. Namely, these are bayesSurv ([19]), DPpackage ([15]), and timereg ([24]). While these packages provide their own unique strengths to the analysis of right-censored survival data, the covariate interpretations for all three models are different from that of the MRH model: bayesSurv implements AFT survival models, LDDPsurvival() in the DPpackage package estimates covariates in an ANOVA-like fashion using a Dirichlet Process prior, and the timecox() function in the timereg package produces cumulative covariate estimates. (For a thorough comparison of these packages, see [13].) The MRH package provides a useful tool for estimation of the hazard rate and covariate effects.
"year": 2016,
"sha1": "1952efac6402e70bf7f23e31efc2bb42394506e1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1952efac6402e70bf7f23e31efc2bb42394506e1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Modified orca predation algorithm: developments and perspectives on global optimization and hybrid energy systems
This paper presents a novel and improved optimization algorithm called the modified Orca Predation Algorithm (mOPA). The mOPA is based on the original Orca Predation Algorithm (OPA) and combines two enhancing strategies: Lévy flight and opposition-based learning. The mOPA method is proposed to enhance search efficiency and avoid the limitations of the original OPA, and it is set up to solve global optimization problems. Additionally, its effectiveness is compared with various well-known metaheuristic methods, and the CEC'20 test suite challenges are used to illustrate how well the mOPA performs. Case analysis demonstrates that the proposed mOPA method outperforms the benchmark algorithms regarding computational speed and yields substantially higher performance than other methods. The mOPA is applied to ensure that all load demand is met with high reliability and the lowest energy cost of an isolated hybrid system. The optimal size of this hybrid system is determined through simulation and analysis in order to serve a small remote location in Egypt while reducing costs. Photovoltaic panels, a biomass gasifier, and fuel cell units compose the majority of this hybrid system's configuration. To confirm the mOPA technique's superiority, its outcomes have been compared with the original OPA and other well-known metaheuristic algorithms.
Introduction
Renewable energy sources offer a significant opportunity to reduce disparities in energy distribution, ensuring long-term continuous energy supply while simultaneously reducing emissions of greenhouse gases. This is why most researchers are working on the design and development of distributed energy resources based on the use of renewable sources (photovoltaic panels (PV), wind turbines (WT), diesel generators (DG), biomass gasifiers (BG), battery banks, fuel cells (FC), etc.), especially in rural areas, to be utilized in feeding the required loads [1]. Hybrid energy systems need to be appropriately built and sized for distributed generation microgrids to operate securely, dependably, and economically [2]. This turns determining the hybrid energy system's ideal size into an optimization issue that includes a set of objectives. In order to optimize hybrid energy systems, many techno-economic issues have been taken into account. Examples of costs taken into account in hybrid system optimal sizing studies include net present cost (NPC), energy cost (COE), annualized system cost (ASC), and life cycle cost (LCC). The main drawback of renewable energy sources is their reliance on the environment. This drawback is reduced by using hybrid energy system combinations, guaranteeing a standard power supply to the loads. This can be achieved by combining sustainable energy sources so that one can be used if a more potent source fails, and by integrating green energy sources with conventional energy sources. Several studies focus on the effectiveness of hybrid renewable energy systems and different metaheuristic optimization techniques. Most research investigations examine whether energy production can meet load demands [3]. The technical indices used as a result are the loss of power supply probability (LPSP), the loss of load expected (LOLE), loss of energy expected (LOEE), deficiency of power supply probability (DPSP), loss of load hours (LOLH), unmet load (UL), equivalent loss factor (ELF), and renewable energy fraction [3]. Currently, examining the effectiveness of hybrid renewable systems is the subject of numerous papers, as are various optimization algorithms, with the goal being to ascertain the optimal size of the system's constituent parts and enhance the critical technical and economic indicators in the system's design. Several papers have presented different operation strategies for many hybrid system designs that apply different metaheuristics and improved optimization strategies. For example, a standalone hybrid energy system based on a PV, WT, and FC configuration has been described in [4]. This study offered a power management approach that controls the power flow between the various system parts using a metaheuristic optimization technique called the mine blast algorithm (MBA). The obtained findings of the MBA method were compared with other techniques, namely PSO, artificial bee colony (ABC), and cuckoo search (CS). Based on the same system combination, the authors in [5] proposed an optimal sizing optimization strategy for a hybrid system situated in the Ataka area in the Suez Gulf region, Egypt. A novel modified algorithm based on improving the performance of the traditional Artificial Ecosystem Optimization (AEO) method, called Improved Artificial Ecosystem Optimization (IAEO), is utilized for this hybrid system. The major objective functions of this hybrid system are to reduce the COE, LPSP, and excess energy while satisfying the operational constraints.
To demonstrate the IAEO technique's effectiveness, a comparison of IAEO, the original AEO, PSO, Salp Swarm Algorithm (SSA), and Gray Wolf Optimizer (GWO) has been performed.
Moreover, the work in [6] developed a new hybrid system strategy based on combining a biomass system as the primary source and an FC as a backup unit. This suggested hybrid system has been offered to supply the electric power of a microgrid in a small tourist hamlet in Hurghada city, Egypt. A Multi-objective Particle Swarm Optimization (MOPSO) method minimizes the COE and the LPSP. The HOMER software tool has been utilized in [7] to apply a techno-economic analysis for a new hybrid system with different configurations based on PV/WT/BG/biogas/FC/battery components. This hybrid system is developed to be practical in rural and remote places. This work provided the optimal configuration for reducing COE and NPC.
In [8], four different metaheuristic algorithms (PSO, DE, the water cycle algorithm (WCA), and GWO) for determining the best size for an isolated microgrid in rural locations are tested for effectiveness and adaptability. In four separate AC-coupled isolated microgrids for a distant community in South Australia, these algorithms optimize the PV, WT, DG, fuel tank, and battery energy storage capacity. In terms of capacity optimization of the system's components, the PSO and GWO algorithms produced comparable results, while the DE method was unreliable. To obtain the optimal sizing for an isolated hybrid system consisting of PV, WT, and battery units, the authors in [9] applied ten optimization methods, which are simulated annealing (SA), the Jaya algorithm, moth-flame optimization (MFO), GA, CS, harmony search (HS), the firefly optimization algorithm (FOA), the flower pollination algorithm (FPA), the simplified squirrel search algorithm (S-SSA), and the brainstorm optimization in objective space (BSO-OS) algorithm. Abd El-Sattar et al. [10] developed a hybrid algorithm, namely the Gradient Artificial Hummingbird Algorithm (GAHA), that combines the Gradient-Based Optimizer (GBO) with the Artificial Hummingbird Algorithm (AHA). This modified GAHA method is utilized to ascertain the optimal size of PV, WT, biomass system, and battery units for a standalone area in the new Tiba city, Luxor, Egypt, considering the reduction of the COE and LPSP. In [11], an enhanced Arithmetic Optimization Algorithm known as IAOA was created by updating the original AOA with the aid of the leading operators of the Aquila Optimizer (AO). This developed IAOA technique was used to determine the best design scenario for a standalone hybrid system made up of PV, WT, DG, and battery units in the El Kharga region, Egypt. The authors in [12] concentrated on determining the optimal sizing for an off-grid hybrid system using a novel PV/BG/FC construction. The objective functions of the proposed method are to reduce the COE and minimize CO2 emissions. A novel Mayfly Optimization Algorithm (MOA) has been utilized to obtain the optimal size of this hybrid system. To prove the effectiveness of the suggested MOA method, its outcomes were contrasted with those of the Sooty Tern Optimization Algorithm (STOA), Sine Cosine Algorithm (SCA), and Whale Optimization Algorithm (WOA).
Various hybrid configurations and techno-economic analysis approaches may be used to build diverse renewable systems in the best possible ways. Accordingly, researchers have discovered in recent years that metaheuristic algorithms, which are all-purpose and straightforward to use, can tackle challenging real-world problems. Because metaheuristics are very accurate and straightforward, they have drawn much attention in various challenging optimization issues in engineering, communications, medical, and social sciences [13]. Moreover, metaheuristic algorithms are also used to improve solutions for a variety of problems, such as global optimization [14], energy applications [15], power flow systems [16], image segmentation [17,18], deep learning-based classification [19], scheduling microgrid systems [20], economic emission dispatch (EED) problems [21], and feature selection [22,23]. Unlike deterministic approaches, metaheuristic algorithms use randomly generated search agents and specialized operators to find the best solutions in the search space. These operators take inspiration from natural occurrences such as swarm behavior, social behavior, physical theories, and evolutionary principles. There are three primary types of metaheuristic algorithms: (a) Swarm methods contain swarm-based strategies that simulate the social behavior of groups of animals, birds, and humans; (b) evolutionary methods; and (c) natural phenomenon algorithms imitate physics and chemistry principles [24,25]. Particle Swarm Optimization (PSO) [26], a popular algorithm in this family of algorithms, is regarded as the origin of numerous other optimization methods. For the evolution-based techniques, researchers represent various operators based on the guides of the evolution theory. The well-known evolution-based methods are the Genetic Algorithms (GA) introduced in [27] by Holland and the Differential Evolution (DE) [28]. The physics-based algorithms were driven by the principles of physics and chemistry, such as the laws of gravity and electrical charges. Several algorithms have been presented to address real-world issues based on this inspiration, for example, Gravitational Search Algorithm (GSA) [29], Multi-verse Optimizer (MVO) [30], and the Gradient-Based Optimizer (GBO) [31].
The performance of a metaheuristic algorithm typically refers to the quality of the optimized solution and the time needed for the algorithm to converge. Even though many metaheuristic algorithms have produced good results, optimization issues have grown more complex (as the number of optimized variables has grown) while still adhering to various constraints and requirements. However, despite their advantages, numerous existing metaheuristic algorithms cannot always guarantee the globally optimum solution. In addition, for the problem of tuning the parameters of hybrid energy systems, no single algorithm can be considered superior in determining the optimal sizing of an isolated hybrid system while reducing the COE within the limitations of the LPSP. Therefore, it is worthwhile to develop new metaheuristic algorithms that can effectively handle the issue of computing the optimal sizing for an off-grid hybrid system comprising PV/BG/Hydrogen Tank (HT)/FC/Electrolyzer (ELE) modules. The hybridization of two or more metaheuristics, and the modification or improvement of existing algorithms, effectively addresses the current optimization challenges [32]. Although hybridization enhances the performance of optimization, it must be carried out with suitable algorithms, so choosing the algorithms is a crucial step. Moreover, it is standard to choose them based on how well they function independently.
Therefore, to develop a more effective algorithm used for solving the hybrid energy system problems, we have studied more recent algorithms and features. In particular, Orca Predation Algorithm (OPA) is a novel algorithm that has begun to attract interest. The OPA algorithm has several advantages. The performance of OPA is evaluated using 23 well-known unconstrained benchmark functions, recent CEC2015 and 2017 benchmark functions, and five constrained engineering issues. Although the OPA algorithm has achieved encouraging results, it is not entirely impervious to the flaws that metaheuristics may experience. Indeed, despite being effective and powerful optimization tools, metaheuristics can run into problems. Nonetheless, any original metaheuristic algorithm has some drawbacks that impair functionality and cause slow convergence or trapping in local optima. As a result, these algorithms must be enhanced by changing the original method [33] or combining two algorithms to adjust the search techniques [34,35]. According to the optimization problem, the main problems mentioned in the studies are the algorithm's slow convergence speed, its tendency to get stuck in local optima, how much algorithm parameters affect algorithm performance, and how poorly exploration and exploitation are balanced. So, this paper proposes a modified Orca Predation Algorithm (mOPA) to address these limitations. The Lévy flights (LF) strategy has shown good results in enhancing metaheuristics performance [36]. Also, OBL [37] is one of the most beneficial methods for improving the search performance of the metaheuristics [38]. The mOPA method was utilized to compute the optimal sizing for an off-grid hybrid system comprising PV/BG/Hydrogen Tank units (HT)/FC/Electrolyzer (ELE) modules to demonstrate this modified approach's usefulness. The results obtained from utilizing the mOPA are compared with results from the original OPA method and other techniques used in [12] for the same hybrid system. This work brought attention to the possibilities of using biomass systems with fuel cell technology for energy storage. Utilizing recent developments in data science and modeling methodologies, which give the required technical tools for informing decision-making, will help to enable the successful realization and deployment of these sophisticated technologies.
The contributions of the paper are summarized in the following points:
-This paper proposes an improved mOPA algorithm that combines two enhancing strategies: Lévy flight and OBL.
-mOPA is applied to solve global optimization problems. Moreover, we compare its performance with different well-known metaheuristic algorithms.
-The performance of mOPA compared to competitors is demonstrated using the CEC'20 test suite problems.
-The proposed mOPA is applied to develop an optimal design for an isolated hybrid PV/BG/HT/FC/ELE system for supplying a load in the Abu-Monqar region, located in Egypt.
This paper is structured as follows: Section 2 describes the mathematical model of the original OPA method required to construct the suggested modified algorithm, the OBL concept, and the Lévy flight method. The mathematical model of the suggested mOPA algorithm is presented in Sect. 3. Section 4 discusses the real-world application, divided into four subsections. These subsections describe the suggested hybrid system component design, the optimization problem formulation, the operation energy management strategy, and the description of the project study site, respectively. Section 5 presents the design findings and discussion, containing the performance results of the proposed mOPA on the CEC'20 benchmark functions and the results of the proposed hybrid system. Finally, Section 6 provides this paper's conclusion and future work.
Preliminaries
This section discusses the techniques essential to building the proposed method. The mathematical model of the original Orca Predation Algorithm (OPA), the OBL concept, and the Lévy flight technique are described in detail.
Orca predation algorithm (OPA)
OPA is a novel bio-inspired metaheuristic algorithm developed by Yuxin et al. [39]. It mimics the hunting behavior of orcas and simplifies it into three phases: the driving, encircling, and attacking phases. OPA uses different parameter settings for the driving and encircling steps to balance the exploitation and exploration processes. In the attacking phase, the best solution may be determined without sacrificing the particles' diversity, after considering the positions of several best orcas and several randomly chosen orcas.
The detailed mathematical formulations of the OPA algorithm are as follows:
1. Development of a colony of orcas. A group of N_n orcas is used in OPA, represented in 1D, 2D, 3D, or higher-dimensional space. As in Eq. 1, X = [x_1, x_2, x_3, ..., x_{N_n}], where X denotes the orca population of candidate solutions, x_{N_n} denotes the position of the N_n-th individual orca, and x_{N_n,Dim} is the position in the Dim-th dimension of that individual.
2. Chasing phase. This phase is divided into two steps: driving and encircling. The parameter p_1 is used to decide which of the two steps an orca performs. It is a constant in [0, 1], and a random number is generated in [0, 1]. If this random number is greater than p_1, the driving process is applied; otherwise, the encircling process is applied.
3. Driving process. To prevent the orca group from straying from the target, it is vital to regulate the group's central position and keep it close to the prey. The moving speed of the orca and the corresponding position update are given in the following equations, where t indicates the iteration number, V^t_{chase,1,i} represents the chasing speed after choosing the first chasing step, and V^t_{chase,2,i} represents the chasing speed after choosing the second. Moreover, a, b, and d are random numbers in [0, 1], e is a random value in [0, 2], F is equal to 2, and q is a value in [0, 1] used for choosing the chasing technique, while M indicates the average location of the orca population, as shown in Eq. 4.
There are two chasing methods depending on the orca population size. If the orca group is large, i.e., rand > q, the first process is applied; otherwise (rand ≤ q), the second process is applied, as shown in Eq. 6.
4. Encircling of prey. In this step, the orcas are updated using the positions of three randomly selected orcas, as expressed in Eqs. 7-9, where Maxitr is the maximum number of iterations, d1, d2, and d3 represent the three randomly selected orcas from the N_n orcas with d1 ≠ d2 ≠ d3, and x^t_{chase,3,i} is the position after selecting the third chasing technique.
5. Position changes during the encircling phase. The positions are adjusted by comparing fitness values, where f(x^t_{chase,i}) is the fitness relevant to x^t_{chase,i}, and f(x^t_i) is the fitness relevant to x^t_i.
6. Attacking of prey. The four best attacking positions in a circle are represented by four orcas. The following equations (Eqs. 10-12) are used to determine the orca's movement speed and location during an attack.
Here V^t_{attack,1,i} and V^t_{attack,2,i} are the speed vectors; x^t_1, x^t_2, x^t_3, and x^t_4 are the four orcas in the best positions; d1, d2, and d3 are the three randomly chosen orcas from the N_n orcas in the chasing step with d1 ≠ d2 ≠ d3; x^t_{attack,i} identifies the position after the attacking process; g_1 is a random number in [0, 2]; and g_2 is a random number in [−2.5, 2.5].
7. Position changes during the attacking phase. The position of the orca is kept within the lower boundary l_b of the problem and can be adjusted using the scheme in Eq. 13, where p_2 is a value in [0, 1].
To conclude, the steps above illustrate the implementation of the OPA algorithm; Step 2 of the preceding procedure is repeated if the optimal output solution has not been reached.
Opposition-based learning (OBL)
Metaheuristic algorithms often begin with a random initial population, and over the iterations of the optimization process, the population agents increase the chance of reaching the best solutions. Since these algorithms have a stochastic nature, the convergence time is mainly related to the distances between the initial guesses and the promising or optimal solution. As a result, the closer the initial solutions are to the best solution, the more quickly the metaheuristic algorithm converges through the problem search space, and vice versa. OBL is a technique used to improve a search algorithm's convergence rate. It is an effective technique to prevent stagnation in candidate solutions [37]. The main idea of OBL is to evaluate candidate solutions in pairs, with one solution representing the current best solution and the other solution representing the "opposition" or "antithesis" to the current best solution. Comparing these two solutions allows the algorithm to identify promising areas of the search space more quickly, thereby speeding up the convergence process toward an optimal solution. H.R. Tizhoosh developed the idea of OBL [40]. OBL improves the exploitation ability of a search mechanism. In metaheuristics, convergence typically occurs when the initial solutions are nearer the optimal site; otherwise, late convergence is anticipated. By considering opposing search regions, which may be nearer to the global optimum, the OBL technique discovers better solutions in this case. The OBL works by traversing the search space in both directions: one path makes use of the initial solution, while the opposite path is established by its opposition, and the better of the two solutions is then retained [41]. The following describes the concept of OBL:
• Opposite number. OBL is explained through the concept of opposite numbers, which can be presented by the following expression.
Consider a real number x belonging to the range [u, w], with [u, w] ⊂ R; the opposite number of x is defined by Eq. (14) as x̄ = u + w − x.
• Opposite point. Assuming x = (x_1, x_2, ..., x_Dim) is a point in Dim-dimensional space with each x_j in [u_j, w_j], its opposite point is obtained component-wise as x̄_j = u_j + w_j − x_j.
• Opposition in optimization. In the optimization method, the opposite point x̄ is compared with the point x according to the objective function. If f(x) is better than f(x̄), then x is not replaced; otherwise, x is replaced by x̄. In this way, the solutions of the population are updated according to the better of x and x̄ [42].
In OBL, the optimization process is completed using the solution with the best fitness determined by simultaneously evaluating both current candidate and opposition-based solutions. This comparison process helps to increase the speed at which the optimizer converges to promising solutions.
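A minimal Python sketch of this opposition step is given below; the bounds, population shape, and the example objective are illustrative assumptions and not values taken from the paper.

```python
import numpy as np

def opposition_step(population, lb, ub, objective):
    """Keep, for each individual, the better of x and its opposite.

    The opposite of x_j in [lb_j, ub_j] is lb_j + ub_j - x_j. Each candidate
    is compared with its opposite by the objective (minimization assumed)
    and the better of the two is retained.
    """
    opposite = lb + ub - population                   # opposition-based points
    f_pop = np.apply_along_axis(objective, 1, population)
    f_opp = np.apply_along_axis(objective, 1, opposite)
    keep_opposite = f_opp < f_pop
    return np.where(keep_opposite[:, None], opposite, population)

# Example with a simple sphere objective on 5 individuals in 3 dimensions.
rng = np.random.default_rng(2)
lb, ub = np.full(3, -10.0), np.full(3, 10.0)
pop = rng.uniform(lb, ub, size=(5, 3))
print(opposition_step(pop, lb, ub, lambda x: float(np.sum(x ** 2))))
```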
Lévy flight (LF)
Paul Lévy proposed the LF, and Benoit Mandelbrot elaborated on it. The step lengths in LF follow a probability distribution with heavy tails. It is one of the most common flight patterns in naturally occurring surroundings [43]. The step sizes of LF are calculated using the probability function shown in Eq. (16) [44], where x_j denotes the flight length and 1 < a ≤ 2 is the power exponent. The probability density of a Lévy step in integral form is shown in Eq. (17): a decides the distribution index, and q sets the scale unit. When a = 2, it signifies a Gaussian distribution, and when a = 1, it signifies a Cauchy distribution [45]. Equation (17) admits the series expansion in Eq. (18) only when x takes very large values, where Γ is the gamma function.
For the distribution index, a value between 0.3 and 1.99 is a practical choice for generating a Lévy stable process. Mantegna et al. [46] created a method for generating random numbers based on the Lévy distribution, given by Eq. 19, where x and y are drawn from normal distributions with standard deviations σ_x and σ_y given in Eqs. (20) and (22).
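A short Python sketch of Mantegna's procedure for drawing Lévy-stable step lengths follows; the exponent beta = 1.5 is an illustrative choice rather than a value prescribed by the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(size, beta=1.5, rng=None):
    """Draw Lévy-flight step lengths with Mantegna's algorithm.

    x ~ N(0, sigma_x^2), y ~ N(0, 1), and step = x / |y|^(1/beta), where
    sigma_x is chosen so that the ratio follows a Lévy-stable law of index beta.
    """
    rng = rng or np.random.default_rng()
    sigma_x = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    x = rng.normal(0.0, sigma_x, size)
    y = rng.normal(0.0, 1.0, size)
    return x / np.abs(y) ** (1.0 / beta)

# Mostly small steps with occasional long jumps (heavy tail).
print(levy_steps(5, rng=np.random.default_rng(3)))
```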
Shortcomings of the original OPA
The original OPA algorithm has shown good performance in solving global and real-world optimization problems. However, it suffers from a lack of exploration, as it searches only inside the region identified by the orcas. This behavior of the basic OPA algorithm can trap the whole population in local optima and may lead the algorithm to premature convergence, especially in complex and high-dimensional problems. Moreover, the No Free Lunch (NFL) theorem promotes the concept that no single optimization algorithm can perform well on all optimization problems. Therefore, the Lévy flight strategy and the OBL mechanism have been introduced in the proposed mOPA algorithm to overcome the shortcomings of the original OPA.
Initialization phase of mOPA
Following the OPA algorithm, the mOPA algorithm begins by generating an initial population of N_n individuals; each individual has dimension Dim in the search space limited by the lower and upper boundaries l_b and u_b. The positions of the orcas are defined according to l_b and u_b, as shown in Eq. 23, in addition to the maximum number of iterations Maxitr and the selection probabilities p_1 and p_2.
Then, the diversity of mOPA in the search process is improved by applying the OBL strategy during the initialization phase, which enhances the search operation via Eq. 24, where Opp_s is a vector obtained by performing the OBL.
The fitness evaluation phase of mOPA
Each orca's fitness value is determined, and the best one is selected as X_best.
Chasing phase
This phase is divided into two steps: the driving and encircling processes. The parameter p_1 is used to adjust the probability of the orca performing each of these two steps. It is a constant in [0, 1], and a random number is generated between [0, 1]. If this random number is greater than p_1, the driving phase is applied; otherwise, the encircling phase is applied.
Perform the LF
LF is applied to acquire new positions during the chasing phase in the driving step using the following equations: There are two chasing methods based on the orca population size as illustrated in Eq. 27.
Encircling step
After applying the LF strategy in the previous step, the positions are updated using Eqs. 7 to 9, as shown in Sect. 2.1.
Attacking phase
The orcas change their positions throughout the attacking phase as discussed in Sect. 2.1. The positions are updated using Eqs. 10-12, and they are then adjusted with respect to the lower bound l_b as shown in Eq. 13.
Termination criteria of mOPA
The proposed mOPA optimization process is repeated until the stopping criterion is met. The pseudo-code of the proposed mOPA algorithm is provided in Algorithm 1, and the flowchart of the mOPA algorithm is presented in Fig. 1.
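Because Algorithm 1 and the flowchart are not reproduced here, the Python skeleton below only sketches how the phases described above fit together (OBL-enhanced initialization, a chasing step perturbed by a heavy-tailed jump, an encircling step toward random peers, and greedy replacement). The specific position-update rules are simplified placeholders, not the paper's Eqs. 25-27.

```python
import numpy as np

def mopa_sketch(objective, lb, ub, n_orcas=30, max_iter=200, p1=0.1, seed=0):
    """Structural sketch of mOPA; update rules are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    pop = rng.uniform(lb, ub, size=(n_orcas, dim))
    # OBL on the initial population: keep the better of x and lb + ub - x.
    opp = lb + ub - pop
    fit_pop = np.apply_along_axis(objective, 1, pop)
    fit_opp = np.apply_along_axis(objective, 1, opp)
    pop = np.where((fit_opp < fit_pop)[:, None], opp, pop)

    best = pop[np.argmin(np.apply_along_axis(objective, 1, pop))].copy()
    for _ in range(max_iter):
        for i in range(n_orcas):
            if rng.random() > p1:   # driving step with a heavy-tailed (Levy-like) jump
                step = rng.standard_cauchy(dim) * 0.01 * (ub - lb)
                cand = pop[i] + step + rng.random() * (best - pop[i])
            else:                   # encircling step toward three random peers
                a, b, c = pop[rng.choice(n_orcas, 3, replace=False)]
                cand = (a + b + c) / 3.0 + rng.random() * (best - pop[i])
            cand = np.clip(cand, lb, ub)
            if objective(cand) < objective(pop[i]):   # greedy replacement (attack)
                pop[i] = cand
                if objective(cand) < objective(best):
                    best = cand.copy()
    return best, objective(best)

print(mopa_sketch(lambda x: float(np.sum(x ** 2)), np.full(5, -10.0), np.full(5, 10.0)))
```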
The proposed hybrid system design
In order to serve a small remote area in Egypt, an off-grid hybrid system is simulated and evaluated to determine the best size that satisfies the electricity demand while reducing costs. This hybrid system's primary components are PV, BG systems, ELE units, HT, and FC, which are described in detail below:
Photovoltaic solar module (PV)
The following equations can be utilized to compute the PV array's generated power P_PV(t) and the cell temperature T_CELL [47-49]. The cell temperature T_CELL(t) is computed from the solar radiation intensity ISR(t), where ISR(t) is the solar radiation intensity present at any given time t, N_PV is the number of PV units, P_PV^rat is the PV rated power, η_w is the wiring efficiency, and η_PV is the PV module efficiency, while T_c, T_N, and T_A are the maximum-power temperature coefficient of the PV modules, the cell temperature at normal operating conditions, and the ambient temperature, respectively. The main characteristics of the PV and inverter units are indicated in Table 1.
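As a numerical illustration of how the variables above combine, the sketch below computes hourly PV output in Python. The exact equations are not reproduced in the text, so the NOCT-style cell-temperature relation T_CELL = T_A + ISR·(T_N − 20)/800, the area-based power form, and all numeric defaults are assumptions made for illustration only.

```python
def pv_power_kw(isr, t_amb, n_pv=100, area=1.6, eta_pv=0.17, eta_w=0.98,
                t_coeff=-0.004, t_noct=45.0):
    """Hourly PV array output in kW under assumed standard-model relations.

    isr   : solar radiation intensity in W/m^2 at hour t
    t_amb : ambient temperature in deg C; area is the module area in m^2
    """
    t_cell = t_amb + isr * (t_noct - 20.0) / 800.0      # assumed NOCT-style relation
    derate = 1.0 + t_coeff * (t_cell - 25.0)            # temperature derating of output
    return n_pv * area * eta_pv * eta_w * isr * derate / 1000.0

print(pv_power_kw(isr=850.0, t_amb=32.0))
```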
Biomass system (BG)
A small-scale downdraft gasification technique was employed in this study, which transforms solid biomass into a gaseous fuel (called producer gas or syngas) that is used to power turbines. The system performance can be expressed as follows [50-52], and the power produced from renewable sources is denoted P_RS. In these expressions, η_sy is the producer gas efficiency; LHV_sy and LHV_B are, respectively, the lower heating values of the producer gas and of the biomass material; m_sy and m_B are the mass of the flowing producer gas and of the biomass material, respectively; N_BG is the number of biomass generators; P_G is the hourly biomass consumption rate; B_rat(t) is the rated power of the biomass generators; P_Grat denotes the rated power of the biomass generators; F_m and F_0 represent, respectively, the marginal and no-load fuel usage; F_BG(t) is the generator rated power; E_BG represents the yearly (8760 hours) power generated by the biomass generator; and GF denotes the gasifier utilization factor. The main characteristics of the BG unit are indicated in Table 2.
Electrolyzer (ELE), hydrogen tank unit (HT), and fuel cell (FC) modeling
The electrolyzer (ELE) is a technology that passes an electric current through a liquid, creating a chemical reaction. In this study, a water ELE is used in order to produce ultra-pure hydrogen in an unpolluted manner, where the hydrogen gas is produced and collected at a pressure of 30 bar [53]. This hydrogen cannot be transferred directly to the FC because the reactant pressures within it are up to 1.2 bar [53]. Therefore, a hydrogen tank is linked directly with the ELE [7,54]. Following the process of separating hydrogen and oxygen, the hydrogen gas is moved and stored in the HT before being utilized in the FC to produce energy. The energy transferred to the HT from the ELE, P_ELE/HT, is stated as in [5,53], where η_ELE indicates the ELE efficiency and P_RS/ELE symbolizes the renewable energy that powers the ELE. The stored hydrogen mass in the HT, m_HT, is constrained by its lowest and highest limits during operation [5]. The hydrogen power P_HT(t) and the mass m_HT(t) kept in the HT at a time interval t are stated as in [7,55], where P_HT/FC(t) is the power that the HT sends to the FC, η_HT is the efficiency of the HT, dt is the simulation's time period, and HHV_H is the stored hydrogen gas's higher heating value. The main characteristics of the ELE, HT, and FC units are indicated in Table 3. Depending on the fuel cell's total efficiency η_FC, the amount of electricity that the FC produces, P_FC, can be stated as in [7,55].
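The energy bookkeeping between the electrolyzer, the hydrogen tank, and the fuel cell described above can be sketched as follows. The efficiency values and the energy-based (rather than mass-based) tank state are simplifying assumptions made for this illustration, not parameters taken from Table 3.

```python
def hydrogen_step(p_surplus_kw, p_deficit_kw, tank_kwh, tank_max_kwh,
                  eta_ele=0.74, eta_tank=0.95, eta_fc=0.50, dt=1.0):
    """One time-step of the ELE -> HT -> FC energy loop (kWh-equivalent terms).

    Surplus renewable power charges the tank through the electrolyzer;
    a deficit is covered by the fuel cell discharging the tank.
    Returns the new tank level and the power actually delivered by the FC.
    """
    # Charge: energy sent from the electrolyzer into the tank.
    charge = min(p_surplus_kw * dt * eta_ele * eta_tank, tank_max_kwh - tank_kwh)
    tank_kwh += charge
    # Discharge: energy the FC must draw from the tank to cover the deficit.
    needed_from_tank = p_deficit_kw * dt / eta_fc
    draw = min(needed_from_tank, tank_kwh)
    tank_kwh -= draw
    p_fc = draw * eta_fc / dt
    return tank_kwh, p_fc

print(hydrogen_step(p_surplus_kw=0.0, p_deficit_kw=5.0, tank_kwh=20.0, tank_max_kwh=50.0))
```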
Objective function
The main objective of this system is to minimize the COE, to estimate the chance of insufficient power supply using the LPSP, and to minimize the dummy load's (L_dum) consumption of extra energy (P_EX), in order to keep the system's cost as low as possible. The values of the objective functions are determined using the expressions in [5,56]; the LPSP (Eq. 42) is computed over the 8760 hours of the year from the deficit between the load demand P_dem(t) and the power supplied by the renewable sources P_RS(t), the fuel cell P_FC(t), and the dummy-load term P_dum(t). Here a_1 = 0.3, a_2 = 0.5, and a_3 = 0.2 are the weight factors of the objective functions, Z stands for the optimization problem's control variables, and P_dem(t) is the power load demand (kWh).
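The sketch below illustrates one way of combining the three objectives into a single weighted fitness value with the stated weights. The mapping of each weight to a specific term, the normalization of LPSP and excess energy by the total demand, and the clipping of the hourly deficit are assumptions made for illustration, not the paper's exact Eq. 42.

```python
import numpy as np

def fitness(p_dem, p_rs, p_fc, p_dum, coe, a1=0.3, a2=0.5, a3=0.2):
    """Weighted single-objective value combining COE, LPSP and excess energy.

    LPSP is taken here as yearly unmet demand divided by total demand, and
    the excess-energy term as dumped energy divided by total demand; both
    normalizations are assumptions for this illustration.
    """
    p_dem, p_rs, p_fc, p_dum = map(np.asarray, (p_dem, p_rs, p_fc, p_dum))
    deficit = np.clip(p_dem - p_rs - p_fc, 0.0, None)
    lpsp = deficit.sum() / p_dem.sum()
    excess = p_dum.sum() / p_dem.sum()
    return a1 * coe + a2 * lpsp + a3 * excess

# Four example hours of demand, renewable output, FC output and dumped power.
print(fitness(p_dem=[50, 60, 55, 45], p_rs=[40, 70, 30, 50],
              p_fc=[10, 0, 20, 0], p_dum=[0, 5, 0, 3], coe=0.21))
```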
Constraints
According to the decision variables' upper and lower bounds, the optimization method operates within the limitations listed below:
Cost analysis
The total COE produced by the microgrid components is regarded as an objective function that should be minimized in this work. The hybrid microgrid's total annual cost (TAC), the NPC ($), and the COE ($/kWh) are expressed through the capital recovery factor CRF(r, s) = r(r + 1)^s / ((r + 1)^s − 1) (Eq. 52), where CRF indicates the capital recovery parameter, r represents the interest rate (r = 6%), and s denotes the lifespan of the proposed hybrid system (s = 25 years). C^y_OM represents the total operation and maintenance cost for each component (PV, BG, ELE, HT, and FC), C^y_rep is the total replacement cost for each unit, C^y_an-cap denotes the annualized cost of each subsystem, and C_an-fuel is the biomass unit's annual fuel cost.
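A short Python sketch of these cost relations follows. The CRF expression matches Eq. 52; the relations NPC = TAC / CRF and COE = TAC / (annual energy served), as well as the numeric example values, are the conventional forms assumed here rather than equations quoted from the paper.

```python
def crf(r=0.06, s=25):
    """Capital recovery factor, Eq. 52: r(1+r)^s / ((1+r)^s - 1)."""
    return r * (1 + r) ** s / ((1 + r) ** s - 1)

def cost_metrics(annualized_capital, om, replacement, fuel, energy_served_kwh,
                 r=0.06, s=25):
    """Total annual cost, net present cost and cost of energy.

    TAC sums the annualized capital, O&M, replacement and fuel costs;
    NPC = TAC / CRF and COE = TAC / annual energy served are the
    conventional relations assumed for this sketch.
    """
    tac = annualized_capital + om + replacement + fuel
    return {"TAC": tac, "NPC": tac / crf(r, s), "COE": tac / energy_served_kwh}

print(cost_metrics(300_000, 60_000, 40_000, 25_000, energy_served_kwh=2_000_000))
```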
Operation energy management strategy
The three key cases that make up the operational strategy of the suggested hybrid system are as follows (Fig. 2 shows the flowchart):
1. When the power generated from the renewable sources (PV and BG), P_RS(t), covers the load requirement, the produced power is delivered to satisfy the necessary load demand P_dem, and no power is required from the FC or supplied to the ELE;
2. The ELE will be fed with the excess energy when the amount of renewable energy produced exceeds the load requirement; the hydrogen produced by this process is then used to charge the HT units;
3. When the power generated from renewable sources is not enough to fulfill the load requirement, the FC will utilize the hydrogen contained in the HT to compensate for the lack of energy production. Once the HT capacity is at its lowest point, there is a loss of load.
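A minimal Python sketch of this three-case hourly dispatch rule is given below; the efficiency values and tank limits are illustrative assumptions, and the rule itself simply mirrors the three cases listed above.

```python
def dispatch_hour(p_rs, p_dem, tank_kwh, tank_min_kwh, tank_max_kwh,
                  eta_ele=0.74, eta_fc=0.50):
    """One hour of the three-case operating strategy described above.

    Case 1: renewables cover the load.  Case 2: the surplus feeds the
    electrolyzer until the tank is full, the remainder goes to the dummy
    load.  Case 3: the fuel cell covers the deficit from the tank; if the
    tank is at its minimum, the remainder is counted as loss of load.
    """
    p_fc = p_dump = loss = 0.0
    if p_rs >= p_dem:                                   # cases 1 and 2
        surplus = p_rs - p_dem
        to_tank = min(surplus * eta_ele, tank_max_kwh - tank_kwh)
        tank_kwh += to_tank
        p_dump = surplus - to_tank / eta_ele            # unusable excess power
    else:                                               # case 3
        deficit = p_dem - p_rs
        available = max(tank_kwh - tank_min_kwh, 0.0) * eta_fc
        p_fc = min(deficit, available)
        tank_kwh -= p_fc / eta_fc
        loss = deficit - p_fc                           # loss of load, if any
    return tank_kwh, p_fc, p_dump, loss

print(dispatch_hour(p_rs=40.0, p_dem=55.0, tank_kwh=30.0, tank_min_kwh=5.0, tank_max_kwh=80.0))
```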
The site of the project study
The suggested hybrid system is located in the Abu-Monqar region, Egypt, as indicated on the map in Fig. 3. Figures 4, 5, 6, and 7 show the load profile and the meteorological conditions.
Design results and discussion
Before applying the proposed mOPA to estimate the optimal sizing for an off-grid hybrid system comprising PV/BG/HT/FC/ELE modules, we assess its efficiency on the IEEE Congress on Evolutionary Computation 2020 (CEC 2020) benchmark suite [57]. The proposed mOPA's results are compared with those obtained with WOA [58], SCA [59], the Tunicate Swarm Algorithm (TSA) [60], the Slime Mold Algorithm (SMA) [61], the Harris Hawks Optimization algorithm (HHO) [62], the Runge Kutta optimization algorithm (RUN) [63], and the original OPA. We chose the comparison algorithms according to different criteria, such as the size and complexity of the optimization problem and the algorithms' convergence speed. In addition to the robustness of the optimization algorithms, these comparison algorithms have recently gained much popularity in several engineering and complex application fields and have already been applied to the same problem.
Parameter settings
The algorithm settings are shown in Table 4. Since the study in [64] demonstrates that default values are an appropriate parametrization for algorithm comparison, we set them to their default values. Additionally, default values reduce the possibility of comparison bias because no algorithm benefits from improved tuning. Each simulation is independently run 30 times to ensure a fair benchmarking comparison. Both qualitative and quantitative measurements assess the efficiency of the algorithms. Experiments are done using Windows 10 (64 bit) running on a CPU Core i7 with 8 GB of RAM, and Matlab 2016b is used.
CEC'20 benchmark functions
We used the benchmark functions of the CEC'20 [57] to validate the performance of the proposed algorithm since they are among the most recent benchmarks and are challenging to solve. Table 5 presents the CEC'20 functions with their corresponding optimum values "Fi*" [65].
Statistical results analysis
This subsection presents the comprehensive results and comparisons demonstrating the exploration and exploitation capabilities of mOPA compared with OPA and other well-known algorithms. Table 6 reports the mean fitness, standard deviation, and best and worst results for mOPA compared with the other algorithms over the 10 CEC'20 test functions with dimension Dim = 10. The best (minimum) values are highlighted in bold. As demonstrated in Table 6, the proposed mOPA algorithm achieves the optimal value for the unimodal F-1 test function. For the multi-modal functions F-2, F-3, and F-4, RUN gives the optimal value on the F-2 function, SMA obtains the optimal value on the F-3 function, and the proposed mOPA indicates superior performance on the F-4 test function. Moreover, for the hybrid F-5, F-6, and F-7 test functions, the proposed mOPA algorithm performs better than the remaining algorithms. For the composite functions F-8, F-9, and F-10, the mOPA outperforms the other algorithms and gives the optimal values for F-8 and F-9, while the original OPA achieves the optimal value for the F-10 test function. Generally, the results demonstrate that mOPA outperformed the other algorithms in solving eight of the CEC'20 benchmark functions in terms of mean, standard deviation, and best and worst values. Additionally, mOPA acquired the first rank in the Friedman mean rank-sum test.
Boxplots behavior analysis
Boxplots are regarded as effective analysis tools because they summarize the data distributions into quartiles, showing the realistic distribution of the data in a graphical representation. We displayed the data distribution with boxplots to further analyze the results of Table 6. The algorithm's minimum and maximum data points constitute the lower and upper whisker edges, and the ends of the rectangles mark the lower and upper quartiles. A narrow boxplot indicates a high level of data agreement. Figure 9 presents the boxplots of the data for the CEC 2020 test functions from F-1 to F-10 for Dim = 10. The boxplots of mOPA are quite narrow for most functions, with the lowest values among all comparison algorithms; the distribution achieved by the mOPA algorithm is narrower for most test functions and reaches the minimum values compared to the other algorithms. The boxplot graphics show that the mOPA algorithm consistently finds the best locations for solving the test problems.
Convergence behavior analysis
This subsection analyzes the convergence of the algorithms; Fig. 10 shows the convergence performance of WOA, SCA, TSA, SMA, HHO, RUN, and OPA against the proposed mOPA for the CEC 2020 test problems with dimension 10. In Fig. 10a, the convergence curves of the F-1 function with a unimodal space are presented. The mOPA method shows earlier exploration than the original OPA algorithm. Over the multimodal test functions F-2-F-4, as shown in Fig. 10b-d, the mOPA exhibits a significant performance advantage over the remaining algorithms for F-3 and F-4, while the RUN algorithm shows a significant performance for the F-2 test function. The mOPA also has better results in handling the hybrid functions F-5, F-6, and F-7, as shown in Fig. 10e-g. The composition functions (F-8, F-9, and F-10), as presented in Fig. 10h-j, show that the proposed mOPA obtains competitive performance in solving problems with complex spaces. Figure 11 presents the qualitative analysis of mOPA on CEC'20. The agents' behaviors are depicted in Fig. 11, which includes a two-dimensional (2D) representation of the functions, the search history, the average fitness history, the optimization history, and the diversity. The dimension of these functions is 10. The parameters for the mOPA are the same as in the previous experiment.
Qualitative metric analysis
From the qualitative analysis, the following points are remarkable:
• According to the domain's topology: the first column in Fig. 11 displays the function in 2D space. The functions have a specific structure that makes it possible to judge on which functions the algorithm produces the better performance. (The parameter defaults listed in Table 4 include a = 20 and b = 12, and, for both OPA and mOPA, p_1 = 0.1, q = 0.9, and F = 2.)
• According to the search history: the search history of the agents for all iterations is displayed in the second column of Fig. 11. The search history indicates that mOPA can determine the regions with the lowest fitness values for some functions.
• Regarding the average fitness history: the third column of Fig. 11 provides the average fitness history. This average gives details about the agents' overall behavior and their assistance in the optimization process. The history curves are rising, illustrating that the population is enhanced with each iteration.
• According to the optimization history: the fourth column of Fig. 11 presents the optimization history, which conveys the fitness values achieved over 100 iterations per experiment to depict the progress of the fitness in each iteration. The convergence curves decrease for all test functions, revealing that mOPA guides its agents effectively while searching for the best solution.
• According to the diversity metric: the diversity plot is displayed in the last column. The average distance traveled by the agents during the process is shown in this graph.
Results of the proposed hybrid system
The convergence curves for the optimization process using the mOPA and OPA approaches are shown in Fig. 12.
These optimization techniques were applied 50 times over 50 iterations to select the proper fitness function rate, regulate the randomness of the suggested techniques, verify their stability, and certify their robustness. All optimization approaches are applied in the same manner for the suggested case study. The developed mOPA consistently identifies the best solution to the optimization issue, as shown by the final outcomes of the objective function for the developed mOPA, which fall within a narrow range. The convergence curve of the suggested enhanced mOPA technique (the best objective-function profile over the iterations) is compared to those produced by the original OPA algorithm as well as other conventional optimization methods (MOA, STOA, and SCA) to show the convergence performance and speed of these techniques. Figure 13 depicts the convergence curves for all these algorithms. As shown in this figure, the proposed mOPA method reaches the final value of the objective function faster than the other algorithms. Moreover, the mOPA converges at a lower value of the objective function than the original OPA. The sizing and objective function results of the mOPA and OPA algorithms compared to the other optimization techniques [12] are indicated in Table 7. This table indicates that the proposed mOPA has the best fitness function value (0.1219895), followed by OPA (0.12199026), MOA (0.1219998), STOA (0.1221296), and SCA (0.1224865). By comparing the outcomes shown in this table, it can be seen that the suggested mOPA is the best algorithm for the optimal sizing of the proposed hybrid system, with a COE of 0.209626 $/kWh, followed by OPA, MOA, STOA, and SCA, respectively.
The contribution of each component to the annual cost of the suggested hybrid system obtained with the mOPA and OPA optimization algorithms is shown in Fig. 14. For the two techniques, it is clear that the FC represents the highest annual percentage cost, followed by the PV units, ELE units, inverters, BG system, and finally the HT unit. The proposed hybrid system's power output performance over a 24-hour cycle is shown in Fig. 15. According to the mOPA results, the electrolyzer for producing hydrogen is powered by the extra energy from the PV and BG units, P_RS, when the generated P_RS exceeds the required load. The P_dum will use up this extra power if the HT fills up, while the FC will use the hydrogen stored in the HT unit to make up for a shortfall in power generation when the P_RS output is unable to satisfy the load's power requirements.
For each of the employed optimization algorithms, mOPA and OPA, the statistical performance measures are shown in Table 8. For a more precise comparison of the two optimization approaches, parametric and non-parametric statistical measurements were carried out based on the obtained values of the objective function across 50 iterations with 50 distinct runs for the proposed hybrid system. The target function's minimum, maximum, and mean values are the parametric measurements, whereas the efficiency, median, standard deviation, relative error, and mean absolute error are the non-parametric measurements. Based on the outcomes, the suggested mOPA outperformed the original OPA optimization technique in terms of best values. Figure 16 displays the graphical representation of the final values of the objective function over 50 individual executions for the proposed hybrid system using the recommended mOPA and the original OPA techniques. It can be noted that the suggested mOPA's fitness values fell within a specific range, demonstrating the suggested technique's superior stability compared to the competing techniques. Consequently, using the mOPA optimizer rather than the original OPA optimizer results in better parametric and non-parametric metric values.
Conclusion and future work
This paper proposes a new improved optimization algorithm, called mOPA, based on modifying the original Orca Predation Algorithm (OPA) through a combination of two methods, namely Lévy flight (LF) and opposition-based learning (OBL). The mOPA's performance is evaluated on the CEC'2020 test suite. It was then applied to an isolated hybrid power system to obtain its optimal sizing. The proposed system consists of photovoltaic panels (PV), a biomass gasifier (BG), electrolyzer units (ELE), hydrogen tank units (HT), and fuel cells (FC) to meet the load demand in the Abu-Monqar region in Egypt. The main objectives of the mOPA method are to minimize the energy cost (COE), the loss of power supply probability (LPSP), and the excess energy under the constraints of the suggested hybrid system. In order to demonstrate the effectiveness of the mOPA methodology, the optimization results from other algorithms, including the original OPA, the Sooty Tern Optimization Algorithm (STOA), and the Sine Cosine Algorithm (SCA), were compared to the mOPA technique's results. The comparison results illustrate the dominance of the proposed improved mOPA technique over the other metaheuristic methods in obtaining the minimum COE of the proposed hybrid system (the mOPA achieved the best results with the lowest COE of 0.209626 $/kWh, NPC of 6,140,053 $, and LPSP of 0.059926%). Moreover, the recommended mOPA algorithm outperforms the original OPA algorithm in achieving the best minimum values of the objective function as well as the lowest COE value, with a quick convergence characteristic and better system performance. Based on the demonstrated results, the recommended mOPA algorithm proved to be more suitable for solving the suggested optimization problem. Future research may concentrate on the following points:
-Applying the proposed optimization algorithms for solving other complex optimization problems in electrical applications.

Data availability: Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Declarations
Conflicts of interest The authors have declared that there are no conflicts of interest.
Ethical standard This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2023,
"sha1": "5002fde66b4023aefc6801e6293aa0b4a3324e0f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00521-023-08492-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "bb1844d8b89d3de04d8252059fda351f16724e9f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Passenger lymphocyte syndrome after ABO-incompatible allogeneic hematopoietic stem cell transplantation; dynamics of ABO allo-antibody and blood type conversion
ABSTRACT Passenger lymphocyte syndrome (PLS) is a specific subtype of graft versus host disease (GVHD) following allogeneic hematopoietic stem cell transplantation (allo-HSCT) characterized by an immune-mediated hemolysis caused by donor-derived B cells. However, the precise nature of PLS has not been well characterized due to its rarity. We herein report two cases of PLS following ABO-incompatible HSCT whose clinical course and dynamics of anti-ABO allo-antibody and blood type conversion were closely examined. Both cases demonstrated acute hemolysis upon engraftment, and the presence of high-titer allo-antibody against the recipients' red blood cells (RBCs) helped us to reach the diagnosis of PLS. Hemolysis in both cases showed spontaneous improvement with prednisolone and supportive therapy including transfusion and fluid support. In one case with blood type O, the patient recursively developed PLS in the second and the third HSCT from ABO-mismatched donors, leading to the hypothesis that an original blood type O may serve as a background for acute elevation of serum anti-ABO antibody and therefore a risk for developing PLS in multiple ABO-incompatible HSCTs. When hemolysis is noted following ABO-incompatible HSCTs, PLS should be considered and measurement of anti-ABO antibodies is warranted.
Introduction
Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is widely performed as a curative treatment for various hematological diseases. Passenger lymphocyte syndrome (PLS), a rare subtype of graft-versus-host disease (GVHD), is a hemolytic reaction following minor or bidirectional ABO-incompatible HSCT [1-5]. In PLS, B lymphocytes derived from the donor graft produce antibodies against the recipient's red blood cells (RBCs), which then leads to hemolysis 1-3 weeks after allo-HSCT [4-7]. Hemolysis occasionally becomes critical, with organ dysfunction and disseminated intravascular coagulation (DIC) [2,7]. However, due to its rarity, the risk factors and prophylaxis for PLS have not been well characterized, and optimal management of PLS remains obscure. Here we report two cases of PLS, whose clinical course, dynamics of anti-ABO antibody titers, and blood type conversion were closely monitored and characterized.
Case reports
Case 1: A 38-year-old woman with blood group O, Rh+ underwent a first allogeneic bone marrow transplantation (BMT) from an unrelated Human Leucocyte Antigen (HLA)-matched male donor with blood group B, Rh+ for mixed phenotype acute leukemia. The conditioning regimen was cyclophosphamide 120 mg/kg and total body irradiation (TBI) 12 Gy, and the prophylaxis for GVHD was tacrolimus and methotrexate (10 mg/m² on day 1 and 7 mg/m² on days 3 and 6). She had remained in complete remission (CR) until two years after the first HSCT, when her leukemia relapsed. She received re-induction chemotherapy followed by a second BMT from another unrelated HLA-matched male donor with group A, Rh+. Before the second BMT, her blood type had been converted to B, Rh+. The conditioning regimen for the second BMT consisted of fludarabine 180 mg/m², intravenous busulfan 12.8 mg/kg, and TBI 2 Gy, and the prophylaxis for GVHD was the same as in the first allo-HSCT. Serum was removed from the graft before infusion. Engraftment was achieved on day 14; however, her hemoglobin (Hb) level dropped suddenly from 7.8 g/dL (day 16) to 3.8 g/dL (day 19). Serum lactate dehydrogenase (LDH) and indirect bilirubin levels were elevated to 739 IU/L and 1.6 mg/dL, respectively, and serum haptoglobin was undetectable (<1 mg/dL) (Figure 1(A)). The direct antiglobulin test (DAT) was negative on day 19; however, testing for anti-RBC antibody showed a high titer of anti-B antibodies (4+) on and after day 19. Based on these clinical data, the patient was diagnosed with PLS, an acute hemolytic reaction caused by donor lymphocyte-derived antibodies directed against the recipient's RBCs. Of note, there were no findings suggestive of transplant-associated thrombotic microangiopathy (TA-TMA) after allo-HSCT, such as increased fragmented red blood cells (schistocytes) in peripheral blood smears, proteinuria, changes in blood pressure, or neurological symptoms, thus excluding the possibility of hemolytic anemia associated with TA-TMA. She had no organ dysfunction and the hemolysis improved by day 26 without additional immunosuppressive therapy. As shown in Figure 1(A), type-B RBCs disappeared on day 19 and the blood type converted to type A on day 26. Hemolysis resolved by day 26 with a clearance of type-B RBCs and lowering of the anti-A antibody titer.
Twelve months after the second allo-HSCT, leukemia relapsed again and she received re-induction chemotherapy. After achieving CR, she went on to receive the third BMT from an unrelated donor with a one-locus HLA-DR mismatch and blood type O, Rh+. The conditioning regimen consisted of fludarabine 125 mg/m2, melphalan 140 mg/m2 and TBI 2 Gy, and the prophylaxis for GVHD was the same as that in the first allo-HSCT. RBCs and serum were separated and removed from the graft before infusion. She achieved engraftment on day 17. However, she again presented with moderate hemolysis on day 17, with an acute decline of Hb levels (3.9 g/dL), elevation of LDH (310 IU/L) and indirect bilirubin (3.0 mg/dL) levels, and high serum titers of anti-A and anti-B antibodies, which were compatible with PLS (Figure 1(B)). Interestingly, however, the DAT was negative on day 17, possibly due to almost complete destruction of the recipient's RBCs. TA-TMA-induced hemolytic anemia was ruled out because there were no laboratory or clinical findings supporting TA-TMA. PLS was mild and resolved without additional immune suppression, with full conversion of the blood type from A to O.
Case 2: A 20-year-old male with blood group B, Rh+ underwent bidirectional ABO-incompatible BMT from an HLA-matched unrelated male donor with blood group A, Rh+ for recurrent Hodgkin lymphoma. The conditioning therapy was myeloablative, consisting of oral busulfan 16 mg/kg and cyclophosphamide 4500 mg/m2, with tacrolimus and methotrexate (10 mg/m2 on day 1 and 7 mg/m2 on days 3 and 6) for GVHD prophylaxis. RBCs and serum were separated and removed from the graft before infusion. The patient achieved engraftment on day 14 without adverse events. However, his Hb level dropped acutely from 8.4 g/dL (day 19) to 6.6 g/dL (day 21), with concomitant elevations in LDH (326 IU/L) and serum creatinine (Cre) levels (1.07 mg/dL) (Figure 2). Further tests revealed normal reticulocytes, a prominent decrease in haptoglobin levels (2 mg/dL), a positive DAT, and a high titer of anti-B antibodies (3+ to 4+). These findings were compatible with PLS. TA-TMA-induced hemolytic anemia was ruled out because of the lack of findings supporting TA-TMA, such as increased schistocytes in peripheral blood smears, proteinuria, high blood pressure and neurological symptoms. Although 0.5 mg/kg of oral prednisolone (PSL) was started on day 21, his LDH continued to increase to 2667 IU/L, and renal function deteriorated (Cre 1.50 mg/dL) on day 24. We subsequently increased the PSL dosage to 1.0 mg/kg and stopped tacrolimus. His LDH and Cre levels improved on day 28, and he became completely independent of RBC transfusion. Blood typing and anti-ABO antibody monitoring showed that, on day 21, peripheral RBCs were a mixture of type A and type B with an extremely high titer (3+) of anti-B antibody (Figure 2). The recipient's type-B RBCs disappeared and donor-derived type-A RBCs became dominant on day 28 when hemolysis improved, although anti-A antibody still existed at low levels (1+).
Discussion
In PLS, it is considered that B lymphocytes in the allograft recognize recipients' RBCs, leading to the production of anti-recipient ABO antibodies and hemolysis [4,5,7]. PLS also occurs following solid organ transplantation [4,8], and the amount of lymphocytes transplanted with the organ appears to be associated with the incidence of PLS [9]. In allo-HSCT, risk factors for PLS are: (1) transplantation from an ABO-minor mismatched donor (especially in the pairing of group A recipients and group O donors), (2) the use of peripheral blood stem cells as a donor source rather than bone marrow (no PLS has been reported following cord blood transplantation), (3) the absence of methotrexate for GVHD prophylaxis, and (4) reduced intensity conditioning therapy before transplantation [1,3,[6][7][8]. It should be noted that reduced-intensity conditioning regimens differ widely in their myeloablative and immunosuppressive capacities and thus data from the literature on PLS may not be entirely comparable. In our cases, neither patient had obvious risk factors other than ABO-minor mismatched transplantation. Although such risk assessment may be helpful in some cases, prediction of PLS is still difficult in current practice. It has been reported that the titer of anti-recipient ABO antibodies in the donor serum before transplantation does not serve as a predictor of the incidence or severity of PLS [5]. In contrast, close monitoring of anti-recipient ABO antibodies after allo-HSCT may be helpful for predicting PLS. Abe et al. analyzed the results of anti-ABO antibody testing in 61 cases following allo-HSCT from minor or bidirectional ABO-incompatible donors. They found 6 cases with transiently elevated anti-recipient ABO antibodies, and all of these cases had hemolytic findings [10]. From these data, the authors suggested that PLS could be predicted by monitoring anti-recipient ABO antibodies after transplantation. Notably, it has been reported that anti-recipient ABO antibodies are associated with acute GVHD and may be useful to predict acute GVHD and poor outcome [11,12].
It is noteworthy that Case 1 experienced recurrent PLS after both the second and the third HSCT. It is well known that ABO antigens are expressed not only on RBCs but also on a wide variety of human tissues, leading to the hypothesis that a significant amount of anti-ABO allo-antibody can be trapped by systemically expressed ABO antigens. If this hypothesis is correct, systemic absorption of anti-ABO allo-antibody will not occur in type O recipients, which may exacerbate the elevation of donor-derived anti-ABO allo-antibody in the patient's serum, aiding the development of PLS. We speculate that this is one of the mechanisms by which Case 1 demonstrated recurrent PLS after the second and the third HSCT. This hypothesis must be confirmed by further investigation of similar cases.
Preventive means and standard treatment for PLS have not been established. A previous study proposed that effective prophylaxis for PLS involves reducing the lymphocyte number in the marrow products [7].
However, this is not always possible in routine practice. The management of PLS has been essentially supportive [7]. For severe cases with massive hemolysis and organ dysfunction, immunosuppressive therapies, such as prednisolone, calcineurin inhibitors, and rituximab, have been proven to be effective [3,5,7,13]. Despite these observations, the standard management of PLS, including practical approaches and the timing and duration of treatment, is not clear and remains to be established in clinical research.
Our cases are intriguing in that the titer of anti-ABO antibodies and the conversion of blood types were closely monitored during the course of PLS. In case 1, the serum concentration of tacrolimus decreased to less than 10 ng/mL on days 9-16, before the onset of PLS in the second HSCT (Figure 1(A)). Calcineurin inhibitors suppress humoral immunity by acting on naive B cells, and we speculate that the suboptimal concentration of tacrolimus may have led to the production of anti-recipient ABO antibodies through insufficient suppression of B cell activation [14]. Another interesting point in case 1 is that the DAT was negative in both episodes of PLS, possibly due to massive destruction of antibody-sensitized recipient RBCs.
Conclusion
We report two cases of PLS after ABO-incompatible allo-HSCT. When hemolytic anemia is noted following ABO-incompatible allo-HSCT, anti-recipient ABO antibodies should be measured at an early point. In addition, type O patients who underwent a prior HSCT with a type A or type B donor may have a higher risk of developing PLS in subsequent HSCTs from ABO-mismatched donors. Additional investigation is required to further reveal the clinical course and mechanism of PLS and to establish its appropriate management.
Figure 2. Clinical course of case 2. Massive hemolysis with a high anti-B antibody titer occurred on days 21-28, about 10 days following engraftment. Note that, even with PSL administration, the high anti-B antibody titer was sustained on day 28, when the blood type converted to A with a resolution of hemolysis.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
The author(s) reported there is no funding associated with the work featured in this article. | 2021-10-23T06:16:56.764Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "e8cb103897378ba3f45318f6aa878a9c50f5929b",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/16078454.2021.1986654?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "99de4c0b5f61f4e1daf5e43bd98fc7b63f0c5320",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249119312 | pes2o/s2orc | v3-fos-license | Improving Faithfulness by Augmenting Negative Summaries from Fake Documents
Current abstractive summarization systems tend to hallucinate content that is unfaithful to the source document, posing a risk of misinformation. To mitigate hallucination, we must teach the model to distinguish hallucinated summaries from faithful ones. However, the commonly used maximum likelihood training does not disentangle factual errors from other model errors. To address this issue, we propose a back-translation-style approach to augment negative samples that mimic factual errors made by the model. Specifically, we train an elaboration model that generates hallucinated documents given the reference summaries, and then generate negative summaries from the fake documents. We incorporate the negative samples into training through a controlled generator, which produces faithful/unfaithful summaries conditioned on the control codes. Additionally, we find that adding textual entailment data through multitasking further boosts the performance. Experiments on three datasets (XSum, Gigaword, and WikiHow) show that our method consistently improves faithfulness without sacrificing informativeness according to both human and automatic evaluation.
Introduction
Despite the fast progress in fluency and coherence of text summarization systems, a common challenge is that the generated summaries are often unfaithful to the source document, containing hallucinated, non-factual content (Cao et al., 2018; Falke et al., 2019, inter alia). Current summarization models are usually trained by maximum likelihood estimation (MLE), where unfaithful and faithful summaries are penalized equally if they both deviate from the reference.
1 Our code is available at https://github.com/COFE2022/CoFE.
Figure 1: Overview of CoFE. The original and fabricated document-summary pairs are shown in blue and red respectively. The trained elaborator first generates fake documents from the summary. Then, the summarizer generates summaries from the fake documents, which are likely to contain hallucinated information (underlined). A controlled generator is then trained to produce the original (faithful) and the fabricated (unfaithful) summaries depending on the control codes.
As a result, when
the model fails to imitate the reference, it is likely to "over-generalize" and produce hallucinated content.
In this work, we address the issue by explicitly teaching the model to discriminate between positive (ground-truth) and negative (unfaithful) summaries. The key challenge is to generate realistic negative samples. Existing work on negative data augmentation mainly focuses on corrupting the reference (e.g., replacing entities) or sampling low-probability model outputs (Cao and Kryscinski et al., 2020; Kang and Hashimoto, 2020). However, the synthetic data often does not resemble actual hallucinations from the model (Goyal and Durrett, 2021) and many methods rely on external tools such as NER taggers.
To generate unfaithful summaries, we propose a simple method inspired by back-translation (Sennrich et al., 2016) (Fig. 1). Specifically, we first generate fake documents using an elaboration model that is trained to produce a document given the summary. We then generate summaries from the fake documents, which are assumed to be unfaithful since they are likely to contain hallucinated information from the fake documents. Given the reference summaries and the augmented negative samples, we train a controlled generation model that generates either faithful or unfaithful summaries conditioned on a faithfulness control code. At inference time, we control the model to generate only faithful summaries. We call our approach CoFE (Controlled Faithfulness via Elaboration). The controlled generation framework allows us to incorporate additional data easily: jointly training on natural language inference (NLI) datasets to generate entailed (faithful) and non-entailed (unfaithful) hypotheses further improves the results.
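To make the control-code setup concrete, the short sketch below shows one way the training pairs could be assembled, with faithful (reference) targets behind an [ENT]-style code and augmented negative targets behind a [CON]-style code, and only the faithful code used at inference. The exact token strings and the convention of prepending them to the source are assumptions for illustration, not the released implementation.

# Minimal sketch: control codes prepended to the source text (formatting assumed).
FAITHFUL, UNFAITHFUL = "[ENT]", "[CON]"

def build_controlled_pairs(positives, negatives):
    """positives/negatives: lists of (document, summary) pairs."""
    data = []
    for doc, ref in positives:
        data.append((f"{FAITHFUL} {doc}", ref))    # this code -> faithful target
    for doc, neg in negatives:
        data.append((f"{UNFAITHFUL} {doc}", neg))  # this code -> unfaithful target
    return data

def inference_input(doc):
    # At test time, only the faithful control code is used.
    return f"{FAITHFUL} {doc}"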
We evaluate CoFE on three summarization datasets: XSum (Narayan et al., 2018), GigaWord (Graff et al., 2003), and WikiHow (Koupaee and Wang, 2018). Both automatic metrics and human evaluation show that our method consistently outperforms previous methods in terms of faithfulness and content similarity to the reference, without sacrificing abstractiveness (Ladhak et al., 2022).
Approach
To learn a summarization model, the commonly used MLE aims to imitate the reference and does not distinguish between different types of errors, so the model may be misaligned with the desired behavior in downstream applications. For example, a faithful summary missing a detail would be preferred over a summary with hallucinated details, even if both have low likelihood under the data distribution. Additional inductive bias is therefore needed to specify what unfaithful summaries are. To this end, we augment negative examples and jointly model the distributions of both faithful and unfaithful summaries. At decoding time, we generate the most likely faithful summary.
Negative data augmentation. The key challenge in generating negative summaries is to simulate actual model errors. Prior approaches largely focus on named entity errors. However, different domains exhibit diverse hallucination errors (Goyal and Durrett, 2021); in addition, certain domains may not contain entities that can be easily detected by off-the-shelf taggers (e.g., stories or instructions). Our key insight is that the reverse summarization process of expanding a summary into a document requires the model to hallucinate details, and thus provides a domain-general way to produce unfaithful information. Instead of manipulating the reference summary directly, we expand it into a fake document, and generate negative summaries from it using the summarization model.
More formally, given a set of document-summary pairs (x, y), we train a backward elaboration model p_back(x | y) as well as a forward summarization model p_for(y | x). Then, given a reference summary y, we first generate a fake document x̃ from p_back, then generate the negative sample y_neg from x̃ using p_for, forming a pair of positive and negative samples (x, y) and (x, y_neg). To avoid data leakage (i.e., training models and generating summaries on the same data), we split the training data into K folds; the negative examples in each fold are generated by elaboration and summarization models trained on the remaining K - 1 folds. We use K = 5 in the experiments.
Adding NLI datasets. We hypothesize that incorporating NLI data through multitasking would transfer knowledge of entailment to the generator, helping it better model faithful and unfaithful summaries. The NLI sentence pairs can be naturally incorporated into controlled generation. Specifically, given the premise as input, we generate entailed and non-entailed hypotheses with control codes [ENT] and [CON], respectively. With the additional NLI data, the loss function becomes L = L_pos + λ1 L_neg + λ2 L_NLI, where L_NLI denotes the NLL loss on the auxiliary NLI examples.
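A minimal sketch of the K-fold negative-generation loop described above is given below. The helper names train_elaborator and train_summarizer stand in for fine-tuning seq2seq models (e.g., BART) on (summary -> document) and (document -> summary) pairs; these names and signatures are assumptions for illustration, not the paper's released code.

from typing import Callable, List, Tuple

def augment_negatives(
    pairs: List[Tuple[str, str]],   # (document, summary) pairs
    train_elaborator: Callable[[List[Tuple[str, str]]], Callable[[str], str]],
    train_summarizer: Callable[[List[Tuple[str, str]]], Callable[[str], str]],
    k: int = 5,
) -> List[Tuple[str, str]]:
    """Return (document, negative_summary) pairs via K-fold elaboration."""
    folds = [pairs[i::k] for i in range(k)]
    negatives = []
    for i, held_out in enumerate(folds):
        rest = [p for j, fold in enumerate(folds) if j != i for p in fold]
        # Models are trained on the other K-1 folds to avoid data leakage.
        elaborate = train_elaborator([(y, x) for x, y in rest])  # summary -> fake document
        summarize = train_summarizer(rest)                       # document -> summary
        for x, y in held_out:
            fake_doc = elaborate(y)       # hallucinated expansion of the reference
            y_neg = summarize(fake_doc)   # likely-unfaithful summary w.r.t. x
            negatives.append((x, y_neg))
    return negatives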
Experiments
Datasets. We evaluate our approach on three datasets: (i) XSum (Narayan et al., 2018), a dataset of BBC news articles paired with one-sentence summaries; (ii) GigaWord (Rush et al., 2015), a headline generation dataset compiled from the GigaWord corpus (Graff et al., 2003); and (iii) WikiHow (Koupaee and Wang, 2018), a dataset of how-to articles compiled from WikiHow.com, each paired with paragraph headlines as the summary. For the auxiliary NLI data, we use SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), both containing pairs of premise and hypothesis sentences.
Baselines. We compare with three baselines: (i) maximum likelihood estimation (MLE); (ii) Loss Truncation (LT) (Kang and Hashimoto, 2020), which adaptively removes high-loss examples that are assumed to be noisy/unfaithful; and (iii) CLIFF (Cao and Wang, 2021), a contrastive learning method based on generated negative samples.2
Implementation. All generation models (including the baselines) are fine-tuned BART-large models (Lewis et al., 2019). We train all CoFE models using Fairseq (Ott et al., 2019) with a learning rate of 3e-5. For decoding, we use beam search with a beam size of 6. We train the elaborators using the same model and learning hyperparameters. We generate one negative sample per document using beam search, except for WikiHow where we use top-5 sampling.3 To ensure that the negative summaries are different from the references, we further remove the 10% of negative summaries with the smallest edit distance to the reference. To train the controlled generator, we set the coefficients (λ1, λ2) of the loss terms such that the reweighted numbers of examples in the original dataset, the negative samples, and optionally the NLI datasets have the ratio 1 : 0.5 : 0.5. Details for other baselines are given in Appendix B.
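The edit-distance filter mentioned above can be sketched as follows; using Levenshtein distance as the string metric is our assumption, while the 10% threshold follows the text.

def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def filter_negatives(refs, negs, drop_quantile=0.1):
    """Drop the fraction of negative summaries closest to their references."""
    dists = [levenshtein(r, n) for r, n in zip(refs, negs)]
    cutoff = sorted(dists)[int(drop_quantile * len(dists))]
    return [(r, n) for r, n, d in zip(refs, negs, dists) if d > cutoff]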
Metrics.
A good summary must cover important content, be faithful to the document, and be succinct. We evaluate the generated summaries from the following aspects. (1) Content selection. We use similarity to the reference as a proxy measure, and report ROUGE (Lin, 2004) and BertScore (Zhang et al., 2020). (2) Faithfulness. For automatic evaluation, we use QuestEval (Scialom et al., 2021), a QA-based metric, which shows better correlation with human judgment on system ranking in our preliminary experiments. We perform human evaluation on 100 randomly selected examples from each dataset. Given a document with the generated summaries from all systems (including the references), we ask annotators from Amazon Mechanical Turk to evaluate whether each summary is supported by the document. Each output is evaluated by 3 annotators. If two or more annotators vote "supported", then we consider the output faithful. The evaluation interface is described in Appendix A.
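A sketch of computing the content-selection metrics above with widely used packages (rouge-score and bert-score) is shown below; the package choice is ours and not a statement about the authors' exact evaluation scripts, and QuestEval has a separate toolkit that is not shown here.

from rouge_score import rouge_scorer
from bert_score import score as bert_score

def content_metrics(references, candidates):
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    rouge = [scorer.score(ref, cand) for ref, cand in zip(references, candidates)]
    rouge_l = sum(s["rougeL"].fmeasure for s in rouge) / len(rouge)
    # BERTScore returns precision/recall/F1 tensors over the whole batch.
    _, _, f1 = bert_score(candidates, references, lang="en")
    return {"rougeL_f": rouge_l, "bertscore_f1": f1.mean().item()}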
(3) Extractiveness. Prior work shows that it is important to measure the extractiveness of the summaries to determine whether a method improves faithfulness mainly by copying from the document. Therefore, we also report coverage and density, which measure the percentage of words and the average length of text spans copied from the document, respectively (Grusky et al., 2018).
Results. Table 1 shows our main results. CoFE outperforms the baselines in human evaluated faithfulness accuracy on 2 out of the 3 datasets. On GigaWord, LT performs the best but it also incurs the largest drop in ROUGE and BertScore and more copying. CLIFF is good at fixing entity errors, but it has less advantage on datasets like WikiHow that contain fewer entities detectable by off-the-shelf taggers. On average, CoFE is less extractive than CLIFF and LT, indicating that our faithfulness improvements are not simply due to more copying. Finally, we find that adding NLI brings a marginal improvement on top of our negative samples.
Are generated negative summaries really unfaithful? Our method relies on the assumption that the elaboration of summaries introduces hallucinations, which results in unfaithful summaries. To verify this, we assess whether our generated negative samples are true negatives. Specifically, we evaluate the faithfulness of the negative summaries generated by our method and CLIFF on 100 randomly sampled documents from each dataset. In Table 2, we report the QuestEval scores and human-annotated faithfulness scores (following the same procedure described in Metrics). As a sanity check, the faithfulness scores of the negative samples are indeed low, consistent with them being true negatives.
Ablation study. Our approach consists of two key ingredients: negative data generated through elaboration and controlled generation. To disentangle the effect of data and modeling, we report the result of using our negative data in CLIFF's contrastive learning framework and using CLIFF's negative data to learn our controlled generator (CLIFF (CoFE data) and CoFE (CLIFF data) in Table 1). Consider the QuestEval score, which has a higher correlation with human-judged system rankings. Using our model with CLIFF data, the performance is consistently lower than CoFE, but improves over CLIFF.
Modeling. Another line of work aims to impose priors on how the summary should be generated through better modeling. Existing work has incorporated structural information from the document, such as relation triplets (Cao et al., 2018), knowledge graphs, and topics (Aralikatte et al., 2021) to bias the summary.
Learning. Liu et al. (2022) use a scoring model to suppress low-quality candidates during training. Liu and Liu (2021) and Cao and Wang (2021) focus on generating negative summaries like us, but they use the contrastive learning framework to incorporate negative summaries into learning. Other work fixes faithfulness errors through a post-processing step by revising the generated outputs (Dong et al., 2020; Chen et al., 2021; Zhao et al., 2020; Cao et al., 2020). Our generation model is also related to Filippova (2020), which learns a similar controlled generator, but with negative data from the training set.
Conclusion
We present CoFE, a data construction and training pipeline to improve the faithfulness of summarization systems. In the negative sample generation stage, fabricated details are produced by the elaborator, and some of them are retained by the summarizer in the negative samples. In the training stage, CoFE adopts a prefix-control framework, which conditions generation on different prefixes so that the model learns to distinguish unfaithful from faithful summaries. Our experiments show that adding NLI data to training can further enhance faithfulness.
Limitations
While our approach is not language-specific, the experiments are limited to English datasets, as current automatic faithfulness metrics work best on English data. Future work should experiment with non-English data. Compared to other data augmentation baselines, our approach requires finetuning five elaboration models for each dataset to avoid overfitting to the training set; thus, it uses more computational resources. This is the most time-consuming part of our method. For example, for XSum, it takes 40 hours on one 32GB V100. Follow-up work may consider a more efficient implementation.
A Human Evaluation Setup
We use Amazon Mechanical Turk as the human evaluation platform. The prompt is shown in Fig. 2. We only hire annotators in the US with a HIT acceptance rate of more than 98%.
B Implementation Details
All models are trained with Fairseq (Ott et al., 2019) and the default learning rate of 3e-5. All summaries are generated using beam search with a beam size of 6. The maximum update steps of the learning rate scheduler are scaled linearly according to the number of samples in the training data. For hyperparameters, we follow the setting for fine-tuning BART on XSum (Lewis et al., 2019), which uses 8 cards, an update frequency of 4, and 20000 total updates; the max update steps are scaled linearly by the extra number of negative and NLI examples. For the weights of the different tasks, an intuitive idea is to fix the ratio of the products of the number of samples and their weights across tasks. We set Product_summarization : Product_negative : Product_NLI = 1 : 0.5 : 0.5. For example, if we have 1000 positive and 1000 negative samples in the training set, the weight of the positive data is 1 and the weight of the negative data is 0.5. If we filter half of the negative samples, reducing them to 500, then the weights of the two tasks are both 1.
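The task-weighting rule above can be illustrated with a tiny sketch: weights are chosen so that (number of examples x weight) keeps the stated 1 : 0.5 : 0.5 ratio across tasks; the function name and output format are ours.

def task_weights(n_pos, n_neg, n_nli, ratio=(1.0, 0.5, 0.5)):
    """Per-task loss weights, anchoring the positive-task weight at 1."""
    target = n_pos * 1.0  # product for the summarization task
    w_neg = ratio[1] / ratio[0] * target / n_neg if n_neg else 0.0
    w_nli = ratio[2] / ratio[0] * target / n_nli if n_nli else 0.0
    return {"pos": 1.0, "neg": w_neg, "nli": w_nli}

# The example from the text: 1000 positives and 1000 negatives give a negative
# weight of 0.5; with only 500 negatives left after filtering, it rises to 1.0.
print(task_weights(1000, 1000, 0))  # {'pos': 1.0, 'neg': 0.5, 'nli': 0.0}
print(task_weights(1000, 500, 0))   # {'pos': 1.0, 'neg': 1.0, 'nli': 0.0}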
Other baselines: For MLE, the BART repository releases hyperparameters and checkpoints for XSum. Based on the hyperparameters for XSum, we scale the max update steps linearly according to the sizes of the training sets of GigaWord and WikiHow. For Loss Truncation, besides the hyperparameters used for MLE, there are some hyperparameters for the loss function; we follow the settings in their paper. For CLIFF, we only use "SysLowCon" as the negative data augmentation method, which they claim is the best single method in their paper. They release checkpoints for XSum and hyperparameters in their GitHub repository. We only re-scale the max update steps.
Computational resources. CoFE on one dataset requires training 11 models, including 10 models to generate negative samples, since each fold needs an elaborator and a summarizer. On a 4 RTX8000 GPU node, each model needs 2 hours to fine-tune. It takes 22 hours to get the final output. BART-large has 400M parameters.
Number of generated samples. For XSum and GigaWord, the threshold is the 0.1 quantile of edit distance. For WikiHow, the quantile is set to 0.2, because the distribution of edit distance concentrates around 0, so we filter out more low-quality negative samples.
Examples of generated negative samples. To qualitatively illustrate the difference between CLIFF and CoFE data, we show some generated negative summaries in Table 4.
Ground truth summary: An inmate at a prison grabbed keys from an officer and, while he was being restrained, a second prisoner tried to take another set of keys.
CoFE negative: A prison officer has been injured in a security incident at a jail.
CLIFF negative: Two inmates have been sentenced to six months in jail after one tried to steal a prison officer's keys.
Ground truth summary: The US says it is "deeply concerned" about the electoral process in Nicaragua a day after Daniel Ortega, the left-wing leader, won a third consecutive presidential term.
The UK government has said it will work with businesses to find a way forward after the UK voted to leave the European Union.
CLIFF negative: Business leaders have called for a taskforce to be set up to deal with Brexit. | 2022-05-28T04:46:56.342Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "b459d699b5e5541f3dbf2d56c1107ddbd1b9695d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "a35e3c3669d9a26e50e7b6a361643b852279ecfb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
2648502 | pes2o/s2orc | v3-fos-license | Intimate partner violence and its association with maternal depressive symptoms 6–8 months after childbirth in rural Bangladesh
Background The prevalence of intimate partner violence (IPV), a gross violation of human rights, ranges widely across the world with higher prevalence reported in low- and middle-income countries. Evidence related mainly to physical health shows that IPV has both direct and indirect impacts on women's health. Little is known about the impact of IPV on the mental health of women, particularly after childbirth. Objective To describe the prevalence of IPV experienced by women 6–8 months after childbirth in rural Bangladesh and the factors associated with physical IPV. The study also aims to investigate the association between IPV and maternal depressive symptoms after childbirth. Design The study used cross-sectional data at 6–8 months postpartum. The sample included 660 mothers of newborn children. IPV was assessed by physical, emotional, and sexual violence. The Edinburgh Postnatal Depression Scale assessed maternal depressive symptoms. Results Prevalence of physical IPV was 52%, sexual 65%, and emotional 84%. The husband's education (OR: 0.41, CI: 0.23–0.73), a poor relationship with the husband (OR: 2.64, CI: 1.07–6.54), and emotional violence by spouse (OR: 1.58, CI: 1.35–1.83) were significantly associated with physical IPV experienced by women. The perception of a fussy and difficult child (OR: 1.05, CI: 1.02–1.08), a poor relationship with the husband (OR: 4.95, CI: 2.55–9.62), and the experience of physical IPV (OR: 2.83, CI: 1.72–4.64) were found to be significant predictors of maternal depressive symptoms among women 6–8 months after childbirth. Neither forced sex nor emotional violence by an intimate partner was found to be significantly associated with maternal depressive symptoms 6–8 months postpartum. Conclusions It is important to screen for both IPV and depressive symptoms during pregnancy and postpartum. Since IPV and spousal relationships are the most important predictors of maternal depressive symptoms in this study, couple-focused interventions at the community level are suggested.
as physical, psychological or sexual mistreatment and/or other controlling behaviours such as economic or spiritual deprivation that are intended by the abuser to cause harm or are perceived by the victim to cause harm. It is a purposeful behaviour designed to achieve domination and control in the relationship (1, p. 28).
Prevalence of intimate partner violence: A multicountry study by WHO reported that lifetime physical and/or sexual violence experienced by women varied between 15 and 71%, when compared with 4 and 54% experienced in the previous years (2). Prevalence data based on surveys conducted during 1998-2007 from 19 countries, in low-and high-income settings, report physical IPV experienced by women ranged between 2 and 64% (3). A lower and narrower range of prevalence is reported for IPV during pregnancy compared to other phases of the marital relationship. The frequency was 2% in Australia, Denmark, Cambodia, and the Philippines, while it was 13.5% in Uganda. The same study reported the prevalence of the experience of IPV the previous year was between 1% (Denmark) and 63% (DR Congo); IPV ever to range between 10% (Philippines) and 64% (DR Congo), and severe IPV ever between 5% (Azerbaijan) and 39.5% (Uganda) (3).
A study from Bangladesh reported almost no difference in the prevalence of IPV between rural and urban areas (4). Women reported an experience of IPV in the previous year to be 16% in the rural (19% in urban) area, IPV ever 42% in rural (40% in urban), and severe IPV ever 19% in both areas. A large study (N = 8,320) from an urban area in Bangladesh reported that 55% of the husbands themselves claimed that they subjected their wives to physical IPV during their marriage, 23% in the previous year; 20% perpetrated sexual IPV; and 60% either physical or sexual IPV ever (5).
The majority (66%) of the women experiencing IPV both in urban and rural areas in Bangladesh do not disclose their experience of IPV nor do they seek help at all (4). When they do confide or seek help, it is to their parents (18%), siblings (16% in urban and 14% in rural), or neighbors (10% in urban and 12% in rural) that they turn. None reported seeking help from women's or nongovernmental organizations (4).
Impact of IPV on health: Evidence indicates both direct and indirect adverse impacts of IPV during pregnancy. The direct impacts of IPV on women's reproductive health or pregnancy outcome include increased likelihood of miscarriage, premature labor or delivery, low birthweight of the newborn (6), higher levels of depression during and after pregnancy (7,8), and insufficient weight gain during the pregnancy (8). Research also shows other adverse outcomes of IPV on women's physical and mental health, such as depression, anxiety, substance abuse (8), and injury and chronic pain (9). Indirect adverse impacts include infant and child mortality, particularly in the case of girls. This is reported to be higher among mothers exposed to IPV in India (10) and among educated mothers with exposure to IPV in Bangladesh (11). IPV exposure is reported to lead to a higher possibility of suicide ideation among women in Bangladesh (12,13).
A recent systematic review and meta-analysis highlights the lack of evidence on the association between domestic violence and perinatal mental disorders (14). As pointed out in the review, most of the available evidence focuses on the impact of domestic violence on obstetric outcomes. The present study aims to describe the prevalence of IPV during pregnancy and 6–8 months after childbirth in rural Bangladesh, and the factors associated with physical IPV. Further, the study aims to investigate the association between IPV and maternal depressive symptoms after childbirth.
Materials and method
The design of the study was cross-sectional, and data were collected at 6–8 months postpartum from a larger longitudinal study titled 'Risk factors and consequences of maternal perinatal depressive and anxiety symptoms: A community based study in the Bangladesh' (15). The longitudinal study among rural Bangladeshi women was conducted from July 2008 to August 2009, following up women from the last trimester of pregnancy (baseline) until 6–8 months postpartum (second and final follow-up). The study was conducted in the rural parts of Mymensingh district, which has a population of about four million. It is predominantly agricultural and located 120 km north of Dhaka, the capital of Bangladesh. As in the rest of the country, approximately 40% of the population of Mymensingh lives under the national poverty line (16). The majority of the women were involved in unpaid domestic work including child care.
The longitudinal study, from which data of the current study are obtained, was approved by the Bangladesh Medical Research Council (BMRC/Eth.C/2008/402) in Bangladesh and the Regional Ethical Board at the Karolinska Institutet, Sweden (2008/919-31).
Sample
At 6–8 months postpartum, 660 mothers with their infants remained of the original 720 women enrolled in the longitudinal study during their third trimester of pregnancy. Thus, 60 women, approximately 8%, were lost to follow-up during the period, due to maternal death at birth (n = 2), neonatal and infant death (n = 17), intrauterine death (n = 1), stillbirth (n = 25), multiple birth (n = 3), and out-migration from the study area (n = 12).
Data collection
Data for the current study were collected using structured interviews. The interviews were conducted by trained interviewers at the respondents' homes at 6–8 months after childbirth. The interviewers were university graduates and received 2-week-long training on data collection. Because the majority of the participants were illiterate or had little education, they were verbally informed before the interviews of the study objectives and of their right to refuse to participate or to terminate the interviews at any point.
Socio-demographic data included information on the respondent's and her spouse's age and completed years of schooling, parity (primi, multipara), number of children (1 child, 2–3 children, 4 or more children), per capita weekly household expenditure on food, and relationship with husband and mother-in-law after birth (good, in between/poor).
IPV was assessed by the instrument used by WHO in a multi-country study including Bangladesh (2). Physical violence by the husband after birth included six questions, scored yes or no: 1) slapped or thrown object at her, 2) pushed or shoved to the ground, 3) punched or hit, 4) kicked, dragged, or beat up, 5) burned on purpose, and 6) threatened to use weapon to hurt. No act of physical violence was scored 0 and acts of physical violence were scored 1–5. The women were asked about physical violence 'ever' (at any point in life), 'during pregnancy' and 'since childbirth up to present time' (6–8 months postpartum).
Sexual violence was assessed as forced sex by the husband since childbirth and up to present time (6–8 months postpartum) – that is, whether the woman reported that she was forced by her husband to have sexual intercourse (yes, no).
Emotional violence was assessed by seven items indicating controlling behavior of the husband – that is, whether the respondent's husband ever 1) tried to keep her from seeing friends, 2) restricted contact with her family, 3) insisted on knowing her whereabouts, 4) got angry due to jealousy, 5) was suspicious about her fidelity, and 6) expected to be asked permission before seeking health care for herself (2). No act of emotional violence was scored 0, and one or more acts of emotional violence were scored 1–6. The women were asked about emotional violence 'ever' and 'since the childbirth until present time' (6–8 months postpartum).
Maternal depressive symptoms were assessed by the Edinburgh Postnatal Depression Scale (EPDS) (17). The EPDS includes 10 items, scored on a 4-point scale (0–3). The scale rates the intensity of depressive symptoms within the last 7 days, and the higher the score, the more depressive symptoms. The items assess dysphoric mood (five items), anxiety (two items), guilt (one item), ability to cope with everyday life (one item), and suicidal thought (one item). The EPDS is used widely around the world. It also has been used in Bangladesh and validated by Gausia et al. (18), reporting sensitivity of the instrument to be 89%, specificity 89%, positive predictive value 40%, and negative predictive value 99%, using 9/10 as the cut-off score. This cut-off score was used in the current study to indicate presence of depressive symptoms as a discrete variable.
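As a concrete illustration of the scoring rule above, the sketch below sums the 10 item scores and applies the 9/10 cut-off used in this study; the function names and the interpretation of the cut-off as a total of 10 or more are our assumptions for illustration.

def epds_total(item_scores):
    """Sum of the 10 EPDS items, each coded 0-3."""
    assert len(item_scores) == 10 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def has_depressive_symptoms(item_scores, cutoff=9):
    """Flag totals above the 9/10 threshold (i.e., >= 10)."""
    return epds_total(item_scores) > cutoff

print(has_depressive_symptoms([1, 1, 1, 1, 1, 1, 1, 1, 1, 0]))  # total 9  -> False
print(has_depressive_symptoms([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]))  # total 10 -> True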
The mother's perception of the infant's temperament was assessed by the Infant Characteristic Questionnaire (ICQ) (19). The ICQ includes 24 items, comprising four sub-scales. All sub-scales are rated 1–7, a higher score indicating a more difficult temperament. The four sub-scales related to mother's perception of infant's temperament are 1) fussy and difficult (nine items), 2) unadaptable (five items), 3) unpredictable (six items), and 4) dull (four items). The ICQ has previously been used in Bangladesh (20).
Data analyses
Descriptive analyses were performed to report sample descriptions and the prevalence of IPV – physical, sexual, and emotional violence – and maternal depressive symptoms (EPDS). Univariate and multivariate logistic regressions were performed to calculate odds ratios (OR). The statistical significance of the OR was tested by confidence interval (CI) at 95%. The Statistical Package for the Social Sciences (SPSS, version 22) was used for all the data analyses.
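As a rough illustration of the regression analysis described above (the study itself used SPSS), the sketch below fits a logistic regression with statsmodels and converts coefficients to odds ratios with 95% confidence intervals; the variable names in the example formula are placeholders, not the study's exact coding.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def odds_ratios(df: pd.DataFrame, formula: str) -> pd.DataFrame:
    """Fit a logistic regression and return ORs with 95% confidence intervals."""
    fit = smf.logit(formula, data=df).fit(disp=False)
    out = pd.DataFrame({"OR": np.exp(fit.params)})
    ci = np.exp(fit.conf_int())
    out["CI_low"], out["CI_high"] = ci[0], ci[1]
    return out

# e.g. odds_ratios(df, "depressive_symptoms ~ physical_ipv + poor_spousal_relation + icq_fussy")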
Results
The mean age of the women with newborn children was around 25 years, and their husbands' almost 33 years (p < 0.001; Table 1). The women in this sample were found to have significantly higher levels of schooling compared to their husbands (p < 0.001). Twelve percent of the women reported having a poor relationship with their husbands and one-third (34%) with their mother-in-law.
The majority of the women in this study had two or more children, while 28% were first-time mothers. Sex distribution of the newborn children was almost equal. Almost one third of the women (32%) indicated maternal depressive symptoms, according to the EPDS, 6–8 months after childbirth (Table 1).
Prevalence of IPV
As shown in Table 2, 70% of the women reported having suffered physical violence from their spouses during their marriage. Almost one-fifth (18%) were subjected to physical violence by their partner during pregnancy, and more than half of the women (52%) reported that their husbands were physically violent with them 6–8 months after childbirth. Sixty-five percent of the women reported being forced to have sex against their will 6–8 months after childbirth.
Eighty-four percent of the women reported emotional violence by their intimate partner, the most common issue being expected to ask the partner's permission for seeking health care for herself (72%) and the partner's anger based on jealousy (55%) ( Table 2).
Predictors of physical IPV
Women whose husbands had five or more years of schooling were less likely (OR: 0.41, CI: 0.23–0.73) than women whose husbands were illiterate or less educated to experience physical IPV (Table 3). Women who reported poor relationships with their husbands were more likely (OR: 2.64, CI: 1.07–6.54) than those with good relationships with their husbands to experience physical IPV. The more emotional violence the women were subjected to, the greater was the likelihood (OR: 1.58, CI: 1.35–1.83) that they would also be subjected to physical violence by an intimate partner.
Predictors of maternal depressive symptoms 6–8 months after childbirth
As indicated in Table 4, women who perceived their newborn child's temperament to be fussy and difficult, as assessed by the ICQ, were more likely to report depressive symptoms 6–8 months after childbirth (OR: 1.05, CI: 1.02–1.08). Women reporting poor relationships with their husbands were five times more likely (OR: 4.95, CI: 2.55–9.62) to indicate maternal depressive symptoms than those with good spousal relationships. Women subjected to physical IPV after childbirth (OR: 2.83, CI: 1.72–4.64) were almost three times more likely than those not reporting IPV to show maternal depressive symptoms 6–8 months after childbirth. Neither sexual nor emotional violence by an intimate partner was found to be significantly associated with maternal depressive symptoms 6–8 months after childbirth. Table 5 shows that household economic status and the sex of the newborn had no statistically significant association with either physical IPV or maternal depressive symptoms 6–8 months after childbirth. In the case of emotional violence, no statistically significant association was found in the bivariate analysis with either household economic status or sex of the newborn (data not shown). Sexual violence indicated no such association with household economic status either. However, the bivariate analysis indicated a statistically significant association between sexual violence and sex of the newborn, but this significance disappeared when other co-variates were entered in the model (Table 5).
Discussion
The main results in this study show a high prevalence of IPV at 6–8 months postpartum among rural Bangladeshi women. IPV was significantly associated with a low educational level of the husband, a poor relationship with the spouse and the mother-in-law, and emotional violence – in other words, the spouse's controlling behavior over the woman. The mother's perception of the infant as fussy and difficult, a poor relationship with the husband, and physical IPV were found to be significant predictors of maternal depressive symptoms among women 6–8 months after childbirth. Physical IPV, but not sexual or emotional violence, was associated with a greater likelihood of reporting maternal depressive symptoms 6–8 months after childbirth. It is important to note that these results are applicable to married women in rural Bangladesh, as the study was conducted with married women only. The high prevalence of IPV experienced by women in this study is consistent with that reported in other studies from Bangladesh (2,4,5). Garcia-Moreno and colleagues (2) reported a prevalence of physical and sexual violence ever of 62% in a rural area of Bangladesh. In the current study, 52% of the women reported physical IPV and 65% reported that they had been forced to have sexual intercourse 6–8 months postpartum. Azziz-Baumgartner and colleagues (21), in a recently published study on low-income Bangladeshi women and displaced ethnic Biharis in Bangladesh, reported the prevalence of physical IPV to be 58% after childbirth and IPV ever during the relationship at 80%. Almost every fifth woman (18%) in our study reported physical violence during the pregnancy. This is higher than reported in a multi-country study, including 19 high- and low-income countries, which reported the prevalence of IPV during pregnancy to range between 2 and 13.5% (3).
Consistent with findings from Bangladesh (22) and the United Kingdom (23), socio-economic status was not associated with IPV in the current study. However, the husband's low education emerged as a predictor for IPV experienced by women 6–8 months after childbirth. An inverse association is reported between educational level, in both women and men, and IPV in general (2) and after birth (21). In addition, either the woman or her husband achieving secondary education was associated with decreased IPV in a multicountry study (24). In our study, we found that the husband's education of 5 years or longer was protective against IPV experienced by women. A similar finding was reported by Naved and Persson (22) in rural Bangladesh. In the current study, the mothers were more educated than their spouses. Inequality in educational level between partners is suggested to increase the risk of women experiencing IPV (24).
*Adjusted for woman's and husband's age and education, per capita daily household expenditure, child temperament, relationship with mother-in-law, controlling behavior of husband and the covariates mentioned in the model.
This study clearly indicates that physical IPV after childbirth leads to a greater likelihood of postpartum maternal depressive symptoms. This is consistent with findings of a study by Valentine and colleagues (25) that reports recent pre-natal IPV to be a strong predictor of postpartum depression among Latinas in Los Angeles. In a systematic review and meta-analysis of 32 cross-sectional and five cohort studies, Beydoun and colleagues (26) summarize that exposure to IPV increased the risk of both major depression and postpartum depression among women. Ludermir and colleagues (7) showed an increased risk for postpartum depression if the mother reported IPV during pregnancy. The highest risk was in women reporting all kinds of IPV (physical, sexual, and psychological violence), but psychological violence was more strongly related to postpartum depression than physical and sexual violence. In our study, we found that only physical violence, and not emotional violence, by the spouse predicted maternal depressive symptoms at 6–8 months. The majority of Bangladeshi men feel that a wife is accountable to her husband for her behavior and that violence is an acceptable form of corrective punishment (27). Consistent with other research (28,29), this study showed that a poor spousal relationship is an important vulnerability factor for women that has a significant impact on reporting postpartum depressive symptoms. Astbury (30) explained that in a patriarchal society like Bangladesh, where men control the family and wealth, and women are legally restricted in seeking divorce, rural women do not necessarily recognize a slap or a shove as violence. Qualitative data collected in the larger study from which the current study is derived indicate that the women felt that their husbands had the right to slap or shove them in case of 'minor offences' (data unpublished). These were not recognized as any form of violence. Although 12% of the women in this study acknowledged a poor relationship with their husbands, its impact on the mental health of the women is clearly noticeable in their increased likelihood of reporting depressive symptoms. The association between partner relationship and postpartum depressive symptoms reported in other research is inconclusive. In a meta-analysis, Beck (31) reported a poor marital relationship to significantly predict postpartum depression. On the contrary, a systematic review conducted by Lancaster et al. (32) reported that the 11 studies that included information on the association between the quality of the partner relationship and depressive symptoms during pregnancy found no significant association between the two variables.
Research indicates that women who experienced IPV during pregnancy were likely to perceive their infants to be temperamentally irritable and difficult (33,34). Our findings suggest that the mother's perception of an infant as fussy and difficult increases the likelihood of reporting depressive symptoms. It is beyond the scope of the study to discuss the relationship between IPV and perceived child temperament and how it impacts upon maternal depressive symptoms.
Finally, although we found a lower prevalence of IPV during pregnancy than postpartum in our study, research has shown that IPV during pregnancy could be a gateway for further violence (7). Several valid and reliable instruments can be found to screen for IPV (35,36), but their effective implementation can be challenging (37). As indicated by a Canadian study (38), it is important to screen for violence among women in relation to mental health consequences and not only for physical symptoms. As indicated by the results of our study, it is important to screen for both IPV and depressive symptoms during pregnancy and postpartum.
Given that IPV and spousal relationship are the most important predictors of maternal depressive symptoms in this study, couple-focused interventions at the community level are suggested to reduce maternal depressive symptoms. In the Bangladesh context, community health workers can screen for IPV and depressive symptoms during antenatal check-ups of pregnant women and the first postpartum period. Given the lack of formal health facilities in rural Bangladesh, community health workers can be trained to provide initial counseling to those detected with depressive symptoms and refer to nearby hospitals if need be. | 2018-01-24T17:25:33.048Z | 2014-09-12T00:00:00.000 | {
"year": 2014,
"sha1": "c3402aff589fc4b3245b0d78c4fc74952ddebeb3",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.3402/gha.v7.24725?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2816b15a370ec8427f5bc0da1b758c4eaa8ccc8e",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213692428 | pes2o/s2orc | v3-fos-license | PBL-team teaching on developing vocational mathematics textbook
The textbook that is used today in some Vocational Schools in the Cirebon region is a mathematics textbook for Vocational Schools from certain publishers. Although the government once published textbooks for Vocational Schools, almost no teachers use them anymore. Teachers generally complain about the available textbooks in terms of their practicality and effectiveness. According to them, the existing book has too broad a material coverage. In addition, no book is available specifically for vocational students, so vocational high school students are not motivated to learn math. This paper explores the development of a mathematics textbook that can be used as a learning resource for vocational students in the Light Vehicle Engineering and Motorcycle Engineering competencies. The research method used is R & D. The research product is a vocational school mathematics textbook on Geometry & Measurement material, designed according to the PBL model and presented through the team teaching method. The results show that the book has a very valid level of validity. Through the use of this textbook, students can apply mathematical theory to vocational practice.
Introduction
The 2017 Vocational Curriculum Revitalization Road Map [1] illustrates that Vocational School graduates are expected to have dual capabilities: the 4Cs (critical thinking, creativity, collaboration, communication), literacy, elasticity, and global citizenship. This is in line with the World Economic Forum [2], which states that the 21st-century skills vocational school students must possess are Character (adaptability to a dynamic environment), Literacy (competence), and Competency (complex problem-solving skills). According to the Vocational School Road Map, the challenge currently faced by Vocational Schools is the alignment between curriculum implementation in schools and the development of technology and industry needs. However, especially for Mathematics subjects, curriculum implementation is currently more academic than vocational. Therefore, it is necessary to harmonize and align curriculum implementation with the needs of industry and with the Content Standards, Graduate Competence Standards, Process Standards, and Assessment Standards. Mathematics learning materials should be based on adoption, collaboration, and adaptation between Core Mathematics Competencies and Vocational Competencies. Law of the national education system No. 20/2013 states that vocational education is a secondary education that prepares learners primarily to work in a particular field. To meet that goal, vocational school curriculum aspects are developed to be integrative, student-based, constructive, flexible, and tailor-made to needs, focusing on optimal student development, sustainable learning, and skills accumulation. Implementation of learning programs in Vocational Schools aims to develop all the potential of students so that they have work insight and are able to transform themselves in response to changing demands of the world of work. Implementation of learning can be more efficient if it replicates the work environment as closely as possible to what happens in the actual workplace. In addition, the implementation of learning can also be more effective if students receive ample stimulus to develop their thinking skills.
One medium that can help develop students' thinking skills is the provision of appropriate textbooks. In addition to supporting the learning process in the classroom, textbooks also support the improvement of students' cognitive, affective, and psychomotor skills. A mathematics textbook serves as a source of information for students learning mathematics material. The survey results show that the textbook used today in some Vocational Schools in the Cirebon region is a mathematics textbook for Vocational Schools from certain publishers. Although the government once published textbooks for Vocational Schools, the survey results show that almost no teachers use them anymore. Teachers generally complain about the available textbooks in terms of their practicality and effectiveness. According to them, the existing book has too broad a material coverage. In addition, no book is available specifically for vocational students, so vocational high school students are not motivated to learn math. Vocational high school students generally do not have a strong interest in reading, which affects the quality of their understanding and the development of their thinking ability.
Based on these conditions, it is necessary to research the development of textbooks in accordance with the character and objectives of learning in Vocational Schools. The learning model considered appropriate to improve the learning outcomes of SMK students is problem-based learning (PBL) [3]-[8]. Through many problems, students' curiosity is stimulated, raising many questions in solving the problems.
Mathematics is often ignored by vocational students because they generally prefer vocational subjects, even though math is very important for supporting vocational skills, especially for SMK technology & engineering students. Growing motivation to learn math among vocational students is not easy [3], [9]. Therefore, it is necessary to use a learning strategy that is able to accommodate the knowledge needs of vocational students. The PBL presentation is packaged more specifically for vocational students, namely by using the team teaching method.
PBL-Team Teaching
Various studies on the implementation of PBL in education have shown that PBL is effective in improving the learning outcomes and abilities of vocational students [10], [11]. Through PBL, students are able to demonstrate a positive attitude in achieving conceptual and procedural knowledge [12]. PBL is implemented with reference to the stages of problem orientation, organizing students, guiding the investigation, developing and presenting the results, and analyzing and evaluating [13].
Learning should be built on the idea that students learn comprehensively and integrally [14]. PBL-team teaching is a learning model that follows the PBL learning steps with a team teaching method involving more than one expert as a teacher. PBL-team teaching in SMK mathematics learning involves one mathematics teacher and at least one vocational teacher as learning resources. PBL-team teaching greatly enables interdisciplinary collegialization of knowledge. The types of team teaching that can be selected are semi team teaching and full team teaching [15]. The developed textbook is well suited to the full team teaching method.
Textbooks
Judging from its form, teaching materials are grouped into printed materials and electronic materials. The textbook is one of the teaching materials in the form of printed materials containing the study that must be mastered by the learners [16].
The developed textbook of mathematical material on geometry and measurement was assessed by expert validators based on the following aspects:
1) Presents a bibliography 3) Presents the table of contents 4) Presents the introduction 5) Presents a material summary
c. Systematics follows the flow of thought from local to global
d. Compliance with the principle of student-centred learning: 1) Encourages interaction with the students 2) Encourages students to learn in groups
e. Compliance with the rules of good language use: 1) Accuracy of spelling 2) Accuracy of sentence structure
f. Legibility: 1) Presentation of sentences that are easy for students to understand 2) Sentence structures that fit students' understanding 3) Use of language that students can understand
Methods
The method used in this research is the research and development (R&D) method. The textbooks developed have characteristics specific to vocational school students in the Light Vehicle Engineering and Motorcycle Engineering competencies. The material developed is limited to Geometry and Measurement. The development steps follow the procedure of Sugiyono [17], modified to: (1) potentials and problems, (2) data collection, (3) product design, (4) design validation, and (5) design revision. The textbook that was developed was validated by five validators, namely three mathematics lecturers from the University of Swadaya Gunung Jati, one mathematics teacher from SMKN 1 Jamblang, and one mathematics teacher from SMK Samudra Nusantara. The data analysis techniques in this research are qualitative and quantitative data analysis. Qualitative data were obtained through criticism and suggestions from the validators, while quantitative data were obtained through a validation questionnaire.
Akbar [18] revealed that the presentation of textbook achievement level can be calculated using the following formula.
V_ah = (TSe / TSh) × 100%, where V_ah = expert validation (%), TSe = total score achieved, and TSh = total expected (maximum) score.
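As an illustration, the percentage-based validity criterion can be computed directly from the questionnaire scores. The sketch below is a minimal example; the questionnaire totals and the illustrative list of validator percentages are placeholders, not the study's raw data.

```python
def validity_percentage(tse: float, tsh: float) -> float:
    """Akbar's expert-validation criterion: V_ah = TSe / TSh * 100%."""
    return tse / tsh * 100.0

# Placeholder questionnaire totals for one validator (illustrative only).
print(f"V_ah = {validity_percentage(112, 124):.2f}%")   # approx. 90.32%

# Combined criterion: average of the individual validators' percentages.
scores = [90.73, 83.87, 91.13, 96.37]                    # illustrative subset of scores
print(f"combined = {sum(scores) / len(scores):.2f}%")
```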
The description of textbook
The development of the vocational school textbook was carried out using the development steps of Sugiyono [17]. The ten original development steps were modified into five: potentials and problems, data collection, product design, design validation, and design revision. The textbooks developed consist of a student book and a teacher book. The Vocational High School textbook that was developed is a Geometry & Measurement mathematics textbook designed in accordance with the characteristics of PBL and presented with a teaching team that includes a vocational teacher. The content of the teaching materials covers the distance from a point to a line and from a point to a plane in three-dimensional figures. The textbook contains problems and activities that are specifically tailored to the needs of vocational high school students, especially those in the technology & industry group, Vehicle Engineering and Motorcycle Engineering programmes. As support, the textbook is equipped with a student activity book and student activity sheets.
The properness of textbook
Based on the validation by the five validators, the following results were obtained: expert validator I with a validation criterion of 90.73%, expert validator II with 83.87%, expert validator III with 91.13%, expert validator IV with 96.37%, and expert validator V with 96.37%. The combined validation, obtained by averaging the validators' results, gives a validation criterion of 90.53%, corresponding to a validity level of "very valid". Based on the combined validation data, the vocational school mathematics textbook on Geometry & Measurement material can be used in the learning process with little revision. In addition, the validators declared the vocational school mathematics textbook very valid, which means that it already meets the aspects specified in the validation of teaching materials.
Conclusion
Based on the results of the research, it can be concluded that the development of the vocational school mathematics textbook on Geometry & Measurement material was carried out in accordance with the development steps of Sugiyono [17]. The ten development steps were modified into five, namely potentials and problems, data collection, product design, design validation, and design revision. Potentials and problems are the first step of the research, used to identify problems and establish the concepts to be developed in the vocational school textbook. The data collection stage is carried out to obtain the information needed to design the vocational school mathematics textbook so that it can address the problem. The vocational school mathematics textbook that was prepared was then validated by expert validators. Based on the validation results of the five expert validators, a combined validation criterion of 90.53% was obtained, a validity level of "very valid", so the vocational school mathematics textbook on Geometry & Measurement material can be used in the learning process. | 2019-11-28T12:36:28.427Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "c6e81e5160ddf47b009f95700493b3482f3d6752",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1280/4/042007",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c5e66d6a4e27a4aeef8866ff6e7dcc9dcd65e868",
"s2fieldsofstudy": [
"Mathematics",
"Education"
],
"extfieldsofstudy": [
"Physics"
]
} |
270046057 | pes2o/s2orc | v3-fos-license | The hepatoprotective effect of 4-phenyltetrahydroquinolines on carbon tetrachloride induced hepatotoxicity in rats through autophagy inhibition
Background The liver serves as a metabolic hub within the human body, playing a crucial role in various essential functions, such as detoxification, nutrient metabolism, and hormone regulation. Therefore, protecting the liver against endogenous and exogenous insults has become a primary focus in medical research. Consequently, the potential hepatoprotective properties of multiple 4-phenyltetrahydroquinolines inspired us to thoroughly study the influence of four specially designed and synthesized derivatives on carbon tetrachloride (CCl4)-induced liver injury in rats. Methods and results Seventy-seven Wistar albino male rats weighing 140 ± 18 g were divided into eleven groups to investigate both the toxicity profile and the hepatoprotective potential of 4-phenyltetrahydroquinolines. An in-vivo hepatotoxicity model was established using CCl4 (1 ml/kg body weight, a 1:1 v/v mixture with corn oil, i.p.) every 72 h for 14 days. The concurrent treatment of rats with our newly synthesized compounds (each at a dose of 25 mg/kg body weight, suspended in 0.5% CMC, p.o.) every 24 h effectively lowered transaminases, preserved liver tissue integrity, and mitigated oxidative stress and inflammation. Moreover, the histopathological examination of liver tissues revealed a significant reduction in liver fibrosis, which was further supported by the immunohistochemical analysis of α-SMA. Additionally, the expression of the apoptotic genes BAX and BCL2 was monitored using real-time PCR, which showed a significant decrease in liver apoptosis. Further investigations unveiled the ability of the compounds to significantly decrease the expression of the autophagy-related proteins Beclin-1 and LC3B, consequently inhibiting autophagy. Finally, our computer-assisted simulation docking confirmed the observed experimental activities. Conclusion Our findings suggest that derivatives of 4-phenyltetrahydroquinoline demonstrate hepatoprotective properties in CCl4-induced liver damage and fibrosis in rats. The potential mechanism of action may be the inhibition of autophagy in liver cells.
Background
The liver is a vital organ that plays a fundamental role in supporting physiological processes through various metabolic activities [1].It actively participates in the metabolism and processing of drugs and other foreign substances, utilizing its metabolic capabilities to break them down and facilitate their excretion from the body [2].Cytochrome P450s (CYPs) constitute a superfamily of enzymes involved in the bioconversion of a wide range of endogenous and exogenous compounds in the liver [3].Among these numerous CYPs, cytochrome P450 2E1 (CYP2E1) holds a unique position in liver pathophysiology [4,5].
Liver fibrosis is a pathological condition characterized by the abnormal accumulation of extracellular matrix proteins (ECM) in the liver due to chronic injury or inflammation [6].During liver injury, hepatic stellate cells (HSCs) undergo phenotypic changes and are transformed into myofibroblasts, which participate in the excessive production of ECM [7].Autophagy is another important mechanism for maintaining homeostasis by breaking down and recycling damaged organelles and other cellular components [8].This process is mediated by the activation of autophagy-related proteins, such as Beclin-1 and microtubule-associated protein 1 light chain 3 (LC3B) [9].Lipophagy is a specialized form of autophagy that regulates lipid metabolism by selectively engulfing and degrading lipid droplets within cells.It is involved in the activation of HSCs and the progression of fibrosis as it supplies them with the energy needed for their activation [10,11].In this context, autophagy inhibition has emerged as a promising targeted pathway for the development of hepatoprotective agents [9][10][11].
Carbon tetrachloride (CCl 4 ) is a prevalent toxic chemical that can induce immediate damage to the liver [12].The enzymatic action of CYP2E1 initiates the metabolism of CCl 4 within the liver.This enzymatic reaction leads to the formation of highly reactive radicals such as trichloromethyl (CCl3⋅), which triggers an entangled interaction of oxidative stress, inflammation, apoptosis, and fibrosis, making CCl 4 -induced hepatotoxicity a widely used model for evaluating hepatoprotective agents [13,14].
Tetrahydroquinoline (THQ) derivatives hold considerable significance in medicinal chemistry. Owing to their diverse pharmacological activities, THQs are often used as scaffolds for designing bioactive compounds encompassing antimicrobial, anticancer, anti-inflammatory, and acetylcholinesterase inhibitory activities [15][16][17][18][19]. Tetrahydroquinolines have also been reported as an important scaffold possessing hepatoprotective properties (Structure I, Fig. 1) [20]. In addition, the discovery of the ability of several 4-phenyltetrahydroquinoline derivatives (THQ II, Fig. 1) synthesized in our laboratories [18,21,22] to decrease elevated levels of ALT in tested rats further drew our attention to the importance of such a moiety. The structure of these compounds was inspired by the acetylcholinesterase inhibitor (AChEI) tacrine. Finally, reviewing the literature further emphasized the importance of tetrahydroquinoline derivatives as hepatoprotective agents (Structure III, Fig. 1) [23,24].
Fig. 1 Rationale for the synthesis of compounds 1a, 1b, 2a, and 2b
Studying the structures of the above-mentioned compounds revealed the importance of the presence of substituents at the 2-and 4-positions of the tetrahydroquinoline nucleus.Therefore, all the aforementioned discoveries have motivated us to design and synthesize four compounds (1a, 1b, 2a, and 2b) bearing different substituents on the phenyl group present in our previously designed 4-phenyltetrahydroquinolines in addition to different substituents at the 2-position of the pyridine ring for further investigation of the effect of these substitutions on the hepatoprotective activities of the synthesized compounds.Consequently, the objective of this study is to explore the hepatoprotective effect of these four novel 4-phenyltetrahydroquinoline derivatives and assess their safety profile on hepatocytes.Moreover, we aim to elucidate the potential mechanism of action underlying their effect and their structure-activity relationship.
Compounds synthesis
All reagents and solvents were purchased from commercial and international suppliers.Compound melting points in open glass capillaries were obtained using the Thomas-Hoover melting point equipment.Thin-layer chromatography (TLC) using silica gel-precoated aluminum sheets (Type 60 GF254; Merck; Germany) were used to monitor the reactions and determine the purity of the chemicals employed in the study; compounds were identified by exposing spots on TLC sheets to an ultra-violet lamp at k = 254 nm for a few seconds.
Cell culture and MTT assay
Human hepatoma (HepG2) cells were used to evaluate the in-vitro cytotoxicity of the newly synthesized compounds compared to tacrine as their structure-inspiring drug.The HepG2 cell line was obtained from ATCC.The cells were cultured in DMEM with high glucose, supplemented with 10% FBS, 100 U/ml penicillin, and 1% streptomycin were maintained in a 37 °C incubator with 5% CO 2 in a humidified atmosphere.One day before the treatment, cells were seeded at a density of 1 × 10 4 cells/ well in a sterile flat bottom 96-well tissue culture plate.After leaving the cells to adhere for 24 h, cells were treated with different concentrations (7.8125, 15.625, 31.25, 62.5, 125, 250, and 500 μM) of each compound as well as tacrine for 24 h in the 5% CO 2 incubator.
Thereafter, cell viability was assessed using a 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-tetrazolium bromide (MTT) assay obtained from Biobasic Inc. (Canada) [27]. Following the 24-h treatment period, MTT was added to the wells at a final concentration of 0.5 mg/mL, and the cells were incubated for 4 h in the dark. Subsequently, the media were carefully removed from the wells, and the formed formazan crystals were dissolved by adding 100 μL of DMSO. Absorbance readings were taken using a microplate reader (BioTek, USA), and the percentage cell viability was calculated for each condition relative to the control cells treated with the same DMSO concentration.
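For clarity, the viability calculation described above, and the subsequent IC50 estimation, can be expressed as a short script. The sketch below is illustrative only: the blank-corrected absorbances are invented values, and the four-parameter logistic fit is one common way to estimate IC50, not necessarily the exact procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative blank-corrected absorbances (not the study's raw data).
conc = np.array([7.8125, 15.625, 31.25, 62.5, 125, 250, 500])   # μM
a_treated = np.array([0.92, 0.90, 0.85, 0.74, 0.55, 0.33, 0.18])
a_control = 0.95                                                  # vehicle control (same DMSO %)

viability = a_treated / a_control * 100.0   # % viability relative to control

# Four-parameter logistic dose-response curve; IC50 is the inflection point.
def four_pl(x, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

popt, _ = curve_fit(four_pl, conc, viability, p0=[0, 100, 100, 1], maxfev=10000)
print(f"estimated IC50 ≈ {popt[2]:.1f} μM")
```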
Animal studies
Seventy-seven Wistar albino male rats weighing 140 ± 18 g were obtained from the animal house at the Institute of Graduate Studies and Research, Alexandria University, Egypt.Rats were kept at room temperature, under a 12-h light/dark cycle, in well-ventilated polypropylene cages, and had unrestricted access to food and water.This study was conducted following the ARRIVE guidelines and has been approved (Approval Code: 06-2023-2-26-1-144) by the Animal Care and Use Committee of the Faculty of Pharmacy, Alexandria University.
The dose of the newly synthesized compounds (25 mg/ kg/day, p.o.) was chosen based on the therapeutic dose of tacrine that maintained the cholinergic-mediated behaviors of rats in a previous study by Goh et al., which investigated the pharmacokinetic and pharmacodynamic properties of tacrine and other cholinesterase inhibitors [30].
Sample collection and preparation
Rats were euthanized 24 h after the last dose of treatment with an inhaled overdose of isoflurane [31].Blood was extracted through a cardiac puncture, left for coagulation, and the serum was obtained by centrifugation for 10 min at 5000 rpm (GBF501, Centrifuge Cencom-II, Selecta).Livers were collected, then dissected into portions.One portion was fixed overnight in 10% buffered formalin (Adwic-El Nasr Pharmaceutical Co., Egypt) for histological examination.Another portion was homogenized, centrifuged at 10,000 rpm for 10 min, and the supernatants were collected for further investigation.Total protein content of all homogenates were determined using the Pierce ™ BCA Protein Assay Kit (Catalog number: 23227, Thermo-Fisher Scientific Co., USA) following the manufacturer's instructions [32].Liver tissues and homogenates were stored at − 80 °C until used.
Biochemical analysis
A variety of colorimetric serum diagnostic kits were used to assess the toxicity and hepatoprotective potential of the newly synthesized derivatives.Lipid profile parameters, including total cholesterol (TC) and triglycerides (TG), as well as kidney function tests, urea, and creatinine were measured to evaluate the toxicity profile.Liver function tests, including alanine transaminase (ALT), aspartate transaminase (AST), alkaline phosphatase (ALP), and total bilirubin (TBIL) were used for the assessment of both hepatotoxicity and hepatoprotective potential.All diagnostic kits were purchased from Biodiagnostics, Egypt, and were used and stored according to the manufacturer's instructions.
The level of reduced glutathione (GSH) was measured in liver tissue homogenates using a colorimetric kit for GSH (Cat.No.E-BC-K030-M, Elabscience, USA).Malondialdehyde (MDA) levels were determined as thiobarbituric acid reactive substances (TBARS) in liver homogenates according to the method of Draper and Hadley [33].
Histopathological assessment
Serial sectioning was done on the formalin-fixed liver tissues, which were then processed into paraffin blocks.Multiple serial, 5-μm-thick sections were cut and mounted on glass slides.One section was stained by H&E stain to assess the activity, while the other was stained by Masson trichrome stain to assess fibrosis.Both activity and fibrosis were assessed using a light microscope (Olympus, CX 22LED) using the METAVIR scoring system [34].For activity, the number of inflammatory foci (lobular necrosis) and degree of portal inflammation (piece meal necrosis) were evaluated, and then a final histological activity score was given (A0 = none, A1 = mild, A2 = moderate, and A3 = sever).Meanwhile, fibrosis assessment was done on Masson trichrome stained sections.The fibrosis scored according to a 4-tiered system (F0 = no fibrosis, F1 = mild/moderate, F2 = significant, F3 = advanced fibrosis, and F4 = cirrhosis).Other pathologic features, such as the presence or absence of steatosis and/or apoptotic hepatocytes, were assessed if present.
Immunohistochemistry analysis
From paraffin-embedded blocks of liver tissues, 4-to 5-μm-thick sections were cut using a semi-automated microtome.Sections were mounted on positively charged slides.They were stained by alpha-smooth muscle actin (α-SMA) primary antibody (Clone 1A4, ready-to-use, mouse monoclonal antibody, Dako) using the autostainer DAKO link 48.Stained sections were examined under light microscopy.Five random high-power fields were photographed using a microscope adopted camera.Using Image J software, activated HSCs/myofibroblasts were counted in the five high power fields.HSCs were seen as elongated, flat cells within fibrous septa or liver lobules.Only cells with well-visible nuclei were counted [35].
Enzyme-linked immunosorbent assay
The pro-inflammatory and pro-fibrotic cytokines tumor necrosis factor alpha (TNF-α) and transforming growth factor beta (TGF-β) were evaluated in liver tissues using ELISA kits (Catalog Numbers: CSB-E11987r and CSB-E04727r, CUSABIO, USA).For further investigation of the potential mechanism of action of the synthesized compounds, the levels of autophagy-related proteins, Beclin-1 (Catalog Number: CSB-EL002658RA, CUSA-BIO, USA) and LC3B (Catalog No. LS-F19802, Lifespan Biosciences, USA), were quantified in liver homogenates.Furthermore, the activity of the CYP2E1 enzyme was determined in liver cells using an ELISA kit (Catalog Number: CSB-E09782r, CUSABIO, USA).
Quantitative real-time polymerase chain reaction
Total RNA was extracted from liver tissues using the phenol/guanidine extraction method (miRNeasy Micro Kit, catalog number: 217004, QIAGEN, USA). The cDNA was reverse transcribed from RNA using the Applied Biosystems™ High-Capacity cDNA Reverse Transcription Kit (Catalog Number: 4368814, Thermo-Fisher Scientific). Quantitative real-time PCR was performed to measure the expression of the apoptotic markers BAX and BCL2 using Maxima SYBR Green/ROX qPCR Master Mix 2X (Catalog number: K0221, Thermo-Fisher Scientific) and primer sequences as listed in Table 1. The relative expression of the target gene was calculated from the threshold cycle (Ct) using the 2^−ΔΔCt method [36].
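The 2^−ΔΔCt calculation referenced above can be made explicit with a small sketch. The Ct values below are placeholders, and the reference gene named in the comment is only an assumed example (the study's reference gene is listed in its Table 1, which is not reproduced here).

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Livak 2^-ΔΔCt: fold change of a target gene vs. a reference gene,
    normalised to the control (untreated) group."""
    delta_ct_sample = ct_target - ct_reference             # ΔCt of the treated sample
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl  # ΔCt of the control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control    # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values (target = BAX, reference = a housekeeping gene such as GAPDH).
fold_change = relative_expression(24.1, 18.0, 26.3, 18.2)
print(f"BAX fold change vs. control: {fold_change:.2f}")   # 4.00 with these invented Ct values
```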
In-silico studies and docking protocol
In-silico docking experiments of the synthesized compounds against Beclin-1, LC3B, and CYP2E1 were performed and compared to the observed biological testing results.Molecular Operating Environment (MOEDock 2015) software (Chemical Computing Group, Montreal, QC) was utilized to run computer-assisted simulation docking experiments using an MMFF94X force field.Crystal structures of Beclin-1, LC3B, and CYP2E1 were obtained from the Protein Data Bank (PDB ID: 6TZC, 5GMV, and 3GPH, respectively) [37][38][39] and were used in the docking simulations.The ligand molecules were constructed utilizing the builder molecule, and the energy was minimized.Ligands were docked within the active site using the MOE Dock.
Statistical analysis
GraphPad Prism software (version 8.0.2) was used for statistical analysis. For parametric data, results were presented as means ± SEM, and the differences between groups were analyzed using one-way ANOVA followed by Dunnett's post-hoc test. Histopathological nonparametric data were analyzed using the Kruskal-Wallis test, followed by Dunn's multiple comparisons test, and presented as medians (minimum to maximum). A p-value < 0.05 was considered statistically significant. The sample size for this study was determined using G*Power 3.1 software [40], taking into consideration the different mean values obtained from previous studies in the literature [24,41,42]. A power analysis was conducted to determine the minimum sample size required to detect a statistically significant effect with effect size 0.55, α-error 0.05, and power 0.85 [43].
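Although the authors performed these tests in GraphPad Prism, the parametric comparison (one-way ANOVA followed by Dunnett's test against the hepatotoxicity group) can be reproduced with open-source tools. The sketch below uses SciPy and entirely invented measurement vectors, purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented ALT-like values (U/L) for three groups of n = 7 rats each.
hepatotox = rng.normal(180, 15, 7)   # comparison ("control") group for Dunnett's test
cpd_2a    = rng.normal(90, 12, 7)
silymarin = rng.normal(85, 10, 7)

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(hepatotox, cpd_2a, silymarin)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's many-to-one comparison against the hepatotoxicity group
# (scipy.stats.dunnett is available from SciPy 1.11 onwards).
res = stats.dunnett(cpd_2a, silymarin, control=hepatotox)
print("Dunnett p-values vs. hepatotoxicity group:", res.pvalue)
```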
In-vitro cytotoxicity assay
The MTT assay revealed significantly higher IC50 values for our compounds (181.15 and 169.15 μM) than for tacrine (IC50 = 123.9 μM), indicating lower cytotoxicity of Cpds 1a, 1b, 2a, and 2b, as shown in Fig. 2.
In-vivo toxicity studies
The oral administration of the tested compounds only, groups (VIII-XI), showed no significant elevation in either of the liver enzymes, ALT or AST, compared to the control group (Fig. 3A).The new compounds also demonstrated no significant increase in total cholesterol or triglyceride levels (Fig. 3B).Nevertheless, Cpd 1a showed a positive impact on the lipid profile by significantly lowering both total cholesterol and triglyceride levels (Fig. 3B).Moreover, these compounds showed no significant effect on both serum urea and creatinine (Fig. 3C).Furthermore, histopathology assessment of the liver sections of rats showed normal lobular architecture of the liver with no observed fibrosis in any of the liver sections.Meanwhile, mild, insignificant lobular inflammation (activity grade 1) was noted in Cpd 1b and less frequently in Cpd 2b (Fig. 4), while no steatosis was noted in any of the treated groups.
In-vivo hepatoprotective activity 4-phenyltetrahydroquinolines mitigate CCL 4 -induced hepatotoxicity
In the hepatotoxicity group, liver injury was manifested as increased levels of serum ALT, AST, ALP, and total bilirubin (Fig. 5). In the groups treated with the compounds under investigation along with CCl4, a significant reduction in ALT, AST, ALP, and total bilirubin levels was observed when compared against the hepatotoxicity group, as shown in Fig. 5. Histological examination of liver tissues was conducted to further validate the aforementioned results using both hematoxylin and eosin (H&E) and Masson's trichrome stains for accurate assessment of hepatic cell morphology, tissue architecture, and fibrotic changes (Fig. 6). Histopathologic assessment of rats that received corn oil (control group) showed normal histology. The liver showed preserved lobular architecture, where hepatocytes are seen radiating as one- to two-cell-thick cords from central veins. Portal tracts, seen in between lobules, are composed of arteries, veins, and bile ducts within scanty fibrous tissue. No inflammation was noted either in lobules or portal tracts. They were scored as A0 (activity) and F0 (fibrosis) according to the METAVIR scoring system. Meanwhile, in the hepatotoxicity group, the liver sections showed disorganized architecture with total loss of normal lobular hepatic structure.
The liver was formed of multiple nodules of different sizes, separated by thick fibrous septa. The hepatocytes showed evident macrovesicular steatosis in most areas, with scattered apoptotic cells. Foci of lobular necrosis were seen, as well as moderate portal and septal inflammation. The liver sections showed a high activity grade (A3) and fibrosis score (F4). Improvement of both fibrosis and activity scores was noted in the different treated groups.
Regarding Cpd 1a treated rats, mild improvements in the fibrosis score (F3) and activity grade (A2) were seen. Macrovesicular steatosis and apoptotic cells were still detected. Meanwhile, Cpd 1b and Cpd 2b showed better improvement in the fibrosis score. Furthermore, Cpd 2a showed the best protective effect on the liver sections, with the livers nearing total restoration of hepatic architecture. The fibrosis score was F0-F1, close to the protective effect of the silymarin group, which dropped the fibrosis score to almost F0. Regarding activity, lobular and portal inflammation improved markedly in Cpd 2a treated rats. No steatosis or apoptotic cells were detected (Fig. 5E).
4-phenyltetrahydroquinolines alleviate CCL 4 -induced oxidative stress, inflammation, fibrosis, and apoptosis
A significant increase in the level of malondialdehyde (MDA) was observed in the hepatotoxicity group, indicating heightened oxidative stress. Concurrently, the treated groups exhibited a significant reduction in MDA levels compared to the hepatotoxicity group (Fig. 7A). Moreover, the hepatotoxicity group displayed a substantial decrease in the level of the endogenous antioxidant glutathione (GSH), whereas the treated groups maintained elevated GSH levels (Fig. 7B). The assessment of the inflammatory markers TNF-α and TGF-β showed a significant increase in both markers within the hepatotoxicity group, while both cytokines exhibited a significant decrease in the treated groups, as demonstrated in Fig. 7C, D, respectively. Additionally, in the immunohistochemical analysis of liver tissues, very few spindle cells were detected in the control group, indicating a low number of α-SMA positive myofibroblasts. In contrast, the hepatotoxicity group exhibited dark brown staining of the fibrotic septa. Upon high-power examination, numerous positive cells were detected within septa and liver lobules, indicating hepatic stellate cell (HSC) activation. This count decreased significantly in the treated groups, as shown in Fig. 8.
Quantitative real-time PCR showed elevated BAX gene expression and decreased BCL2 gene expression in the hepatotoxicity group, resulting in an increased BAX/BCL2 ratio. In contrast, the treated groups demonstrated significantly lower BAX mRNA levels and higher BCL2 mRNA levels, leading to a notable decrease in the BAX/BCL2 ratio (Fig. 7E).
4-phenyltetrahydroquinolines suppress the production of autophagy-related proteins without influencing the activity of CYP2E1
According to our findings, as represented in Fig. 9A, our compounds have no significant effect on CYP2E1 enzyme activity. In contrast, silymarin exerts a substantial inhibitory effect on CYP2E1 enzyme activity. Moreover, all four newly synthesized compounds as well as silymarin significantly suppress the expression of both Beclin-1 and LC3B proteins (Fig. 9B).
In-silico docking studies
Docking analysis using crystal structures revealed that compounds 2b showed the most favorable binding to the Beclin-1 receptor where it formed three hydrogen bonds followed by compound 2a showing two hydrogen bonds, a pi-H bond and a pi-pi bond with the receptor.On the other hand, compound 1a and silymarin came third in the binding interactions where they formed one hydrogen bond with Beclin-1.Finally compound 1b showed only pi-cation interaction with the receptor (Fig. 10A-E).Similarly, the docking simulation of the synthesized compounds with LC3B binding site indicated that all the compounds except for 1b showed two types of bonds with the binding site: two hydrogen bonds in case of 2b, one hydrogen bond and one pi-cation bond in case of 2a and silymarin and a hydrogen bond and a pi-H bond in case of 1a (Fig. 10F-J).Finally, investigating the docking modes of the tested compounds in the CYP2E1 binding sites showed that none of the compounds revealed any bonding with the receptor unlike silymarin which showed five hydrogen bonds with the receptor (Fig. 11).
Discussion
Liver diseases are widespread throughout the world and have become a significant global health burden [44]. Liver diseases often lead to liver fibrosis, which is an initial histological alteration preceding the progression to cirrhosis that can ultimately lead to hepatocellular carcinoma and death [44,45]. Carbon tetrachloride (CCl4) is a well-known hepatotoxic substance [14]. Owing to the metabolic activity of CYP2E1, CCl4 is metabolized into harmful reactive species that cause liver injury through a cascade of complex events involving lipid peroxidation, glutathione depletion, and upregulation of pro-inflammatory and pro-fibrotic cytokine expression, ultimately leading to cell death and hepatocellular injury [12][13][14]. Tacrine was the first FDA-approved cholinesterase (ChE) inhibitor for treating Alzheimer's disease (AD) [46,47]. However, the lack of sufficient selectivity led to adverse effects, particularly hepatotoxicity [48]. Previous studies demonstrated the hepatoprotective properties of 4-phenyltetrahydroquinolines, which were designed based on the structure of tacrine [18,21,22]. The appeal of these compounds stems from their diverse structural and biological characteristics [48][49][50]. In this study, we investigated the toxicity profile and hepatoprotective effects of four novel 4-phenyltetrahydroquinoline derivatives and their potential mechanisms of action.
Fig. 7 The effect of Cpds (1a, 1b, 2a and 2b) on oxidative stress, inflammation, fibrosis, and apoptosis. Oxidative stress markers, A MDA and B GSH levels, were determined spectrophotometrically. The levels of the pro-inflammatory and pro-fibrotic cytokines, C TNF-α and D TGF-β, were quantified using the ELISA technique. E The expression of the pro-apoptotic and anti-apoptotic genes, BAX and BCL2, respectively, was determined using qPCR and the BAX/BCL2 ratio was calculated. Values are represented as mean ± SEM, n = 7. *p < 0.05, ***p < 0.001, ****p < 0.0001 compared to the hepatotoxicity group. ###p < 0.001, ####p < 0.0001 compared to the control group.
HepG2 cells are considered to be a suitable model widely used for studying in-vitro liver toxicity, as they preserve most of the specialized functions of normal human hepatocytes [51]. In-vitro cytotoxicity studies of our compounds on the HepG2 cell line revealed the relatively safe hepatic profile of the compounds. A comprehensive assessment of the in-vivo toxicity profile of the compounds was conducted, utilizing both biochemical and histopathological analyses for a thorough assessment. Previous studies have suggested that an elevation in transaminases (AST and ALT), bilirubin, or alkaline phosphatase serves as an early indicator of liver damage [53,54]. Similarly, serum creatinine and urea levels have commonly been employed for the detection of kidney toxicity [55,56]. Our findings once again demonstrated a safe profile concerning hepatic (ALT and AST) as well as renal (urea and creatinine) biomarkers, with no significant increase observed in the levels of these markers [18]. The safety profile of the compounds in the liver was further evidenced histologically using hematoxylin and eosin stains, as well as Masson's trichrome stain, where all compounds showed normal liver architecture and normal hepatocytes. Concerning the lipid profile biomarkers, serum cholesterol and triglycerides, which provided us with insights into potential hepatological issues [57], our synthesized compounds not only proved to be safe in relation to lipid profiles, but some of them also exhibited the ability to reduce cholesterol and triglyceride levels. Notably, compound 1a showed the most favorable results, followed by 2a, 1b, and, finally, 2b. As mentioned earlier, all compounds could be considered not only safe but also beneficial in some cases regarding the toxicity studies. As for the hepatoprotective testing, CCl4 was used to induce hepatotoxicity in rats. A successful model was obtained, as increased levels of the liver biomarkers ALT, AST, total bilirubin, and ALP were observed along with histological changes in liver tissues [28]. Our results indicated that all four compounds were able to significantly reduce the elevated levels of transaminases. Moreover, compounds 2a and 2b were as beneficial as silymarin (the reference drug) in reducing the CCl4-elevated ALT and AST levels and bringing them back to the control level. Additionally, all compounds showed better results regarding ALP and total bilirubin levels relative to silymarin. For further validation of our biochemical results, a histopathological examination of the liver tissues revealed notable differences among the different groups. The control group displayed normal liver histology with intact lobular structure, minimal fibrous tissue, and no signs of inflammation. Conversely, the CCl4-treated group exhibited severe liver disorganization, extensive fibrosis, and inflammation [58]. The treated groups showed varying degrees of improvement, with Cpd 2a demonstrating the most substantial protection, nearly restoring normal liver architecture and significantly reducing inflammation. Importantly, the efficacy of Cpd 2a closely resembled that of silymarin, which achieved a remarkable reduction in fibrosis and inflammation.
Fig. 9 The effect of Cpds (1a, 1b, 2a and 2b) on CYP2E1 and autophagy. A The levels of CYP2E1 enzyme expression. B Autophagy-related proteins, Beclin-1 and LC3B. Protein expression is measured using the ELISA technique. Values are represented as mean ± SEM, n = 7. ****p < 0.0001 compared to the hepatotoxicity group. #p < 0.05, ##p < 0.01, ###p < 0.001, ####p < 0.0001 compared to the control group.
The hepatotoxic effect resulting from CCl 4 metabolism, mainly triggered by the creation of trichloromethyl radicals, leads to increased level of MDA.The elevated MDA level serves as a clear indicator of intensified lipid peroxidation, and in conjunction with a decrease in the level of the endogenous antioxidant, glutathione, they both provide compelling evidence of oxidative stress in the liver [2,12].According to our results, it was evident that compounds 2a and 2b were highly superior to compounds 1a, 1b and silymarin in reducing MDA levels.In addition, levels of GSH of all tested compounds, regardless of the substituents, were as significantly high as silymarin when compared to the hepatotoxicity group, proving the ability of all compounds to reduce the oxidative stress caused by the reactive oxygen species [59,60].Consistent damage to hepatocytes, as a consequence of free radical formation, triggers the recruitment of pro-inflammatory and pro-fibrotic cytokines, ultimately leading to fibrosis [58].
Assessing the pro-inflammatory mediators (TNF-α and TGF-β) revealed that compounds 2a and 2b exert a better effect on decreasing the level of both cytokines relative to compounds 1a and 1b and silymarin.Persistent liver injury and inflammation stimulate hepatic stellate cell activation, a pivotal process in which quiescent HSCs undergo transformation into myofibroblasts [11].This activation involves notable changes, including the increased expression of α-SMA and the depletion of lipid droplets within HSCs [9].α-SMA serves as a marker of HSC activation and plays a central role in the excessive synthesis and deposition of extracellular matrix proteins, thereby contributing to the development of liver fibrosis [61].In the control group, minimal spindle cells were observed, suggesting a low presence of α-SMA-positive myofibroblasts.In contrast, the CCL 4 group exhibited intense brown staining in the fibrotic septa.Upon closer examination, a significant number of positive cells were identified within both the septa and liver lobules as an indication of HSCs activation.However, the count of positive cells decreased significantly in the treated groups with compounds 2a and 2b showing better levels of significance, closely aligning with silymarin, indicating a decrease in liver fibrosis.
Apoptosis occurs as a consequence of CCl 4 -induced liver damage [14].The administration of CCl 4 significantly and markedly upregulated the expression of BAX, an essential pro-apoptotic gene, while concurrently downregulating the expression of the anti-apoptotic gene BCL2.This resulted in an overall elevation of the BAX/ BCL2 ratio, indicating an increased apoptotic activity.On the other hand, the treated groups exhibited reduced BAX mRNA levels and increased BCL2 mRNA levels.The notable decrease in the BAX/BCL2 ratio within the treated groups signifies a decrease in apoptosis [13].Finally, all the results were highly consistent, emphasizing the superiority of compounds 2a and 2b to all other compounds, indicating the probable benefits of the presence of an amino group rather than a hydroxy group at the 2-position of the tetrahydroquinoline moiety.
In our attempts to explore the potential mechanism underlying the hepatoprotective effects of these compounds, we assessed the levels of the autophagy-related proteins, Beclin-1 and LC3B, as well as the CYP2E1 enzyme.Autophagy holds a significant role in the context of liver fibrosis, regulated by the activation of autophagyrelated proteins, including Beclin-1 and LC3B [10].Beclin-1 plays a crucial role in autophagy initiation by forming a complex with other proteins [62].This complex drives the formation of autophagosomes, which are essential for the sequestration of cellular components targeted for degradation [63].Likewise, LC3B, a pivotal marker of autophagy, undergoes lipidation and is recruited to autophagosomes, facilitating their maturation and cargo degradation [64].Our findings revealed an upregulation of both Beclin-1 and LC3B in the CCl 4 -treated group, suggesting an elevation in autophagic activity within hepatocytes, which subsequently led to an exacerbated activation of HSCs and the deposition of ECM [9,65].Interestingly, our results align with existing studies emphasizing the pro-fibrotic effects of autophagy, through the degradation of lipid droplets within HSCs by lipophagy, a process that supplies the energy essential for the activation and conversion of HSCs into myofibroblasts, ultimately resulting in fibrosis [9,66,67].Upon coadministration of our compounds with CCl 4 , there was a notable reduction in the expression of these autophagyrelated proteins.This observation indicates that our compounds play a significant role in inhibiting autophagy, thereby mitigating the development of CCl 4 -induced fibrosis [9].Conversely, our observed reduction in the expression of Beclin-1 and LC3B following the administration of the compounds under investigation alongside CCl 4 contradicts some studies proposing the anti-fibrotic effects of autophagy activation [11,67].Previous research has suggested that autophagy, when activated, diminishes the accumulation of type I collagen induced by TGF-β activation [68].However, our results imply that 4-phenyltetrahydroquinolines exert their anti-fibrotic effect by inhibiting autophagy, which is extra active upon CCl 4 administration [10].Furthermore, our findings once again demonstrated that compounds 2a and 2b exhibit a greater impact on inhibiting autophagy compared to the other compounds tested, effectively suppressing the expression of both Beclin-1 and LC3B.
Regarding CYP2E1, a member of the cytochrome P450 superfamily that participates effectively in the conversion of CCl 4 into free radicals, silymarin demonstrates a noteworthy decrease in CYP2E1 levels [69].This reduction in CYP2E1 levels led to a decrease in free radical production and, subsequently, mitigated the induction of liver injury [70].In contrast, our compounds showed a minimal to negligible impact on diminishing the expression and activity levels of the CYP2E1 enzyme.This implies that 4-phenyltetrahydroquinolines have no effect on the metabolism of CCl 4 [4].
Our recent findings closely supported our earlier conclusion concerning the significance of various substituents present at the 2-position of the tetrahydroquinoline moiety [18,71].It was evident that the presence of an amino group at the 2-position yielded significantly better results compared to the hydroxy substituent [24].Nevertheless, the limited number of compounds tested was insufficient to validate the conclusion.Accordingly, two more pairs of tetrahydroquinolines, featuring both 2-amino and 2-hydroxy substituents, were synthesized and examined to reassess the previously established conclusion.Fortunately, the outcomes once more demonstrated the superiority of the amino group.Moreover, additional substituents were introduced to the 4-phenyl group, including an amino and a hydroxy group, to further explore whether an amino group at the 4-phenyl ring could compensate the reduced activity observed in the 2-hydroxy compounds.Once more, it became evident that the presence of an amino group at the para-position of the 4-phenyl ring was able to compensate for its absence at the 2-position of the tetrahydroquinoline, with compound 1a consistently demonstrated superiority over 1b across multiple biochemical tests.Furthermore, the results of the in silico molecular docking of our compounds against Beclin-1, LC3B, and CYP2E1 strongly correlated with the practical testing results, demonstrating enhanced binding of the compound pair containing an amino group at the 2-position of the tetrahydroquinoline moiety.
Finally, while our study provides a promising evidence of the hepatoprotective effects of the synthesized 4-phenyltetrahydroquinoline derivatives in animal models, it is crucial to acknowledge the necessity of clinical trials to bridge the gap between preclinical findings and human application.Clinical trials will provide essential insights into the safety profile, efficacy, and potential therapeutic benefits of these compounds for human liver diseases.
Conclusion
This study shows that 4-phenyltetrahydroquinolines are beneficial in lowering CCl4-induced liver damage and fibrosis in rats. The molecular mechanism behind this hepatoprotective effect is suggested to be autophagy inhibition, while silymarin achieves this effect by inhibiting both autophagy and CYP2E1 enzyme activity. Furthermore, it was observed that the presence of an amino group in the 2-position of the pyridyl ring (compounds 2a and 2b) gave much better results relative to the 2-hydroxyl substituted derivatives (compounds 1a and 1b). Such an observation will be further investigated in future studies.
Fig. 4
Fig. 4 Histopathological assessment of the safety of Cpds (1a, 1b, 2a, and 2b) on liver tissues.H&E stained liver sections (n = 5) show normal liver architecture and normal hepatocytes.Mild portal or lobular inflammation is seen occasionally in Cpd 1b and 2b (arrow).Masson trichrome-stained sections (n = 5) show no fibrosis (F0) in low power.High power view of one portal tract shows scanty blue-stained fibrous tissue around portal vessels.Scale bar for low power (× 100) is 200 μM and for high power (× 400) is 50 μM
Fig. 6
Fig. 6 Histopathological assessment of Liver Fibrosis and Metavir activity.Liver sections stained with H&E to assess liver activity, and sections stained with Masson trichrome to assess the degree of fibrosis, n = 7. H&E high power (× 400) shows macrovesicular steatosis (arrows) and apoptotic cells (arrow heads), as well as foci of inflammation (stars).In Masson trichrome stained sections, P portal tract, C central vein, arrow fibrous bridging.Scale bar for low power (× 100) is 200 μM and for high power (× 200) is 100 μM and (× 400) is 50 μM
Fig. 8
Fig. 8 Immunohistochemical assessment of α-SMA in liver tissues.A Assessment of α-SMA positive cells in α-SMA immune stained liver section, n = 3. Blood vessels in portal tracts are stained positive as an internal positive control.Arrows are pointing at the positively stained HSCs/ myofibroblasts.They are seen as long flat cells in between hepatocytes and within portal tracts.Scale bar for low power (× 100) is 200 μM and for high power (× 400) is 50 μM.B Number of α-SMA positive cells / 5 HPFs.Values are represented as mean ± SEM. *p < 0.05, ***p < 0.001, ****p < 0.0001 compared to the hepatotoxicity group.###p < 0.001, ####p < 0.0001 compared to the control group | 2024-05-27T13:14:12.550Z | 2024-05-27T00:00:00.000 | {
"year": 2024,
"sha1": "179979f84610a45ebb3afb405cf8ab205f51ef34",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "58a5bb8b44af06d33a6c998201b1856194709ab7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54035763 | pes2o/s2orc | v3-fos-license | Implementation of an eco-hydrological classification in Chilean rivers
Concern has grown in Chile to protect and conserve aquatic ecosystems, taking into account the high degree of habitat degradation. As a first step towards the development of conservation plans, it is necessary to classify these ecosystems in order to learn about them and understand their different types and functions. Considering that the hydrological regime is the main factor in the composition of river ecosystems, an Eco-Hydrological Classification of Chilean Rivers (REC-Chile) has been developed. REC-Chile is based on the hierarchical superposition of the controlling factors of the hydrological pattern in Chile. REC-Chile is a multi-scale classification system, allowing different river patterns to be represented at different spatial scales according to the selected controlling factors. The typology of a river segment is represented as a 6-digit code, in which the position of the digit represents the controlling factor and the digit value represents the factor category. This hierarchical configuration and the assignment of geography-independent classes make REC-Chile an easy way of interpreting the hydrological classes. Given the flexibility provided by the multi-scale nature of REC-Chile, and the simplicity in the interpretation of the classes, it is expected that REC-Chile will become a very useful and suitable tool for developing conservation plans for aquatic ecosystems.
INTRODUCTION
Concern over the sustainable management of continental waters bodies is constantly increasing worldwide, in both the public and private ambit.Chile is also involved in this process, since the river ecosystems distributed over the whole length of the country have extraordinary value due to the high degree of endemicism of the species they contain, among others.About aquatic vertebrates, the endemicism reported to date is 81% for fi sh (Habit et al. 2006) and 60.7% for amphibians (Ortiz & Díaz-Páez 2006).Among aquatic invertebrates an endemicism of 91.7% has been reported for gastropods (Valdovinos 2006), 30.8% for bivalves (Parada & Peredo 2006), 74.4% for malacostracans (Jara et al. 2006), 56% for Ephemeropteres (Camousseight 2006) and 57% for Plecopteres (Vera & Camousseight 2006).Furthermore, given the increasing deterioration and/or fragmentation of habitats, freshwater species today present serious conservation problems (Habit et al. 2006, Ortíz & Díaz-Páez 2006, Parada & Peredo 2006, Valdovinos 2006, Perez-Losada et al. 2002, Bahamonde et al. 1998).
To date, little information exists about conservation measures for Chilean freshwater species (Peredo et al. 2006, Parada & Peredo 2005, Peredo et al. 2005, Habit et al. 2002) and their habitats (Habit et al. 2006). Likewise, most of these studies lack accuracy on local distribution (Habit et al. 2006, Parada & Peredo 2006, Valdovinos 2006), which is indispensable information for carrying out management and conservation programmes for Chilean freshwater species and river ecosystems.
The Chilean Government has developed the National Strategy for Hydrographic Basins (Estrategia Nacional de Gestión Integrada de Cuencas Hidrográfi cas -ENGICH) as an instrument for the protection and conservation of aquatic ecosystems; its role is to contribute to the sustainable use of water resources, harmonizing the protection of ecosystems with the availability of the resource (CONAMA 2007).To ensure the appropriate implementation of ENGICH, all the water bodies existing in the country must be studied and classifi ed.
Landscape classifi cations have been developed for management purposes, as tools for understanding ecosystems.Such classifi cations allow data interpretation, inventories development, the extrapolation of information from specifi c sites to larger areas, development of objects and standards, etc. (Bailey 2009;Omernik & Bailey 1997).
There are a variety of methods for developing a classifi cation of natural aquatic systems, including taxonomic classifi cation and those based on regionalization.Taxonomic classifi cations are not necessarily based on the physical or ecological processes which govern aquatic ecosystems.Methods based on regionalization, also known as ecoregions, respond to a physical focus which considers the control variables of fl uvial processes and their patterns (González del Tánago & García de Jalón 2006;Bailey et al. 1978).The ecoregions are homogeneous zones with regard to certain characteristics or parameters on a determined spatial scale (Snelder et al. 2005;Omernik & Bailey 1997;Bailey et al. 1978).Snelder and Biggs (2002) pointed out that these ecoregions are unable to represent the longitudinal gradients in a river ecosystem, which were synthesized in the River Continuum Concept (Vannote et al. 1980); therefore the use of such ecoregions for river classifi cation are limited.
Since the 1980s, several authors have addressed landscape classification by means of hierarchical organization systems for rivers. They recognise that rivers belong to the basin that feeds them; therefore, rivers are expected to be influenced by watershed characteristics, accepting that ecological processes in a river depend on physical factors which occur at several scales (González del Tánago & García de Jalón 2006, Snelder & Biggs 2002, Montgomery & Buffington 1997, Omernik & Bailey 1997, Frissell et al. 1986). Snelder and Biggs (2002) developed the River Environment Classification (REC) to classify the rivers of New Zealand based on the following premises: a) ecological patterns depend on a number of factors associated with the regional scale of the various physical processes; b) the ecological characteristics of rivers respond to fluvial processes; and c) classes are assigned to each river segment independently of geography. In the REC, the typology of each river segment is determined by the hierarchical superposition of controlling factors, which are the principal causes of the spatial variation of the hydrological pattern at a given scale (Snelder et al. 2005, Snelder & Biggs 2002).
There are many eco-hydrological similarities between the rivers of Chile and those of New Zealand. Hydrological regime, river geomorphology, and the presence of certain fish taxa, e.g. Galaxias maculatus and Geotria australis (Vila et al. 2006, Dyer 2000, Mc Dowall 2000), as well as numerous species of macroinvertebrates (Winterbourn 1981), are among the main similarities that we found. Due to this, we hypothesized that these similarities would enable the REC to be adapted for Chile, taking into account the existing geomorphological and climatic differences, and expecting that it would constitute a management tool for Chilean river ecosystems.
The present work has two objectives: to implement a REC in Chile (REC-Chile), and to interpret some of its classes for its application in management and planning of river ecosystems.
REC-CHILE CONTROLLING FACTORS
The controlling factors selected for the implementation of REC-Chile are based on those used in the REC developed by Snelder and Biggs (2002) for New Zealand, adapted to the climatic and environmental conditions of Chile. The controlling factors selected were: Climate, Source of Flow, Geology, Catchment Relative Position, Land Use and Reach Slope. Each of these is subdivided into categories according to the meaning of the controlling factor.
In the hierarchy of REC-Chile, Climate is the only macro-scale controlling factor. Principally, it determines hydrological characteristics such as flow magnitude and the frequency of flooding and low-flow periods (Snelder et al. 2005; Snelder & Biggs 2002). The categories used were based on Blair's climate classification: Arid, Semi-Arid, Mid-Wet, Wet and Very Wet (Heras 1973).
The Source of Flow factor acts at a mesoscale level, indicating whether the stream flow originates from rainfall, snowmelt, glaciers or a combination of these (Fleming 2005, Snelder et al. 2004). This controlling factor originates from the topography of the watershed (Snelder et al. 2002), considering how this impacts the flow regime. Thus, this controlling factor is related to the seasonality of flow regimes and to sediment supply and transport processes at a watershed scale (Snelder & Biggs 2002). Precipitation in Chile is mainly related to orographic effects, since the Andes mountain range acts as a barrier to the moist winds coming from the Pacific Ocean (Villagrán & Hinojosa 2005; Rutllant 2004). This makes it possible to associate the altitude of a watershed with its precipitation as a proportion of the total precipitation of the whole basin.
The Source of Flow factor includes five categories of topographical origin (Low Elevation, Valley, Mid-Mountain, Mountain and Eternal Snow) and three categories that, owing to their physical characteristics, change the flow regime (Lakes, Glaciers and Regulations). The categories of topographical origin were assigned using a relationship between catchment mean altitude and its accumulated precipitation percentage with respect to the total rainfall of the basin. Thus, the Low Elevation category refers to watersheds at 300 masl or lower. This category presents low accumulated precipitation, less than 25% of the total rainfall of the basin, and extreme seasonal variations between summer and winter. Valley corresponds to a watershed located between 300 and 1000 masl, with percentages of accumulated precipitation averaging between 25% and 40% of the total rainfall of the basin. This category presents a higher mean annual flow and more moderate seasonal variations than Low Elevation. Mid-Mountain refers to watersheds between 1000 and 2400 masl, with high percentages of accumulated precipitation (between 40 and 70%) incorporating the contribution of snow-melt during spring-summer; this leads to the formation of a higher peak flow in winter (high rainfall period) and another, smaller peak in spring-summer (dry period). Mountain corresponds to watersheds between 2400 and 4200 masl, characterised by a high contribution of snow-melt and small amounts of precipitation, resulting in a small peak in the flow regime in winter and another, larger peak in early summer. Eternal Snow corresponds to watersheds above 4200 masl; they receive only snow-melt, presenting high flows in summer and low flows in winter.
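As an illustration of the assignment rule for the topographical categories, the sketch below maps a catchment's mean altitude onto the elevation bands defined above; the function name, the handling of exact boundary values, and the example altitudes are ours, not part of REC-Chile.

```python
def source_of_flow_category(mean_altitude_masl: float) -> str:
    """Assign the topographical Source of Flow category from catchment mean altitude (masl)."""
    if mean_altitude_masl <= 300:
        return "Low Elevation"
    elif mean_altitude_masl <= 1000:
        return "Valley"
    elif mean_altitude_masl <= 2400:
        return "Mid-Mountain"
    elif mean_altitude_masl <= 4200:
        return "Mountain"
    else:
        return "Eternal Snow"

for altitude in (150, 750, 1800, 3500, 5100):   # example catchments
    print(altitude, "->", source_of_flow_category(altitude))
```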
Among the flow-modifier categories, the Glacier category is assigned to any river reach whose watershed contains a glacier covering more than 20% of its area. Lakes refer to water bodies where the flow regime is modified by significant water storage, usually reducing and delaying flood events. To evaluate whether such a modification is produced, the Lake Index (LI) is calculated (Equation 1), which compares the precipitation on the lake with that over the basin which drains into the river reach.
where A_L is the lake area, A_LW is the lake watershed area, V_LW is the mean annual precipitation in the lake watershed, and V_W is the annual precipitation of the basin.
The LI index evaluates whether or not the influence of a water body represents a significant modification of the hydrograph of a watershed; it was proposed by Snelder and Biggs (2002) and modified later (T. Snelder, pers. comm.). To determine the threshold value of the LI for Chilean rivers, a comparison was made between the annual hydrographs of gauging stations with the same REC-Chile code, with and without the presence of a lake upstream. The critical value was estimated at 0.020, meaning a significant regulation effect by the lake at values above this threshold.
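The two flow-modifier rules described above (a glacier covering more than 20% of the catchment, or a lake whose LI exceeds the 0.020 threshold) can be expressed as a simple check. The sketch below takes a pre-computed LI value as input, since Equation 1 itself is not reproduced in this text; the function and variable names, and the precedence between the two rules, are our assumptions.

```python
from typing import Optional

def flow_modifier(glacier_area_fraction: float,
                  lake_index: Optional[float],
                  li_threshold: float = 0.020) -> Optional[str]:
    """Return the Source of Flow modifier category for a reach, or None to keep
    the topographical category. Precedence between the two rules is assumed."""
    if glacier_area_fraction > 0.20:      # glacier covers more than 20% of the catchment
        return "Glacier"
    if lake_index is not None and lake_index > li_threshold:
        return "Lakes"                    # lake storage significantly modifies the hydrograph
    return None

print(flow_modifier(0.25, None))     # Glacier
print(flow_modifier(0.05, 0.034))    # Lakes
print(flow_modifier(0.05, 0.010))    # None
```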
Given that the categories of the Source of Flow factor describe hydrological patterns under a natural regime, the category Regulations was incorporated to take into account man-made regulation structures.
The Geology factor is related, at the mesoscale level, to the geochemical characteristics of the water, which are dominated by the geological properties of the aquifer (Herrera et al. 2006; Gonzalez et al. 1999). An important water-quality parameter in Chilean rivers is the dissolved salt content, due to the marine and/or volcanic origin of the basins (Vila & Molina 2006), which shows a pronounced north-south gradient. This factor defines eight categories, as follows: Alluvial, Plutonic Rocks, Volcanic Rocks, Sediment and Mix of Sediment and Volcanic Rocks (Sedim and Mix Sed-Volc Rocks), Sedimentary Rocks, Fractured Volcanic Rocks and Carbonated Rocks. A category named Not Recognised was added for cases where the geological origin cannot be established.
Catchment Relative Position is the fourth controlling factor. It relates the catchment area of a river reach to the whole basin, as a proportion, and describes aspects of the hydrological pattern such as the magnitude of the mean flow, the intensity and attenuation of flood flows, and the flux of sediments. The assignation rule was determined according to the relative position of the catchment within the basin, defining the following categories: Headwater, Upper Reach, Middle Reach and Low Reach. The categories Endorreic and International (i.e., transboundary rivers) were assigned manually.
The Land Use factor is related to mesoscale processes such as initial interception and evapotranspiration; it also controls physical-chemical aspects of water quality resulting from soil leaching into, or material carried down by, the river (Sliva & Dudley 2001; Johnson et al. 1997). Nine categories were defined: City Areas, Agriculture, Prairie and Shrubs, Woods, Areas without Vegetation, Reservoirs, Snow and Glaciers, and Areas without Information.
The last controlling factor is Reach Slope. It describes morpho-hydraulic patterns at the microscale level, such as local sediment transport, river-bed erosion, mean flow velocity and the influence of bank conditions. The assignation rule is determined according to the slope along the river reach, in three categories: Low, Medium and High.
REC-CHILE IMPLEMENTATION
REC-Chile was implemented through a Geographical Information System (GIS) on rivers represented as a digital hydrographic network at 1:250,000 scale. The information used to develop the classification, as well as the river network, was provided by the Chilean water authority (Dirección General de Aguas, DGA). This information includes precipitation contours, potential and actual evapotranspiration contours (1:250,000), elevation contours (1:250,000), watershed areas (1:50,000), basin areas (1:50,000), geology (1:50,000) and land use (1:50,000).
The REC-Chile mapping was done according to the spatial scale of each controlling factor (top-down scaling), following the assignation rules defined for each category. For the Climate, Source of Flow and Geology factors, the River Continuum concept (Vannote et al. 1980) was considered, propagating downstream the characteristics of the most influential categories. For the Climate factor, the four categories that dominate the flow magnitude were propagated, in the following order: Semi-Arid, Mid-Wet, Wet and Very Wet. For the Source of Flow factor, the seasonal characteristics of the hydrograph were propagated: first the topographical categories (Low Elevation, Valley, Mid-Mountain, Mountain and Eternal Snow) and then the manually assigned categories (Lakes, Glaciers and Regulations). For the Geology factor, the propagation criterion was based on the dissolved salt contribution of the lithology of the different soil types, with the following propagation order: Plutonic Rocks, Volcanic Rocks, Sediment and Mix Sed-Volcanic Rocks, Sedimentary Rocks, Fractured Volcanic Rocks and, finally, Carbonated Rocks.
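One plausible reading of this propagation rule is a pass over the drainage network in which each reach inherits the most influential category found upstream of it. The sketch below illustrates this for the Climate factor, using the propagation order stated above (Semi-Arid, Mid-Wet, Wet, Very Wet, from least to most influential); the network representation and function names are assumptions made for illustration, not the published GIS implementation.

```python
from typing import Dict, Optional

# Precedence for downstream propagation of Climate categories (least to most influential),
# following the order stated in the text; categories not listed (e.g. Arid) are not propagated.
CLIMATE_PRECEDENCE = {"Semi-Arid": 1, "Mid-Wet": 2, "Wet": 3, "Very Wet": 4}


def propagate_climate(local_category: Dict[str, str],
                      upstream_of: Dict[str, Optional[str]]) -> Dict[str, str]:
    """Propagate the most influential Climate category downstream.

    local_category: reach_id -> locally assigned Climate category.
    upstream_of: reach_id -> id of the reach immediately upstream (None for headwaters).
    Assumes, for simplicity, that each reach has a single upstream reach.
    Returns reach_id -> effective category after propagation.
    """
    effective: Dict[str, str] = {}

    def resolve(reach_id: str) -> str:
        if reach_id in effective:
            return effective[reach_id]
        category = local_category[reach_id]
        upstream = upstream_of.get(reach_id)
        if upstream is not None:
            upstream_category = resolve(upstream)
            # keep whichever category ranks higher in the propagation order
            if CLIMATE_PRECEDENCE.get(upstream_category, 0) > CLIMATE_PRECEDENCE.get(category, 0):
                category = upstream_category
        effective[reach_id] = category
        return category

    for reach in local_category:
        resolve(reach)
    return effective


# Toy network: headwater r1 (Very Wet, Andes) -> r2 (Wet) -> r3 (Mid-Wet, coast)
reaches = {"r1": "Very Wet", "r2": "Wet", "r3": "Mid-Wet"}
upstream = {"r1": None, "r2": "r1", "r3": "r2"}
print(propagate_climate(reaches, upstream))
# -> {'r1': 'Very Wet', 'r2': 'Very Wet', 'r3': 'Very Wet'}
```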
REC-Chile was developed for basins located between 18°S and 43°S. The classification was not implemented in the more southerly basins (43°S to 55°S) due to the lack of information needed to define the controlling factors.
Once REC-Chile had been designed and implemented, two simple applications were carried out to show how REC-Chile classes can be interpreted to determine the hydrological pattern and the natural geochemical quality of the waters. In the first case, basic aspects of the mean annual hydrograph of 4 gauging stations under natural or slightly modified regimes were compared with their respective REC-Chile codes, obtained by the hierarchical superposition of Climate and Source of Flow (second level). In the second case, the mean electrical conductivity (EC) of 67 river segments distributed throughout Chile was compared with their respective REC-Chile codes at the third level, obtained by the hierarchical superposition of Climate, Source of Flow and Geology. The data for the annual hydrographs and the EC were provided by the DGA.
REC-CHILE CLASSIFICATION
Table 1 shows the REC-Chile controlling factors with their categories, assignment rules and the code associated with each category.
The resulting classification has a tree structure in which each category of controlling factor i is subdivided among the categories of controlling factor i + 1. The classification levels are defined according to the hierarchical superposition of the controlling factors.
The first classification level corresponds to the first controlling factor (Climate); the second classification level is the hierarchical superposition of the first two factors (Climate and Source of Flow); the third level is defined by adding the Geology factor to the second level, and so on down to the sixth and last classification level, defined by the hierarchical superposition of all the REC-Chile controlling factors. The sixth classification level is expressed as a six-digit code: the digit position represents the classification level and the digit value represents the category of the controlling factor at that level.
The classes are defined according to the hierarchical superposition of the factor categories at the selected level. Thus, for the first level the classes are the Climate categories; for the second level the resulting classes are the Climate categories combined with the Source of Flow categories, i.e., Arid is combined with the categories Low Elevation, Valley, Mid-Mountain, Mountain, Eternal Snow, Lakes, Regulations and Glacier. This procedure is repeated for the categories of each controlling factor up to the Reach Slope factor, creating all classes at the sixth level of classification.
The potential number of classes at any classification level depends on the number of categories at that level and the number of classes at the previous levels; the potential number of classes is therefore given by the product of the numbers of categories of the controlling factors involved. For example, the second level potentially has 40 classes (5 × 8) and the third level 320 classes (5 × 8 × 8). REC-Chile has 46,080 potential classes. However, not all the potential classes occur physically in the REC-Chile domain, in which 3049 classes exist over a total area of approximately 515,000 km².
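The multiplicative rule can be written down directly, as in the sketch below; only the counts for the first three factors (5, 8 and 8, giving 40 and 320 classes) are taken from the text, and the helper name is illustrative.

```python
from math import prod


def potential_classes_per_level(categories_per_factor):
    """Cumulative number of potential classes at each classification level:
    the product of the category counts of all factors up to that level."""
    return [prod(categories_per_factor[: i + 1]) for i in range(len(categories_per_factor))]


# Category counts for Climate, Source of Flow and Geology are those cited in the text (5, 8, 8);
# counts for the remaining factors would be appended in the same way.
counts = [5, 8, 8]
print(potential_classes_per_level(counts))  # -> [5, 40, 320]
```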
Because of the different spatial scales of the controlling factors, the classes at the Climate level define larger homogeneous areas than the classes at the Source of Flow level, which generates differences in the spatial scale of the resulting patterns. Figure 1 shows the classification at the first two levels for two basins located in northern Patagonia (the Imperial and Toltén Rivers), illustrating the downstream propagation of the most influential categories. At the first level (Fig. 1b), the propagation of the Very Wet (Vw) category, arising from the rains that fall in the Andes mountains, can be observed. Likewise, the second level (Fig. 1c) shows the propagation of the Very Wet/Mid-Mountain class (Vw/Mm) in the Imperial River and of the Very Wet/Lakes class (Vw/La) in the Toltén River. This means that the hydrograph of the Imperial River should show a seasonal pattern broadly similar to the Andean precipitation regime, whereas the hydrograph of the Toltén River should be influenced by the damping effect of Villarrica Lake.
REC-CHILE CLASS INTERPRETATIONS
By combining certain controlling factors it is possible to interpret some eco-hydrological patterns of a river system.
The second classification level allows the magnitude of the mean annual flow of a river reach under its natural regime, as well as its seasonal variation, to be determined; a descriptive interpretation of four gauging stations, based on their second-level REC-Chile codes, is presented together with Table 2.
The mean annual flows of each gauging station (Table 2) and their mean annual hydrographs (Fig. 2) are in agreement with the classification. For all the stations, the shape of the hydrograph is similar to that described by the REC-Chile classes, as are the seasonal variations.
To characterize the natural geochemical quality of the water, the following aspects must be considered: local lithology, climate, vegetation and the residence time of the water in the aquifer, which is itself a function of the climate (Drever 2002). Another important aspect is the composition of the water flowing into the system (Allan & Castillo 2007) and the origin of the water circulating in the river. The controlling factors used to characterize the natural geochemical quality of the water are Climate, Source of Flow and Geology. The third level of classification therefore makes it possible to predict the mean Electrical Conductivity (EC, μS/cm) for each class.
The Arid category of the Climate factor is expected to have a high EC value, and as precipitation rises the EC diminishes; the Very Wet category should therefore have the lowest EC. The categories of the second controlling factor, Source of Flow, have less effect on the EC than the Climate factor. Categories with a rainfall origin (Low Elevation, Valley and Mid-Mountain) present a lower EC than those fed by snow-melt (Mountain, Eternal Snow and Glacier), owing to leaching from volcanic soils. The Lakes category may present lower values due to the physical-chemical processes occurring in lakes. Finally, the categories of the third factor (Geology) present EC values according to the ions associated with each type of lithology: the Alluvial category should present the lowest EC values, followed by Plutonic Rocks, Volcanic Rocks, Sediment and Mix Sed-Volcanic Rocks, and finally Sedimentary Rocks. A special case is the Fractured Volcanic Rocks category, since fracturing permits a greater leaching of ions. Figure 3 shows the EC values of the sampling points grouped by their third-level REC-Chile code. These values appear to be in agreement with the interpretations made from the REC-Chile classes: the classes containing the Arid category (Ar/Es/Mx and Ar/Es/Vf) present the highest values, and these diminish as the level of precipitation increases. The effect of the Source of Flow factor is also observed, since within the Very Wet category the Glaciers class (Vw/G/Mx) presents the highest concentration, while the Lakes class (Vw/La/Mx) has a lower EC. Finally, within the Ar/Es class, the Fractured Volcanic Rocks category shows a higher concentration than rocks of mixed volcanic and sedimentary origin.
DISCUSSION
The results of this study show that it is possible to implement the REC under the particular environmental conditions of Chile. The controlling factors and categories developed for the country appear to represent suitably its varied climatic, hydrological and geological conditions. For example, using REC-Chile at the second level, i.e. combining the Climate and Source of Flow factors, the hydrological pattern of a river segment can be described approximately. This allows the magnitudes of the mean annual flow of river segments belonging to different classes to be compared, and a first approximation of the annual hydrograph to be estimated, as shown in the present study.
Furthermore, at the third classification level a good characterization of the geochemical quality of the water can be achieved. This agrees with previous studies indicating that precipitation, the lithology of the basin, the origin of the flow and the residence time are the principal factors involved (Allan & Castillo 2007; Drever 2002; Custodio & Llamas 1976).
It must be stressed that REC-Chile incorporates the River Continuum Concept (Vannote et al. 1980) and represents longitudinal change along the river as a result of the propagation of upstream characteristics in the downstream direction.
The development of REC-Chile in a GIS environment allows the river segment data to be visualized and handled easily. One great advantage of this tool is the possibility of managing information from the controlling factors selected by the user according to the objectives and scale of the work. This gives it great flexibility for obtaining patterns of the distribution of the hydrological and physical-chemical variables that determine the ecological characteristics of river segments.
The REC-Chile classification is able to classify rivers at a microscale equivalent to the river reach (of kilometre order). This prevents REC-Chile from classifying rivers according to fluvial processes that occur at a smaller spatial scale. If a classification at a finer scale than the river reach were required, it would be necessary to reduce the classification unit (for example, to the mesohabitat unit). Any modification of REC-Chile, whether by adding controlling factors and/or categories, will increase the total number of classes.
The joint use of REC-Chile and a geodatabase of aquatic communities, e.g. macroinvertebrates or fish, brings great benefits for the management and planning of aquatic ecosystems (Peredo-Parada et al. 2009a). Among other things, REC-Chile will enable spatial distribution patterns to be estimated for species with scarce information, because statistical techniques can relate the physical characteristics of river segments to species presence/absence data (Peredo-Parada et al. 2009b; Snelder et al. 2004). It is also helpful in the design of monitoring networks, to ensure a suitable distribution of sampling points across an area of interest (Hawkins et al. 2000), and in investigating habitat suitability for aquatic species at the mesoscale level.
For a landscape classification to become a useful aquatic management tool, it must be developed by consensus among all the sectors involved in the management of aquatic ecosystems. Furthermore, the landscape classification must have a number of classes that facilitates handling and decision making. The great number of REC-Chile classes goes against this premise and could result in a complex management framework; it is therefore necessary to reduce the number of classes. This reduction must be carried out by evaluating the classification strength for different numbers of classes, together with a physical interpretation of the classes. These two analyses will allow classes with similar interpretations to be grouped without losing the correct classification structure. Owing to the tree configuration of REC-Chile, achieving a large reduction in the total number of classes requires decreasing the number of categories at the higher levels. Another important application of REC-Chile is the identification of river reaches in natural condition, which could be used as references in restoration planning (Hawkins et al. 2000). If there are no reaches in natural condition, REC-Chile allows the maximum ecological potential within a given class to be assessed using various indexes, e.g. freshwater fish indexes (Noble et al. 2007; Schmutz et al. 2007), macroinvertebrate indexes (Figueroa et al. 2003, 2007) and riparian vegetation indexes (Palma & Figueroa 2009; Munné et al. 1998, 2003). With the application of such indexes, REC-Chile could make it possible to determine the degree of hydrologic and ecological alteration of river segments relative to reference sites. It can also be used to extrapolate ecological information within classes (Omernik & Bailey 1997). These applications are useful for prioritizing and designing fluvial restoration activities.
CONCLUSIONS
The REC-Chile classification system is an adaptation of the River Environmental Classification (REC) developed by Snelder & Biggs (2002) to the climatic, topographic and geological conditions of Chile. It is a tool for river classification based on a digit code formed by the hierarchical superposition of six factors that control the hydrological pattern (Climate, Source of Flow, Geology, Catchment Relative Position, Land Use and Reach Slope). The classification of a river reach is represented as a 6-digit code, in which the digit position is the classification level and its value is the category of the factor at that level.
REC-Chile offers various advantages with respect to an ecoregion approach. First, it allows the various eco-hydrological patterns of Chilean rivers to be described by the superposition of the appropriate controlling factors, depending on the objective of the classification, thus allowing study at various spatial scales. Second, it is able to manage rivers as hydrological networks considering their longitudinal continuity, allowing changes to be detected from one watershed to another even when they are geographically close, thus providing a more exact characterization of Chilean rivers. Third, it is implemented in a GIS environment, facilitating geographical interpretation and changes from one working scale to another.
The easy interpretation of the REC-Chile classes at different classification levels enables a simple assessment of different fluvial patterns; for example, at the second classification level the mean annual hydrograph of river segments can be described, and at the third classification level aspects of the natural geochemical quality of the water can be compared.
Finally, we expect the REC-Chile classification system to become a useful tool for the conservation and management of river ecosystems in Chile, supporting the determination of their ecological potential and the selection of suitable methods for assigning environmental flow regimes to different river types.
FIGURE 1. Geographic range where REC-Chile was implemented (1.a). The upper figure (1.b) shows the first REC-Chile level (Climate) in the Imperial and Toltén River basins. The lower figure (1.c) shows the second REC-Chile level (Climate and Source of Flow) in the same basins. For more detail on the interpretation of REC-Chile classes see Table 1.
Reaches in classes that include the Arid or Semi-arid categories are expected to present a low mean annual flow. Within these categories, those where rainfall is the main source of flow (Low Elevation and Valley) tend to show an intermittent or ephemeral flow regime, with large seasonal variations between summer (minimum flows) and winter (high flows). Reaches in the Mid-Mountain category are expected to present intermittent flows with bimodal hydrographs, as a result of the combined contribution of precipitation and snow-melt. The Mountain and Eternal Snow categories show moderate seasonal variation around the mean annual flow, with maximum flows occurring during snow-melt and a minimal base flow during the rest of the year; likewise, the Glacier category is expected to show flows that follow the melt rate of the glaciers. The Lakes category is not found in combination with the Arid and Semi-arid categories. Reaches in the Mid-Wet category tend to have higher mean annual flows than those of the two previous categories. The categories related to rainfall usually indicate permanent flow with seasonal variations corresponding to the rainfall regime; the Mid-Mountain category is related to bimodal annual hydrographs, but with a higher winter peak flow than in the Arid and Semi-arid categories, and the Mountain, Eternal Snow and Glacier categories present high flows in late spring and early summer. The combination of the Mid-Wet and Lakes categories does not exist. For the Wet and Very Wet categories, higher mean annual flows can be expected. The presence of rainfall categories is associated with continuous flows and an important increase at peak flow periods. The Mid-Mountain and Mountain categories will present bimodal hydrographs; in the case of Mid-Mountain the higher peak flow occurs in winter due to rainfall, whereas the opposite is expected for the Mountain category owing to the contribution of snow-melt in summer. The Eternal Snow and Glacier categories behave similarly to the Mountain category. The Lakes category implies a damping of the seasonal flow variations and an alteration of flow timing, with extreme flows expected in late winter and late summer. Classes combining the Very Wet and Eternal Snow categories do not exist.
Table 2 shows the REC-Chile codes, at the second level of classification, for four gauging stations distributed along the country. Based on these codes, a descriptive interpretation of the mean annual hydrograph is made. The Salado River station belongs to the Arid and Eternal Snow class (Ar/Es); for this class the hydrological pattern may be expected to present low mean annual flows, with maximum mean monthly flows in the snow-melt season (January to March) and minimum mean monthly flows in the cold season (June to August). The Choapa River station belongs to the Mid-Wet and Mountain class (Mw/M); this station may be expected to have higher mean annual flows than the Salado River station, and its hydrograph should present two peaks, one at the start of the snow-melt season (November and December) and another, smaller one in the rainy season (June to August). The Aconcagua station falls within the Wet and Glacier class (W/G), where the mean annual flow should be greater than at the previous station; the hydrograph should show a maximum mean monthly flow in early summer (December and January) and relatively low flows during the rest of the year. Finally, the Itata station belongs to the Very Wet and Mid-Mountain class (Vw/Mm), where the mean annual flow should therefore be higher than at all the other stations; the maximum mean monthly flows should occur in the rainy season (June to August) and the minimum mean monthly flows in the summer months (January to March).
TABLE 2. Gauging stations and the respective REC-Chile codes of the river segments used to interpret the hydrological pattern.
FIGURE 2. Mean annual hydrograph of four gauging stations associated with four REC-Chile codes. The mean annual flow of each station is shown in parentheses. For the interpretation of the REC-Chile codes see Table 1. The Y axis (Qm/Q50) represents the ratio between the mean monthly discharge and the annual discharge with a 50% exceedance probability. For the association of each REC-Chile code with its gauging station see Table 2.
FIGURE 3. Mean values, 25% and 75% percentiles, and maximum and minimum values of Electrical Conductivity (μS/cm) for six REC-Chile classes. The numbers on the X axis indicate the codes of the REC-Chile classes at the third hierarchical level, with the first digit representing the Climate factor category, the second digit the Source of Flow factor category and the third digit the Geology factor category (see Table 1).
Updates on the Physiopathology of Group I Metabotropic Glutamate Receptors (mGluRI)-Dependent Long-Term Depression
Group I metabotropic glutamate receptors (mGluRI), including mGluR1 and mGluR5 subtypes, modulate essential brain functions by affecting neuronal excitability, intracellular calcium dynamics, protein synthesis, dendritic spine formation, and synaptic transmission and plasticity. Nowadays, it is well appreciated that the mGluRI-dependent long-term depression (LTD) of glutamatergic synaptic transmission (mGluRI-LTD) is a key mechanism by which mGluRI shapes connectivity in various cerebral circuitries, directing complex brain functions and behaviors, and that it is deranged in several neurological and psychiatric illnesses, including neurodevelopmental disorders, neurodegenerative diseases, and psychopathologies. Here, we will provide an updated overview of the physiopathology of mGluRI-LTD, by describing mechanisms of induction and regulation by endogenous mGluRI interactors, as well as functional physiological implications and pathological deviations.
Canonical mGluRI signaling is driven by the G q /G 11 -dependent activation of phospholipase C β (PLCβ), inducing the hydrolysis of phosphoinositides and generation of inositol 1,4,5-trisphosphate (IP 3 ) and diacylglycerol (DAG). This pathway leads to intracellular Ca 2+ mobilization from internal stores and activation of protein kinase C (PKC) [1]. In addition, mGluRI can activate several other pathways, including G protein-independent mechanisms, by means of the interaction with specific adaptor proteins that recruit distinct signaling components [1]. G protein-independent mechanisms mainly rely on β-arrestin binding, favored by receptor phosphorylation by G-protein-coupled receptor kinases (GRKs). Globally, mGluRI can activate a multifaceted list of effectors, including phospholipase D (PLD) and protein kinase pathways such as mitogen-activated protein kinase/extracellular receptor kinase (MAPK/ERK) and the mammalian target of rapamycin (mTOR) pathway (Figure 1).
Figure 1. Group 1 metabotropic glutamate receptors (mGluRI) signaling. Scheme of the principal mGluR1 and mGluR5 signaling pathways, showing that G q/11 -dependent activation of phospholipase C β (PLCβ) mediates phosphatidylinositol hydrolysis with the generation of diacylglycerol (DAG) (which activates protein kinase C, PKC) and inositol-1,4,5-trisphosphate (IP 3 ) (which fosters Ca 2+ release from internal stores by acting on IP 3 R receptors on the endoplasmic reticulum). mGluR1/5, through G q/11 -dependent mechanisms, also modulates ion channels, such as transient receptor potential channels (TRPCs), voltage-gated Ca 2+ channels (VGCC), and different types of K + channels (K v or SK), thus affecting neuronal excitability. Additional G protein-independent mechanisms involve the recruitment of β-arrestin or other scaffolding/adaptor proteins such as Homer long isoforms, which couple mGluRI to further effectors and foster activation of signaling pathways such as the phosphatidylinositol 3-kinase/Akt/mammalian target of rapamycin (PI3K-Akt-mTOR) pathway or the MEK1/2-ERK1/2 pathway, both involved in mechanisms promoting protein synthesis.
mGluR1 and mGluR5 are mainly localized in the postsynaptic densities (PSD) in perisynaptic zones [3][4][5], where they form multiprotein complexes by interacting with other membrane proteins and downstream effectors. Different intracellular scaffolding proteins interacting with mGluR1 and mGluR5 have been identified. An important group is constituted by the family of Homer proteins, including the long isoforms (Homer 1b, 1c, 2, 3), able to form large multimeric assemblies in the PSD, and the shorter isoform Homer 1a, which does not form protein complexes but antagonizes the connections made by the longer Homer isoforms [6,7]. The longer Homer isoforms link mGluR1 or mGluR5 to their principal signaling effectors, such as PLC, PI3K and the PI3K enhancer PIKE-L, as well as to the IP 3 receptor and various classes of ion channels, including transient receptor potential-like channels (TRPCs), voltage-gated calcium channels (VGCC) and M-type potassium channels [8][9][10]. Moreover, Homer long isoforms, by linking other scaffolding proteins (most notably PSD-95 and Shank), directly associate mGluR1/5 with other membrane receptors, including NMDARs.
Besides the Homer family, other identified mGluR1- and mGluR5-interacting proteins include tamalin, norbin, preso-1, calmodulin, neuronal calcium-binding protein 2 (NECAB2), calcineurin inhibitor protein (CAIN), the ubiquitin ligase Siah-1A, various protein kinases, including PKC, GRK2 and CaMKII, and cytoskeletal components, such as 4.1G and Filamin-A [8][9][10]. mGluR1 and mGluR5 also bind intracellular regulatory proteins, such as GPCR kinases (GRKs) (GRK2 and GRK3 subtypes for mGluR5, and GRK2, GRK4, and GRK5 for mGluR1) [11][12][13], which control mGluRI internalization by phosphorylation [14], or members of the family of regulators of G-protein signaling (RGS) (such as RGS-4), which increase the GTPase activity of Gα q and uncouple G protein-linked effectors, thus switching off mGluRI signaling [15].
Based on common signaling mechanisms, mGluR1 and mGluR5 have often been regarded as interchangeable, but it is now overt that they can either have separate functions [16,17] or be cross-talking, by acting occasionally in a cooperative [18] or otherwise antagonistic manner [19,20]. mGluR1 and mGluR5 are active as homodimers, i.e., mGluR1-mGluR1 and mGluR5-mGluR5, and additionally the two subtypes can be associated in the heterodimer mGluR1-mGluR5, or with other GPCRs belonging to adenosine, GABA, and dopamine receptors [21][22][23][24][25][26], forming dimers such as mGluR1-A1 [23], mGluR1-GABA B [24], and mGluR5-D1 [25], or a trimeric receptor complex, such as mGluR5-A2A-D2 [26]. Furthermore, mGluR1/5 can assemble with ligand-gated ion channels, such as NMDARs [27], and interplay with ion channels beyond TRPCs, N-type Ca 2+ channels, and M-type K + channels, such as small conductance calcium-activated potassium (SK) channels [28,29] and acid-sensing ion channels 1a (ASIC1a) [30]. Furthermore, mGluR1/5 cross-talk with receptor tyrosine kinases (RTKs), such as epidermal growth factor receptor (EGFR) and ErbBs, which are receptors for the neurotrophic factors EGF and neuregulins, respectively [31]. mGluR1 and mGluR5 are often co-expressed in the various brain areas and cellular populations, but they can also have different levels or segregate expression in distinct cellular populations [32,33]. Though mGluR1 and mGluR5 are usually described as postsynaptic, they can also be presynaptic [33], and are expressed in astrocytes and microglial cells in several brain areas in physiological and pathological conditions [34], thus implying a more complex scenario of potential mGluR1/5-engaged mechanisms. Furthermore, in addition to plasma membrane-anchored receptors, functional mGluR1/5 pools are located in intracellular compartments, in the endoplasmic reticulum (ER), nuclear, and mitochondrial membranes, wherein they can activate different signaling pathways, thus supporting the cytoplasm-to-nuclear translocation or mGluR1/5 mitochondrial functions [35][36][37].
In conclusion, the interplay of mGluR1 and mGluR5 with other membrane receptors, ion channels, intracellular scaffolding proteins, regulatory proteins, and signaling pathways, as well as differences in their cellular and subcellular localization, define distinct roles in different contexts, shaping the impact of mGluR1/5 on brain functions.
mGluRI-Dependent LTD in Physiology
mGluR1 and mGluR5 manage important brain functions by affecting neuronal excitability, intracellular Ca 2+ dynamics, protein synthesis, dendritic spine maturation, and synaptic transmission. They are prime actors in the induction of synaptic plasticity, mediating either long-term potentiation (LTP) or long-term depression (LTD) of synaptic transmission driven by diverse neurotransmitter systems, thus fine-tuning glutamatergic, GABAergic, and dopaminergic transmission.
mGluRI-dependent LTD of glutamatergic synaptic transmission (mGluRI-LTD)-a widely studied form of synaptic plasticity-underlies essential brain functions, such as learning and memory processes and complex behaviors. In this section, we first describe mGluRI-LTD's mechanisms of induction and regulation, and then report insights on mGluRI-LTD physiological implications.
Beyond iGluRs endocytosis-in which the net number of functional receptors on the surface membrane is reduced-mGluRI can weaken glutamatergic synaptic transmission by inducing a "switch" of AMPARs, with the surface exposition of receptors having a different subunit composition and lower ion permeability with respect to the native ones that are retrieved. Such a mechanism underlies mGluRI-LTD in ventral tegmental area (VTA) dopamine neurons [52] and in nucleus accumbens (NAc) medium spiny neurons (MSNs) [53], wherein mGluR1 stimulation fosters the removal of native GluA1-containing AMPARs (Ca 2+ and Na + permeable) and the insertion of AMPARs having GluA2 subunits, conferring Ca 2+ impermeability and lower receptor conductance.
A different mechanism by which mGluRI depresses glutamatergic transmission is through the regulation of glutamate release. This is commonly achieved by mGluRI-induced eCB synthesis in the postsynaptic compartment and the consequent activation of presynaptic CB1 receptors. The eCB-induced reduction of glutamate release underlies mGluRI-LTD in the dorsal striatum (dST) [54] and partakes in mGluRI-LTD induction in cerebellar [55] and cortical areas [56].
In addition to such conventional mechanisms, mGluRI-induced LTD could also be reliant on further mechanisms related to mGluRI interplay with other GPCR or receptors/ion channels, or receptors locations in subcellular compartments. Of note, besides the surface extracellular membrane, a large pool of functional mGluR5 is associated with intracellular membranes [35][36][37]57,58]. In the striatum, the activation of such intracellular pool, by glutamate entering in neurons via Na + or Cl − -dependent transporters/exchangers, triggers a cascade of molecular events starting with the phosphorylation of ERK1/2 and Elk-1 [36], followed by increased expression of synaptic plasticity genes c-fos, egr-1, and Arc [59]. Thus, intracellular mGluR5 activation could trigger the cascade of events underlying synaptic plasticity. Intracellular mGluR5 have also been reported in hippocampal CA1 pyramidal neurons, associated with intracellular membranes of the ER and nucleus, where they colocalize with the Na + -dependent excitatory amino acid transporter EAAT3 [60]. It has been proposed that intracellular mGluR5 activation in hippocampal CA1 pyramidal neurons contributes to LFS-induced LTD induction [60]. Knowledge of the physiological conditions fostering intracellular mGluR1/5 activation is still limited, but it can be speculated that intracellular mGluR1/5 activation could be enabled in conditions that boost glutamate release to levels exceeding membrane glutamate receptors binding and/or when ion/amino acid transporters (mediating glutamate influx also in the postsynaptic compartment) are highly expressed. Regarding mGluRI-LTD induction, intracellular mGluR1/5 activation could be fostered by stimulation protocols allowing a glutamate spillover (such as prolonged LFS or brief HFS), rather than during chemical LTD, due to DHPG's limited membrane diffusion.
AMPARs endocytosis is triggered by a PKC-dependent GluR2 phosphorylation, which decreases the subunit affinity for the scaffolding protein GRIP, thus weakening AMPAR membrane docking and favoring its endocytosis [45,46]. Differently, the AMPARs endocytosis underlying hippocampal mGluRI-LTD requires the activation of protein kinases other than PKC, such as ERK1/2, PI3K-Akt-mTOR, and MAPKs [47][48][49], which promote the synthesis of proteins leading to AMPARs membrane retrieval. NMDARs endocytosis underlies mGluRI-LTD of NMDARs-mediated transmission at hippocampal CA3-CA1 synapses [50,51]. It occurs through an mGluR1/5-activated mechanism requiring neither tyrosine kinases or phosphatases, nor intracellular Ca 2+ mobilization or protein synthesis, but possibly relying on NMDAR lateralization in the PSD [51].
Figure 2. Mechanisms underlying mGluRI-LTD. Scheme of mGluRI-dependent mechanisms underlying glutamatergic LTD. (a) mGluRI activation induces AMPARs or NMDARs internalization, thus decreasing glutamatergic synaptic currents; (b) mGluRI activation promotes a switch of membrane-exposed AMPARs, by inducing retrieval of Ca 2+ -permeable AMPARs (CP-AMPARs) and exposition of Ca 2+ -impermeable AMPARs (CI-AMPARs), which have a lower ion conductivity; (c) mGluRI stimulation triggers endocannabinoid (eCB) synthesis in the postsynaptic compartment; eCBs, by acting on presynaptic CB1 receptors, then negatively control glutamate release; (d) activation of mGluRI or other GPCRs expressed on astrocytes promotes the release of gliotransmitters, such as glutamate and ATP, which then induce LTD by acting on postsynaptic mGluRI or adenosine receptors; (e) activation of intracellular mGluRI, by glutamate entering the cytoplasm through membrane transporters, engages the classical signaling pathways underlying mGluRI-LTD; (f) mGluRI constitutive activity, independent of ligand-induced activation, stimulates signaling pathways underlying LTD.
While in the above-described mGluRI-LTD mechanisms the ligand-driven mGluRI activation represents the first event triggering downstream plastic modifications, some evidence suggests that mGluRI might induce glutamatergic synaptic depression also by other, unconventional mechanisms. Indeed, a ligand-independent form of mGluRI-dependent synaptic downscaling has been described, in which mGluRI activation is not due to glutamate binding but rather to the induction of the IEG Homer1a in the postsynaptic neuron, which autonomously fosters mGluRI signaling and AMPARs endocytosis [61,62].
Remarkably, while mGluRI-LTD is typically considered an exclusively neuronal process, mGluR1/5 is also expressed on glial cells-astrocytes and microglia-in different brain areas, implying that glial mGluR1/5-dependent mechanisms could contribute to shaping mGluRI-LTD expression. Consistent evidence points to essential roles of astrocytes in the active regulation of different forms of synaptic transmission and plasticity through gliotransmitter release [63,64]. Regarding specific roles of astrocytes in mGluRI-dependent LTD, it has recently been reported that astrocyte mGluR5 signaling, by inducing ATP release, shapes the HFS-induced LTD in striatal MSNs of the direct pathway, consequent to adenosine A1 receptor activation [65]. Additionally, mGluR1-induced ATP release from astrocytes modulates mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses and cortical layer 2/3 synapses [66].
In conclusion, the current picture identifies different cellular and molecular mechanisms underlying mGluRI-LTD (Figure 2). They can be entirely confined to the postsynaptic compartment (as in the case of iGluRs endocytosis, changes in AMPARs subunits, or intracellular mGluRI activation) or be jointly expressed between post- and presynaptic sides (as for eCB-dependent LTD), as well as involve astrocyte interplay, feasibly via astrocyte-released gliotransmitters. Even though earlier evidence mainly supports a brain area- and synapse-specific segregation of the mechanisms underlying the diverse mGluRI-LTD types, future research may unmask additional overlap, especially concerning the less-investigated mechanisms involving astrocytes, and eventually microglia.
mGluRI Partners Affecting mGluRI-LTD
Theoretically, mGluRI-LTD magnitude is governed by all cellular events that control mGluRI expression, membrane docking, and signaling efficiency. Hence, mGluRI interplay with other GPCR, ligand-gated receptors, ion channels, and receptor tyrosine kinases, as well as intracellular protein-protein interaction, or contact with extracellular-matrix adhesion molecules can temper mGluRI-LTD expression. In spite of the increasing list of mGluRI-interacting proteins, the current knowledge of precise functional impacts of each protein-protein interaction in mGluRI-LTD induction is still partial. Nevertheless, the emergent evidence supports that mGluR1 or mGluR5 partners can have a significant functional effect on proper mGluRI-LTD induction.
As mentioned, postsynaptic scaffolding proteins can directly connect mGluR1 and mGluR5 with either signaling effectors or other PSD proteins, allowing the formation of macromolecular protein complexes in the PSD. Homer long isoforms primarily fulfill this function, by allowing mGluRI binding to several effectors and localization in the PSD. Homer1a, by disrupting the mGluRI-long Homer cross-linking, causes constitutive, agonist-independent activity of mGluR1 and mGluR5 [61] and reduces surface AMPARs expression [62]. Further evidence demonstrates contributions of other adaptor or scaffolding proteins to mGluR1/5-dependent LTD. Tamalin is an mGluR1- and mGluR5-interacting protein subserving receptor intracellular trafficking, membrane expression, and signaling [67][68][69]. Recent evidence documents that tamalin is intrinsically involved in the mechanisms underlying hippocampal mGluRI-LTD at CA3-CA1 synapses [70]. Indeed, the intracerebral injection in rats of cell-permeable peptides designed to interfere with the tamalin-mGluRI bond impairs hippocampal mGluRI-LTD expression, mainly opposing mGluR5 function, thus proving that the mGluR5-tamalin interaction is required for this form of hippocampal synaptic plasticity [70].
CaMKIIα interacts with both mGluR1 and mGluR5, regulating their endocytosis and signaling [76]. While traditionally linked to long-term potentiation, it is emerging that CaMKII activity is also required for glutamatergic LTD forms, including hippocampal mGluRI-LTD. Actually, pharmacological and genetic CaMKII inhibition counteracts mGluRI-LTD induction at hippocampal CA3-CA1 synapses [77,78].
mGluRI interplay with ion channels and ligand-gated receptors can also affect proper mGluRI-LTD expression. TRPCs are a family of non-selective cation membrane channels widely distributed in the brain. TRPC1 subtype contributes to hippocampal mGluRI-LTD modulation, as pharmacological and genetic TRPC1 inhibition reduces hippocampal mGluRI-LTD magnitude [82]. Furthermore, TRPC inhibition also affects the extinction of spatial learning [82], proposed to be correlated with mGluRI-LTD impairment.
ASIC1a is a voltage-insensitive cation channel, activated by H + and abundantly expressed in CNS. It allows a Ca 2+ influx in neurons and is involved in different hippocampal synaptic plasticity processes, including mGluRI-LTD [30,83]. ASIC1a pharmacological inhibition affects mGluRI-LTD magnitude in pyramidal neurons of hippocampal CA1 area [30].
Synaptic extracellular-matrix adhesion molecules could also contribute to shaping synaptic plasticity. While a large number of synaptic adhesion molecules have been identified, current knowledge of their involvement in mGluRI-LTD induction is rather limited. It has been reported that Ephrins and their receptors Ephrin-B2, by interacting with mGluR1 and mGluR5, cooperate in the induction of mGluRI-LTD at hippocampal CA3-CA1 synapses [91]. N-cadherines and their partners catenins also partake in the induction phase of hippocampal mGluRI-LTD, via a cofilin-mediated actin reorganization affecting AMPAR trafficking [92]. Moreover, a recent study has unveiled neuroligin 1 (NLG1) contribution to mGluRI-LTD, describing enhanced mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses in NLG1 +/− mice, probably due to decreased GluA2 tyrosine dephosphorylation [93]. Overall, cumulative data highlight that mGluR1-and mGluR5 partners can critically affect mGluRI-LTD induction (Figure 3). The list of such physical and functional mGluRI interactors will feasibly be updated in the future. Along with the recognition of the functional roles of each protein-protein interaction in general mGluRI function, it would be essential to unveil their unappreciated roles in mGluRI-LTD modulation. This would provide a better definition of the complex picture of the molecular mechanism underlying mGluRI-LTD, with potential significant implications in understanding the physiological and pathological mGluRI-dependent synaptic plasticity.
mGluRI-LTD Types and Their Functional Significance
Along with the investigation on the molecular mechanisms underlying different mGluRI-LTD forms, other studies pointed at deciphering the functional significance of mGluRI-LTD in various brain areas, providing partial comprehension of mGluRI-LTD roles in physiological brain functions.
Earlier evidence supports the view that mGluRI-LTD at cerebellar PF-PC synapses underlies motor learning. Genetic mGluR1 deletion causes ataxia and deficits in motor coordination and eye-blink conditioning (a cerebellum-related learning process), in parallel with mGluRI-LTD impairment [96,97,104]. Such synaptic and motor learning deficits are both rescued by conditional mGluR1 re-expression only in Purkinje cells [105]. Other recent evidence strengthens the role of cerebellar mGluRI-LTD in motor learning, such as eye-blink conditioning [106] and rotarod task performance [42].
mGluRI stimulation induces LTD of the NMDARs-mediated transmission at hippocampal DG-CA3 synapses, which is reliant on a mGluR5-mediated exchange of GluN2B to GluN2A subunits [115].
mGluRI-LTD in the dorsal striatum (dST)-mGluRI-LTD of corticostriatal glutamatergic transmission in medium spiny neurons (MSNs) of dST is selectively reliant on mGluR1 activation in adult rodents [41,131], while mGluR5 can also interplay in younger animals [132]. Striatal mGluRI-LTD requires the parallel activation of mGluRI and dopaminergic D2 receptor, by DA released from nigrostriatal terminals, that globally increase intracellular Ca 2+ levels in MSNs, either through intracellular mobilization or via L-type Ca 2+ channels-mediated influx, then fostering eCBs production and the CB1-induced reduction of glutamate release [54,[133][134][135]. mGluRI signaling in striatal astrocytes can also cooperate to corticostriatal LTD forms in MSNs. Recently, it has been reported that astrocyte mGluR5 signaling contributes to shape the HFS-induced glutamatergic LTD in MSNs of a direct pathway, by promoting ATP release from astrocytes and adenosine A1 activation on MSNs [65].
The dST is involved in essential learning and memory processes, including motor learning, cognitive flexibility, and instrumental conditioning underlying goal-directed behaviors and habit formation [136][137][138]. Along with other synaptic plasticity forms, corticostriatal mGluRI-LTD can contribute to such learning and memory processes [136,139]. The link between corticostriatal mGluRI-LTD and motor learning is also sustained by evidence in Parkinson's disease (PD) animal models showing corticostriatal mGluRI-LTD impairment in association with motor deficits, that are ameliorated by promoting mGluRI-LTD expression [41,131,134].
mGluRI-LTD in the nucleus accumbens (NAc)-mGluRI-LTD of AMPAR-mediated transmission is also observed in MSNs in the nucleus accumbens (NAc). The induction mechanisms and loci of induction appear distinct if mGluRI-LTD is triggered by mGluR1 or mGluR5. mGluR1-induced LTD relies on changes in AMPARs subunits composition, and the exchange of native Ca 2+ -permeable AMPARs (CP-AMPARs) with lower conductance Ca 2+ -impermeable AMPARs (CI-AMPARs) [53]. This mGluR1-LTD form is especially observed in potentiated synapses, as following exposure to psychostimulants or drugs of abuse [53,140,141]. mGluR5-induced LTD is mediated by the eCB-induced reduction of glutamate release [142][143][144]. Along with other synaptic plasticity forms in the brain reward system, mGluRI-LTD in NAc MSNs contributes to reinforcement learning for natural rewards, such as drinking, eating, and mating, or for addictive drugs, that result in reward-seeking behaviors [145,146].
mGluRI-LTD in the ventral tegmental area (VTA)-mGluR1 activation in dopamine (DA) neurons of the ventral tegmental area (VTA) depresses glutamatergic synaptic transmission, producing in naive synapses a transient depression or LTD, depending also on the rodent species investigated [52,147]. Reliable mGluRI-LTD in VTA DA neurons of mice is reported in psychostimulant-potentiated synapses, and is mediated by the mGluR1-induced GluA2 synthesis and incorporation into surface-exposed AMPARs, which become Ca 2+ -impermeable and less conductive [148]. mGluRI-LTD in VTA DA neurons cooperates with accumbal mGluRI-LTD to direct goal-oriented behaviors and reinforcement to rewards, with major relevance in the context of drug addiction [44,146].
mGluRI-LTD in lateral habenula (LHb)-mGluRI-LTD of AMPAR-mediated transmission has also been described in LHb neurons, due to the mGluR1-induced eCB-mediated reduction of glutamate release, subsequent to PKC and presynaptic CB1 stimulation [155]. Habenular mGluRI-LTD has been proposed as a synaptic mechanism contributing to tune LHb neurons' firing activity, with implications in LHb-dependent encoding of motivated states, i.e., reward or aversion [155].
mGluRI-LTD in cortex-mGluR1 stimulation mediates the LFS-induced LTD in the anterior cingulate cortex (ACC) [156]. Studies in animals and humans demonstrate that the ACC contributes, together with other cortical areas, to pain perception and chronic pain conditions [157]. It has been proposed that mGluR1-LTD partakes in pain states, as it is impaired following peripheral amputation of the distal tail in mice, and that mGluR1 activation may ameliorate pain or the associated brain dysfunctions [156].
The thalamo-cortical synapses in the auditory cortex (AC) show mGluRI-LTD reliant on selective mGluR5 activation [158]. Furthermore, in the perirhinal cortex, mGluRI cooperate with mGluRII and NMDARs in LTD induction in layer II/III neurons [159]. Such an LTD form, due to the functional interaction between different classes of synaptically activated Glu receptors, could be involved in recognition memory [160]. The different mGluRI-LTD types are summarized in Table 1.
mGluRI-Dependent LTD in Pathology
Unbalanced mGluRI-dependent LTD represents a shared synaptic signature of different neurological and psychiatric disorders. In this section, we will summarize current evidence on mGluRI-LTD deviances in animal models of neurodevelopmental disorders associated with autism spectrum disorders, genetic intellectual disabilities, or schizophrenia, as well as mGluRI-LTD alterations in major depressive disorder, addiction, or neurodegenerative diseases, such as Alzheimer's disease and Parkinson's disease.
Autism Spectrum Disorders, Genetic Intellectual Disabilities, and Schizophrenia
Autism spectrum disorders (ASDs) are a heterogeneous group of neurodevelopmental disorders, the core features of which are persistent deficits in social interaction and communication, restricted patterns of behavior and interests, as well as repetitive/stereotyped activities. ASDs encompass idiopathic or syndromic cases, with the latter caused by monogenic alterations, such as fragile X syndrome (FXS), tuberous sclerosis complex (TSC), Angelman's syndrome (AS), and Rett's syndrome (RTT). Autistic traits and intellectual disabilities are also associated with other diseases with polygenic or more complex etiological mechanisms. In spite of heterogeneous genetic alterations, ASD symptomatology is assumed to hinge on converging mechanisms that regulate synapse formation/elimination, synaptic transmission, and plasticity. Indeed, as described below, mGluRI-LTD alterations represent a commonality in different animal models of ASD and intellectual disability.
Fragile X syndrome (FXS)-the most common inherited form of intellectual disability and autism-is caused by transcriptional silencing of the fragile X mental retardation 1 (FMR1) gene, coding for the fragile X mental retardation protein (FMRP) [160,161]. mGluRI dysregulation, typically associated with abnormal mGluR5 activity, has been consistently reported in FXS animal models, thus inspiring the "mGluR theory" that accounts for synaptic, behavioral, and cognitive alterations related to this disorder, and for the rescue effects caused by mGluR5 normalization [162][163][164][165].
Increased mGluRI-LTD at hippocampal CA3-CA1 synapses is a core synaptic feature of FXS, reliably demonstrated in several FXS rodent models and clearly associated with FXS-related cognitive dysfunctions [166][167][168][169][170]. Decades of intense research have partially unveiled the molecular mechanisms underlying aberrant hippocampal mGluRI-LTD in FXS models [161]. The most striking outcome concerns a pathological conversion of mGluRI-LTD induction mechanisms: in physiological conditions in mature synapses, LTD requires the de novo synthesis of proteins instrumental to LTD induction (LTD proteins), whereas it becomes protein synthesis-independent in FXS rodent models (feasibly because expression levels of such LTD proteins are already high) [161,166,167,171]. Another key pathological feature reported in the hippocampus of FXS models is altered mGluR5 coupling to signaling proteins, due to compromised binding to the long scaffolding Homer proteins (1b/c, 2, and 3) and increased Homer1a binding. Such mGluR5-Homer uncoupling is linked to aberrant constitutive mGluRI activity and to the protein synthesis-independence of the exacerbated mGluRI-LTD at hippocampal CA3-CA1 synapses [172]. Abnormal activation of the PI3K/AKT/mTOR pathway has been associated with different FXS phenotypes, including the exaggerated mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses [173]. Beyond the hippocampus, increased mGluRI-LTD is also overt in the cerebellum at PF-PC synapses of FXS animal models [174,175]. This synaptic defect is assumed to be the pathological substrate of deficits in cerebellar learning processes, such as conditioned eye-blink responses, which are similarly impaired in Fmr1 KO mice and FXS patients [175].
Tuberous sclerosis complex (TSC) is a multisystem genetic disorder caused by mutations in the Tsc1 gene (9q34) or the Tsc2 gene (16p13.3), encoding the tumor suppressor proteins hamartin and tuberin, respectively [176,177]. TSC typical features are benign tumors diffused in several organs (brain, eyes, heart, kidney, lung, liver, and skin), and in most cases neuropsychiatric symptoms, i.e., cognitive disabilities, behavioral alterations, autism, and epilepsy [178]. Hamartin and tuberin form a heterodimer complex-the TSC1/2 complex-that physiologically represses protein translation by directly inhibiting mTOR. Thus, a dysfunctional TSC1/TSC2 complex prompts abnormal protein translation, cell growth, and proliferation [179,180]. Some studies have analyzed the impact of Tsc1 or Tsc2 deletion on hippocampal mGluRI-LTD, giving insights into the physiological role of Tsc1/Tsc2 in synaptic plasticity and into the synaptic defects underlying TSC pathology. Genetic Tsc2 ablation in mice impairs mGluRI-LTD at hippocampal CA3-CA1 synapses [181], and such synaptic dysfunction can be rescued by boosting mGluR5 activity via an mGluR5 positive allosteric modulator (PAM), which also ameliorates cognitive and behavioral deficits [181]. Moreover, acute local Tsc1 gene silencing in mice impairs hippocampal mGluRI-LTD in CA1 area pyramidal neurons [182], and, accordingly, hippocampal mGluRI-LTD induction is prevented by Tsc1 or Tsc2 conditional deletion in pyramidal neurons [183]. Overall, current evidence demonstrates that Tsc1 or Tsc2 mutations similarly abolish mGluRI-LTD at hippocampal CA3-CA1 synapses, without affecting other synaptic plasticity forms, such as HFS-induced LTP or NMDAR-dependent LTD. This further corroborates the notion that a selective mGluRI-LTD dysfunction, rather than a general synaptic deficit, is a commonality of ASD mechanisms.
Angelman's syndrome (AS) is a neurodevelopmental disorder caused by mutations or deletions of the maternally inherited Ube3a gene and characterized by typical behavioral alterations, such as persistent social smiling, giggling, and mouthing behaviors, associated with speech impairments, mental retardation, epilepsy, abnormal EEGs, atypical sleep patterns, and hyperactivity. Such neurological deficits feasibly arise from synaptic dysfunctions, in light of the key role of Ube3a in normal synaptic development and plasticity [184,185]. The investigation of mGluRI-LTD dysregulation in AS models is confined to a study describing increased mGluR5-LTD at hippocampal CA3-CA1 synapses in slices of the maternal Ube3A-deficient AS mouse model [186]. Abnormal mGluRI-LTD magnitude could result from increased mGluR5 signaling efficiency, as mGlu5 coupling to Homer 1b/c proteins is enhanced in the hippocampus of this AS mouse model [186].
Rett's syndrome (RTT) is a neurodevelopmental disorder caused by de novo mutations in the methyl CpG binding protein 2 (Mecp2) gene, implicated in gene transcription regulation, synapse development, and synaptic plasticity [187]. RTT is characterized by severe global regression in infant girls, resulting in lifelong severe mental retardation, language deficits, loss of purposeful hand use, and, often, epilepsy and autism. Some studies have identified developmental dysregulations of hippocampal mGluRI-LTD in an RTT animal model, the Mecp2 KO mouse. Specifically, the mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses appears preserved in adolescent (~P30) Mecp2 KO mice, despite a modification in the underlying molecular mechanisms, with mGluRI-LTD becoming protein synthesis-independent [188,189]. In contrast, hippocampal mGluRI-LTD alteration is overt in young adult (~P60) RTT mice, in relation to reduced mGluR5 expression in the hippocampus at this age [190].
Phelan-McDermid syndrome (PMS) (or 22q13.3 deletion syndrome) is a rare neurodevelopmental disorder characterized by generalized developmental delay, intellectual disability, absent or delayed speech, autism, seizures, neonatal hypotonia, physical dysmorphic features, and recurrent medical comorbidities [191,192]. The most frequent cause is the deletion of the chromosomal region 22q13.3, which includes the Shank3 gene; accordingly, Shank3-related structural and functional alterations in glutamatergic synapses might underlie the neurological and behavioral deficits observed in patients [191,192]. Investigations in animal models harboring Shank3 deletions or mutations have provided insights into the Shank3-dependent regulation of mGluRI functions, including mGluRI-LTD. The main findings indicate that ASD-associated Shank3 mutations impair mGluRI-LTD induction at hippocampal CA3-CA1 synapses. Indeed, hippocampal mGluRI-LTD is prevented in brain slices of mice with Shank3 exon 21 mutations [193] or with the ASD-associated Shank3 R87C and R375C point mutations [194]. In contrast, the mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses is less affected by Shank3 exon 21 deletion [195]. Shank3 deletion also alters mGluR5 levels in the PSD and impairs HFS-induced LTD at corticostriatal synapses in dST MSNs [196]. Of note, mGluR5 stimulation with a PAM rescues the impaired HFS-LTD and ameliorates ASD-relevant behaviors [196].
The 16p.11 microdeletion syndrome-Besides monogenic alterations leading to well-recognized syndromes with a high autism prevalence (especially FXS and TSC), chromosomal copy number variations (CNVs) have been found in 5-10% of ASD patients. Variation at human chromosome 16p11.2, the most common of these CNVs, accounts for 0.5-1% of all ASD cases [197], and is associated with language impairment, intellectual disability, autistic traits, anxiety, and epilepsy [198]. A recent study described subtle alterations in hippocampal mGluRI-LTD in a mouse model of the human chr16p11.2 microdeletion, the 16p11.2 +/− mouse [199]. Specifically, while the mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses remains unchanged, it becomes protein synthesis-independent, supporting that, in spite of similar downscaling, alterations in the molecular mechanisms underlying hippocampal mGluRI-LTD represent the main synaptic defect associated with the 16p.11 microdeletion [199].
Down's syndrome (DS), caused by chromosome 21 trisomy, is a common cause of intellectual disability, with overt learning and memory deficits, also associated with hippocampal dysfunctions. A recent study in a DS animal model, the Ts1Cje mouse, demonstrates enhanced mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses in mouse brain slices [200].
Schizophrenia-Recent evidence suggests that shared synaptic alterations might underlie ASD and schizophrenia, a severe disease with a complex symptomatology characterized by positive symptoms (locomotor hyperactivity, aberrant sensory-motor functions), negative symptoms (social avoidance, apathy, depression), and cognitive deficits. To date, evidence on mGluRI-LTD dysregulation in schizophrenia models is quite limited. A study analyzing mGluRI function in a mouse line harboring a deletion of dysbindin 1 (dys-1), a risk gene for schizophrenia, reported a reduced mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses [121]. In line with the hippocampal mGluRI-LTD contribution to cognitive processes, dys-1 genetic deletion produces deficits in object recognition memory and spatial learning (besides impaired mGluRI-LTD), which are reversed by treatment with an mGluR5 PAM [121].
Overall, current knowledge identifies hippocampal mGluRI-LTD dysregulation as a shared synaptic signature of several neurodevelopmental disorders associated with autism, intellectual disabilities, and behavioral alterations. Strikingly, opposite pathological deviances (increased and decreased mGluRI-LTD magnitude) are visible in diverse disorders, in spite of convergent cognitive deficits and behavioral alterations. Exacerbated hippocampal mGluRI-LTD is overt in FXS, AS, and DS animal models, whereas reduced mGluRI-LTD has been found in TSC, PMS, RTT, and the schizophrenia-related sdy1 models; therefore, bidirectional shifts from an adequate mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses equally converge in brain dysfunctions, ASD-relevant cognitive deficits, and behavioral alterations. In addition to better elucidating the disease-specific mechanisms instrumental to hippocampal mGluRI-LTD dysregulation, detailed investigations are warranted to clarify whether mGluRI-LTD abnormalities extend to other ASD-relevant brain areas, including the striatum and midbrain dopamine nuclei, which are increasingly implicated in ASD symptomatology but have so far been poorly investigated.
Alzheimer's Disease
Alzheimer's disease (AD) is the most common cause of dementia and cognitive impairment in the aging population. Synaptic dysfunctions are early pathological AD features, preceding the formation of amyloid plaques and tau deposits, and neuronal loss [201]. Abnormal synaptic plasticity, mainly reported in the hippocampal area of AD animal models, is believed to underlie AD-related cognitive deficits and behavioral alterations. Evidence in AD models indicates an enhanced mGluRI-LTD magnitude at hippocampal CA3-CA1 synapses. Pathogenic amyloid-β peptides (Aβ) can directly promote mGluR5-LTD of AMPAR-mediated transmission by fostering AMPAR endocytosis [202][203][204][205][206][207]. Such Aβ-induced LTD occludes the subsequent mGluRI-LTD due to pharmacological mGluRI activation [202,203,208], suggesting shared mechanisms of LTD expression. In addition, Aβ can foster hippocampal mGluRI-LTD by favoring synaptic mGluR1/5 activation by glutamate, through blocking extracellular glutamate reuptake [209]. Aβ-mediated mGluRI-LTD facilitation at hippocampal CA3-CA1 synapses also involves ASIC1a channels, as demonstrated in AD mouse models [210]. Aβ also facilitates mGluRI-LTD induction in the dentate gyrus at medial PF-GC synapses [211]. Besides the Aβ impact on mGluRI-LTD, additional evidence from transgenic AD animal models supports mGluRI-LTD dysfunction as a pathological synaptic signature of AD. mGluRI-LTD at hippocampal CA3-CA1 synapses has been investigated in the transgenic Tg2576 AD mouse model, with indications of either an unchanged [211] or an amplified [210] mGluRI-LTD magnitude. Moreover, in a different AD animal model, the APP/PS1 mouse, hippocampal mGluRI-LTD induction is prevented, as mGluRI stimulation only induces a transient synaptic depression of fEPSPs at CA3-CA1 synapses instead of a stable LTD [212]. Globally, the current scenario supports mGluRI-LTD dysregulation in AD pathology, although the available data are not fully exhaustive. While reliable evidence proves the facilitating role of Aβ in hippocampal mGluRI-LTD, findings from transgenic mice appear divergent; thus further research is warranted to better define mGluRI-LTD abnormalities in AD models.
Stress-Induced Depressive Phenotypes
Major depression, one of the most common and debilitating brain illnesses, has recently been linked to mGluRI dysregulation [213]. Depressed patients have reduced mGluR5 density in several brain areas, including cortex and hippocampus [214] (considered a compensatory modification) [213], while rodent models subjected to stress protocols inducing a depressive-like phenotype mainly display hippocampal mGluR1 and mGluR5 upregulation [215,216] or a striatal mGluR5 increase [217]. Hippocampal mGluRI-LTD dysregulation is emerging as a stress-induced synaptic adaptation. Indeed, acute restraint stress gates mGluR1-LTD induction in the hippocampal CA1 area [218], and mGluR5-LTD at hippocampal CA3-CA1 synapses is potentiated in susceptible mice with a depressive phenotype due to chronic social defeat stress (CSDS), a stress protocol based on exposure to an unfamiliar aggressive mouse [219,220].
Addiction
Maladaptive synaptic plasticity in reward-associated brain areas (mainly VTA and NAc) is instrumental to the establishment of addiction-related behaviors, and the mGluRI-LTD of glutamatergic transmission partakes in such pathological synaptic adjustments [146,221]. As previously mentioned, mGluRI-LTD in NAc MSNs has distinct induction mechanisms depending on whether it is driven by mGluR1 or mGluR5. Such subtype-specific mGluRI-LTD forms are differently affected by addictive drug exposure and can diversely contribute to the synaptic adaptations underlying reward-seeking behaviors or the incubation of craving for drugs of abuse [221].
mGluR5-LTD in MSNs-due to eCB-dependent reduction of glutamate release-is abolished in mice acutely exposed to drugs of abuse (cocaine and delta 9-tetrahydrocannabinol, THC) or during withdrawal from repeated drug exposures [143,144,222,223]. Cocaine-induced mGluR5-LTD impairment is due to reduced mGluR5 surface expression in MSNs [144]. mGluR5-LTD in NAc D1-expressing MSNs directly underlies the formation of addictive behaviors, as conditional mGluR5 deletion in D1-expressing MSNs, beyond abolishing LTD, prevents the appearance of reward-seeking behaviors, either for drug (cocaine and ethanol) or natural (saccharin) rewards [224]. mGluR1-LTD, reliant on mGluR1-induced changes of AMPAR subunit composition in MSNs, is instead inducible in NAc MSNs of rodents acutely exposed to addictive drugs or that have undergone incubation of craving [221,225]. Of note, mGluR1-LTD reduces the expression of incubated craving for cocaine [53,140,141] and methamphetamine [226]. mGluR1-LTD can also counteract addictive drug-induced synaptic plasticity in VTA DA neurons. Indeed, cocaine exposure drives a potentiation of excitatory transmission in VTA DA neurons [227] that is downscaled by mGluRI-LTD induction [52,147,148]. Overall, this supports that eliciting mGluR1-LTD, in NAc and VTA, by an mGluR1 PAM could rebalance addictive drug-induced plasticity and reduce incubated cue-induced cocaine-seeking [53,221].
Parkinson's Disease
Parkinson's disease (PD) is an age-related neurodegenerative disease characterized by severe motor deficits (i.e., rigidity, bradykinesia, resting tremor, and postural instability) and debilitating non-motor symptoms, including autonomic dysfunctions, cognitive abnormalities, psychiatric symptoms (depression and anxiety), and sleep disorders [228,229]. Modifications in mGluR1 and mGluR5 levels have been reported in different basal ganglia nuclei, the brain circuitry whose dysregulation, starting from SNpc dopamine neuron degeneration, is mainly responsible for PD symptoms [230,231]. Corticostriatal mGluRI-LTD deficits are overt in PD animal models, in line with the evidence that striatal DA release is instrumental to proper LTD expression in MSNs [41,134,232]. The corticostriatal mGluRI-LTD loss observed in PD models is ameliorated by the DA precursor L-DOPA, or by D2 agonists plus inhibitors of eCB degradation (which boost eCB levels), and such pharmacological treatments in parallel ameliorate motor deficits [134,232]. Besides the dST, some evidence indicates that mGluRI-LTD at hippocampal CA3-CA1 synapses is also affected in PD animal models. Indeed, mGluRI-LTD induction at CA3-CA1 synapses is prevented in mouse hippocampal slices treated with the toxin MPP + [233]. Deficits in hippocampal mGluRI-LTD might contribute, along with other synaptic deficits previously reported in PD animal models [234,235], to the learning and memory deficits associated with non-motor PD symptoms.
Conclusions and Perspectives
Extensive evidence identifies mGluRI-LTD as a key process by which mGluRI can broadly affect neuronal connectivity, directing complex brain functions and behaviors. Emerging data suggest that, along with conventional and well-known molecular mechanisms (iGluRs endocytosis, changes in iGluRs subunits' composition, and eCB-regulated glutamate release), other processes could partake in mGluRI-LTD expression. Future research may uphold the relevance of such novel mechanisms, validating either astrocyte-dependent gliotransmitter release or the activation of intracellular mGluRI located in neuronal compartments as starting events contributing to mGluRI-LTD induction.
The essential roles of mGluRI-interacting proteins (physical and functional partners) in proper mGluRI-LTD expression have recently been reported. The current picture outlines that the mGluRI interplay with scaffolding/adaptor proteins, extracellular adhesion molecules, ion channels, and ErbB receptor tyrosine kinases definitively rules the physiological mGluRI-LTD magnitude.
Knowledge on physiological mGluRI-LTD has been advanced with an increasing list of brain areas expressing this form of synaptic plasticity, and although the exact definition of the functional relevance in each area warrants more investigation, to date it is accepted that mGluRI-LTD is the biological substrate of essential learning and memory processes, thus guiding cognitive processes and learning-based behaviors.
mGluRI-LTD dysregulation stands as a key synaptic signature in several brain disorders, and an optimal range of mGluRI-LTD magnitude is required to sustain proper brain functions. This is particularly overt for mGluRI-LTD at hippocampal CA3-CA1 synapses, whose deviance in either direction (enhancement or reduction) is associated with similar cognitive and behavioral deficits, as mostly observed in animal models of genetic intellectual disabilities and autism. Thus, mGluRI modulation in opposite directions to counterbalance mGluRI-LTD to homeostatic (physiological) levels could be feasibly considered a valuable therapeutical strategy for diverse neurological and psychiatric disorders. Actually, even though mGluR1 and mGluR5 can influence brain processes in multiple ways, mGluRI-LTD regulation by PAM or NAM (based on the pathological mGluRI-LTD shift) can restore synaptic homeostasis, rescuing cognitive and behavioral alterations. This is demonstrated by robust evidence, mainly from animal models of FXS and other neurodevelopmental disorders, that clearly proves that mGluRI modulators (so far targeting mGluR5) can correct hippocampal mGluRI-LTD deviations, simultaneously improving some cognitive and behavioral deficits [121,[162][163][164]199].
Preclinical data have encouraged evaluations of mGluRI modulators in various human clinical trials for several brain diseases. While a general discussion on the clinical trials investigating mGluRI modulators in brain diseases is beyond the scope of this review, here it appears relevant to emphasize that outcomes from clinical trials in FXS, that is at the forefront of the clinical investigation of mGluRI modulators (namely mGluR5 NAMs), indicate that mGluR5 NAMs fail to ameliorate the most severe symptoms in humans [236], despite robust preclinical data in animal models. Nonetheless, such failures of FXS clinical trials, instead of questioning the factual pathological mGluRI relevance, should rather encourage a deeper investigation of molecular mechanisms underlying mGluRI dysregulation, to possibly disclose additional collateral processes that might limit the therapeutic efficacy of mere mGluR5 inhibition in humans, as well as reveal functional adaptations that might decrease the therapeutic efficacy during a chronic drug administration regimen.
In conclusion, considering that mGluRI-LTD dysregulation is among the most important synaptic features in several brain illnesses, advances in defining the exact molecular mechanisms of induction and endogenous regulation that go awry in distinct diseases could help to establish novel and more effective therapeutic strategies. To address this point, research efforts should also be focused on novel mechanisms, including glia interplay, the compartmentalized mGluRI subcellular location, and the complex network of mGluRI interactors, and should also look for potential brain area/synapse-specific anomalies. Greater awareness of mGluRI-LTD physiopathology would have great implications for a general understanding of brain functions in health and disease. | 2023-06-29T05:08:01.935Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "af37ce75f0c8b22c77adcdcd620977f6ae0108bf",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "af37ce75f0c8b22c77adcdcd620977f6ae0108bf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213740740 | pes2o/s2orc | v3-fos-license | The effects of the crosslinking position and degree of conjugation in perylene tetraanhydride bisimide microporous polymers on fluorescence sensing performance
In this study, two fluorescent conjugated microporous polymers based on perylene tetraanhydride bisimide (DP4A0 and DP4A2) were prepared via Sonogashira–Hagihara cross-coupling polymerization for the efficient detection of o-nitrophenol (o-NP). They were well characterized via FT-IR, solid state 13C NMR, elemental analysis, and other material characterization techniques. The experiments proved that both CMPs possess high thermal and chemical stability and a porous nature with Brunauer–Emmett–Teller (BET) specific surface areas of 41.3 and 402.1 m² g⁻¹. Importantly, owing to signal amplification by the conjugated skeleton, DP4A0 and DP4A2 exhibit extremely high sensitivity to o-NP with Ksv values of 1.83 × 10⁴ and 1.69 × 10⁴ L mol⁻¹ and limits of detection of 5.73 × 10⁻⁹ and 7.36 × 10⁻⁹ mol L⁻¹, respectively. The sensing performance of DP4A0 and DP4A2 was dependent on the position of the crosslinking points and the crosslinking density. Finally, electron transfer with super-amplified quenching was considered to be the sensing mechanism, and hydrogen bond interactions were also present.
Introduction
The determination of nitroaromatic compounds (NACs) has increased rapidly over the years because of its significance in the environment, national defense, and public safety. [1][2][3][4][5][6] There are some techniques and devices used for detecting NACs, such as metal detectors, trained canines, gas chromatography coupled with mass spectrometry (GC-MS), 7 electrochemical methods, 8,9 ion mobility spectroscopy (IMS), 10 high performance liquid chromatography, 11 quartz crystal microbalance (QCM), colorimetry, and surface enhanced Raman spectroscopy (SERS). 6,12 Fluorescence detection methods have emerged as one of the most promising approaches because of the quick response time, moderate price, high sensitivity and selectivity, simple operation, and portability for on-site testing. 4,6 Although traditional fluorescent substances (such as organic fluorescent dyes) have made some progress, they have low limits of detection (LODs) and may result in more environmental pollution due to their toxicity, poor biodegradability, presence of redundant heavy metals as well as inferior chemical stability and photobleaching. 6 Because of the "aggregation-induced quenching (ACQ)" effects, most organic dyes deplete their excitation energy in their aggregated state. By covalently incorporating dye-based monomers into conjugated microporous polymers (CMPs), fluorophores can be separated spatially through porosity, thereby remarkably increasing the fluorescence intensity of the monomers, which are initially affected by ACQ. 13,14 CMPs are worth consideration for a variety of reasons: first, compared with similar building blocks, CMPs become prominent electron donors due to the expansion of π-conjugation. Furthermore, by virtue of the π* delocalization, the enhanced donating ability of CMPs in the excited state provides a platform for the facile migration of excitation. Therefore, the interaction between CMPs and NACs will be augmented and the fluorescence will be quenched, a phenomenon found by Swager et al. and known as the 'molecular wire effect'. 15 The extended π-conjugation of conjugated polymers was shown to be an up-and-coming feature of amplified signal transduction. 3,16 Second, because they are constructed by carbon-carbon or carbon-nitrogen bonds, CMPs are thermally and chemically stable. 3,17 Third, the structure and porosity of the CMPs may be adjusted by judiciously selecting building blocks, since high porosity can enhance the rapid diffusion of NACs in porous frameworks and impactful host-guest interactions, leading to improved fluorescence quenching efficiency and sensitivity. 3 Since the discovery of the first CMP in 2007, 18 there has been much interest in the synthesis and possible applications of these materials. 17,19,20 Numerous CMPs have been successfully synthesized and applied as chemosensors, particularly fluorescent CMPs, which have been employed for the detection of NACs. 21,22 NACs have strong electron acceptance ability. Therefore, adding NACs will affect the excited states of CMPs, which leads to fluorescence quenching. 23 Recently, we developed a fluorescent CMP based on perylene tetraanhydride bisimide (DP2A2) via a Sonogashira-Hagihara cross-coupling polymerization for sensing o-nitrophenol (o-NP). 14 In a continuation of our previous study, this paper reports the synthesis of two CMPs (DP4A0 and DP4A2) with high surface areas and microporosity (Scheme 1). The effects of different cross-linking points and densities on the fluorescence sensing properties of the CMPs were studied.
According to the serial numbers in Scheme 2, the crosslinking points of DP4A0 should be located at positions 1, 6, 7, and 12. The cross-linking points of DP4A2 should be located at positions 1, 6, 7, 12, 13, and 14, and the cross-linking points of DP2A2 should be located at positions 1, 7, 13, and 14.
The thermal gravimetric analysis (TGA) measurements were performed using a Mettler ATA409PC thermo-gravimetric analysis instrument from room temperature to 800 °C at a heating rate of 10 °C min⁻¹ under a N2 atmosphere. Powder X-ray diffraction (PXRD) was examined on a Rigaku model XRD600 diffractometer equipped with Ni-filtered Cu Kα radiation (40 kV, 100 mA) by depositing powder on glass substrates at room temperature. The scanning speed was 5° min⁻¹ and the scanning range was from 2θ = 5° to 60° with 0.02° increments. Scanning electron microscopy (SEM) was performed on JEOL-3400LV and S-3400N microscopes (Japan) with an accelerating voltage of 5.0 kV.
The specific surface area and pore size distributions were measured using a Bel Japan Inc. model BELSORP-mini II sorption analyzer with N2 adsorption and desorption at 77 K, and the pore parameters (BET specific surface area, pore size, and pore volume) could be estimated from the adsorption-desorption isotherms. Samples were degassed at 150 °C in a vacuum for 10 h before each measurement. Fluorescence spectra were recorded at room temperature using Hitachi F-4500 spectrophotometers. A 0.1 mol L⁻¹ THF-NACs stock solution was prepared. A 1.0 mg mL⁻¹ dispersion colloid was obtained by adding DP4A0 or DP4A2 (25 mg) to THF (25 mL), followed by ultrasonic dispersion. As soon as the NAC solution was added, the fluorescence intensity of the dispersion was determined. The LODs were obtained using the equation LOD = 3s/r, where s is the standard deviation of the blank measurement and r is the slope of the relative fluorescence intensity (I0/I) over the sample concentration.
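As a minimal numerical sketch of this LOD estimate, the snippet below fits the calibration slope r and applies LOD = 3s/r; the blank replicates and calibration points are illustrative values assumed for the example, not data from this work.

import numpy as np

# Illustrative blank replicates: relative intensity I0/I measured with no analyte added.
blank = np.array([1.000, 0.999, 1.002, 1.001, 0.998, 1.000])
s = blank.std(ddof=1)                      # standard deviation of the blank measurement

# Illustrative calibration: I0/I versus o-NP concentration (mol L^-1).
conc = np.array([0.0, 1e-6, 2e-6, 4e-6, 8e-6])
ratio = np.array([1.000, 1.018, 1.037, 1.072, 1.146])
r = np.polyfit(conc, ratio, 1)[0]          # slope of I0/I over the sample concentration

lod = 3 * s / r                            # LOD = 3s/r
print(f"slope r = {r:.3e} L mol^-1, LOD = {lod:.3e} mol L^-1")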
Results and discussion
DP4A0 and DP4A2 as well as their corresponding building blocks were prepared following approaches found in previous reports. 14,24,25 The structures of DP4A0 and DP4A2 were confirmed via FT-IR spectroscopy, ss 13C NMR spectroscopy, elemental analysis (EA), and UV-Vis absorption spectroscopy. The FT-IR spectra of both DP4A0 and DP4A2 (Fig. 1) showed peaks at 2189 and 2200 cm⁻¹, which are characteristic of bis-substituted acetylenes. 14,26 The polyimide rings were confirmed by the characteristic peaks of the carbonyl groups in six-membered polyimide rings (~1670 cm⁻¹ and ~1710 cm⁻¹). 14,[27][28][29][30] The bands at 1331 and 1335 cm⁻¹ are attributed to C-N stretching vibration peaks. 14,27,30 The bands at 1591, 1485/1501, and 1405 cm⁻¹ are the C=C stretching peaks of phenyl rings. The bands at 744 and 755 cm⁻¹ are the absorption peaks of C-H plane deformations of singly substituted phenyl rings. 14 The chemical structure of DP4A0 and DP4A2 was further analyzed via ss 13C NMR spectrometry (Fig. 2). The peaks at about 162 ppm pertained to the C=O bond in the six-membered polyimide rings. 14,[27][28][29] The signals near 132.37/130.81 and 128.18, 122.23/122.03 ppm pertained to other aromatic carbon atoms (benzene, perylene) from the two building blocks. 14,27 The signals at 93.40/90.97 (Ar-C≡C-Ar), 82.83/83.05, and 77.87/77.64 ppm (Ar-C≡C-H) correspond to the alkynyl units in the CMPs. 14,25,31 The elemental analysis of the CMPs indicated that there is a deviation from the theoretical values, which is common for CMPs, probably because of incomplete combustion or adsorption of water and gases. 32 Fig. S1a† shows the solid UV-Vis spectra of DP4A0 and the corresponding building block PBr4ABr0. The DP4A0 powder shows an extra peak at 288 nm, which is not found in PBr4ABr0. The relative intensity of this peak decreases in the polymer. Compared to PBr4ABr0 with a peak at 272 nm, there is an obvious red-shift. 19 There were two strong peaks for both DP4A2 and PBr4ABr2, in which a red-shift was observed for the main peak (Fig. S1b†). 29 The results clearly indicated that products with new optical performance were formed. 28 To ascertain the porosity of DP4A0 and DP4A2, nitrogen adsorption-desorption experiments were performed at 77 K (Fig. 3). DP4A0 assumes a Type I isotherm in the range of p/p0 = 0.05-0.15 according to IUPAC classifications and then takes on a very flat adsorption plateau. 30 The gas absorption decreases sharply at lower relative pressures. 27 In sharp contrast, DP4A2, although similar to DP2A2, displays a combination of type I and type II isotherms. With the increase in the nitrogen sorption at high relative pressure, there is a palpable hysteresis phenomenon, which might be due to the existence of mesoporous or interparticulate voids in the sample. 14 The isotherm of DP4A2 exhibits sharp N2 adsorption under low relative pressure (p/p0 < 0.01), which means that there are substantial micropores and ultramicropores. 14,24,29,30 Using the BET model, these isotherms give apparent surface areas of 41.3 m² g⁻¹ and 402.1 m² g⁻¹ for DP4A0 and DP4A2, respectively (Table 1). The BET surface area of DP4A2, similar to that of DP2A2 (378 m² g⁻¹), 14 is higher than that of DP4A0 although the chemical composition of DP2A2 and DP4A0 is the same. A reasonable explanation for this is the difference in the point position and degree of crosslinking. 27,29,30
Fig. 1 FT-IR spectra of DP4A0 and DP4A2. Fig. 2 The ss 13C NMR spectra of DP4A0 and DP4A2. Fig. 3 (a) Nitrogen adsorption-desorption isotherms measured at 77 K for DP4A0 and DP4A2; (b) the BJH pore size distribution profiles of DP4A0 and DP4A2 calculated using the NLDFT method.
The pore volumes were obtained using DFT methods from the nitrogen sorption isotherms. DP4A2 shows a larger total pore volume when compared to DP4A0 and is similar to DP2A2 (Table 1). The ratio of Vmic/Vtot can be used as a scale for the degree of microporosity in the CMPs. DP4A2 shows a high degree of microporosity, of which 63.27% of the total pore volume consists of micropores. The low volume ratio of DP4A0 indicates clearly that there is a certain amount of large pores or interparticulate porosity. 29,30 The degree of crosslinking in DP4A2 is 6, which is larger than that of DP4A0 and DP2A2 (4); therefore, DP4A2 has a higher specific surface area and microporosity compared to DP4A0 and DP2A2. Although the crosslinking degree of both DP4A0 and DP2A2 is 4, their crosslinking point positions are different. The positions of the crosslinking points in DP4A0 are located at the 1, 6, 7, and 12 positions, while those of DP2A2 are located at the 1, 7, 13, and 14 positions. Because the positions of the crosslinking points in DP4A0 are more crowded than those of DP2A2, the specific surface area and microporosity of DP4A0 are much lower than those of DP2A2. The pore volume distributions as a function of the pore width for DP4A0 and DP4A2 were estimated via the nonlocal DFT (NLDFT) method (Fig. 3b). It can be seen that DP4A0 possesses a narrow pore size distribution in the minor region with a pore size centered at 2.15 nm. DP4A0 has a very low total pore volume and there are both micropores and pores in the mesopore region. 27,29 The N2 sorption isotherm of DP4A2 indicated that DP4A2 includes a certain amount of pores with a relatively broad diameter distribution from 1 to 4 nm and a large proportion of the pores are centered at 1.22 nm. Although micropores dominate in DP4A2, there is a certain proportion of mesopores in the DP4A2 network. 14,27 DP4A0 and DP4A2 are both black powders. They are not soluble in water and common organic solvents and are also chemically stable in dilute aqueous solutions of acids or bases due to their highly crosslinked structures. There are small weight losses at 275 and 362 °C due to escaping solvent that was caught inside the polymer networks. They have excellent thermal stability with decomposition temperatures (Tdec) of 453 and 547 °C determined at 5% mass loss and char yields of 60.01% and 90.26% at 800 °C under a N2 atmosphere (Fig. 4). The high thermal stability is due to the covalent bonds, the high crosslinking density of the networks, and the good stability of the perylene imide backbones. 14,27-30 PXRD measurements confirmed the amorphous feature of the CMPs (Fig. S2†), 27,30 which is expected for typical non-dynamic covalently crosslinked CMP materials. 33 Morphologies monitored by SEM demonstrate that DP4A0 displays a Tremella fuciformis-like morphology. 30 DP4A2 exhibits a floppy surface and an interconnected spherical particle-like morphology (Fig. 5). [27][28][29] We investigated the emission spectra of DP4A0 and DP4A2 in different polar solvents (Fig. 6), including ACN, DMF, acetone, THF, chloroform, DOX, and EtOH. 3,4,34 It was found that DP4A0 and DP4A2 disperse in DMF, DOX, chloroform, and THF and exhibited purple or yellow emission when excited with 365 nm UV light. 6,35,36 Thanks to the completely π-conjugated skeleton from the connection of the phenyl rings, DP4A0 and DP4A2 exhibited the strongest fluorescence in THF (λex = 370 nm) and DOX (λex = 365 nm) suspensions, respectively. 22,35,36
Their ability to use fluorescence sensing to detect NACs (such as o-NP) was discussed. As shown in Fig. S3,† a rapid and clearly visible fluorescence turn-off response was observed upon increasing the concentration of o-NP in the solution, which signified that DP4A0 and DP4A2 enable effective "real-time" detection of o-NP. 4,37 We investigated DP4A0 and DP4A2 as fluorescent probes for sensing NACs by successively adding NACs, such as NB, PA, DNT, p-DNB, o-NP, p-NT, m-DNB, and non-nitroaromatic analytes including PhOH. Fig. 7 and S4-S6† show the fluorescence spectra and I0/I plots of DP4A0 and DP4A2 before and after adding NACs, and different degrees of quenching were observed after the addition of the NACs. The fluorescence of DP4A0 and DP4A2 was almost completely quenched after the addition of o-NP, while the fluorescence was not obviously quenched after the addition of other NACs (PA, NB, DNT, p-NT, p-DNB, m-DNB, and PhOH), indicating that DP4A0 and DP4A2 exhibit extremely high selectivity for o-NP. 37,38 Furthermore, the fluorescence intensity of DP4A0 and DP4A2 was overwhelmingly affected by the molar concentration of o-NP.
Table 1 Pore and surface properties of DP4A0, DP4A2, and DP2A2. a Specific surface area calculated from the adsorption branch of the nitrogen isotherm using the BET method with a relative pressure (p/p0) range from 0.01 to 0.10. b The total pore volume was obtained from the BET data up to p/p0 = 0.99 and is defined as the sum of the micropore volume and the volumes of larger pores. c The micropore volume was calculated from the nitrogen adsorption isotherm using the t-plot method.
Fig. 4 TGA thermograms of DP4A0 (black) and DP4A2 (red), measured in a nitrogen atmosphere at a heating rate of 10 °C min⁻¹.
To qualitatively comprehend the quenching influence of o-NP on DP4A0 and DP4A2, the quenching coefficients (Ksv) were calculated using the Stern-Volmer (S-V) equation: I0/I = 1 + Ksv[Q] (I0 and I are the fluorescence intensities of DP4A0 or DP4A2 before and after the addition of the analyte and [Q] is the concentration of the analyte). As shown in Fig. 7c, good linear S-V relationships were observed for DP4A0 and DP4A2, and the Ksv values were on the order of 10⁴ L mol⁻¹, which indicated that DP4A0 and DP4A2 have high sensitivity to o-NP. 5,23,34,38,39 In addition, the LODs of DP4A0 and DP4A2 for o-NP are 5.73 × 10⁻⁹ and 7.36 × 10⁻⁹ mol L⁻¹, respectively, also indicating a high sensitivity to o-NP. 4-6 Fig. 8 shows that as more o-NP was added, the tighter the aggregates of both CMPs became, meaning that there are intense interactions between both CMPs and o-NP. 14,40,41 Again, it can be seen from Tables 2 and S1† that the sensitivity to o-NP is ordered DP2A2 > DP4A0 > DP4A2. There are similar trends for the sensitivity to PA and DNT. 14 We speculated that the former is caused by the different positions of the crosslinking points, and the latter is caused by the excessive crosslinking density, which limits the conjugation effects of DP4A2. Since the crosslinking density of DP4A2 (6) is greater than that of DP2A2 and DP4A0 (4), the degree of conjugation in the DP4A2 network is lower than that of DP2A2 and DP4A0, which is not conducive to the transfer of electrons. Therefore, although the specific surface area, pore volume, and micropore volume of DP4A2 are larger than those of DP2A2 and DP4A0, the sensitivity of DP4A2 to o-NP is lower than that of DP2A2 and DP4A0. The crosslinking density of DP4A0 and DP2A2 is the same but the positions of the crosslinking points are different. Because the positions of the crosslinking points in DP4A0 are closer together than those of DP2A2, the conjugation of DP4A0 is lower than that of DP2A2. Moreover, the specific surface area and micropore properties of DP4A0 are much lower than those of DP2A2 and the chance of contact between the active sites of DP4A0 and o-NP is less than that of DP2A2; thus the sensitivity of DP4A0 to o-NP is lower than that of DP2A2.
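A minimal sketch of the Stern-Volmer analysis described above is given below; the titration data are illustrative values assumed for the example (they are not the measured spectra), and Ksv is obtained as the slope of I0/I − 1 versus [Q].

import numpy as np

# Illustrative quenching titration: o-NP concentration [Q] (mol L^-1) and measured intensity I.
q = np.array([0.0, 5e-6, 1e-5, 2e-5, 4e-5, 8e-5])
intensity = np.array([1000.0, 918.0, 848.0, 735.0, 578.0, 405.0])

ratio = intensity[0] / intensity           # I0/I for each quencher concentration

# Linear Stern-Volmer fit: I0/I = 1 + Ksv*[Q], so Ksv is the slope of (I0/I - 1) against [Q].
ksv = np.polyfit(q, ratio - 1.0, 1)[0]
print(f"Ksv = {ksv:.3e} L mol^-1")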
To investigate the cross-effects of other NACs, we also conducted competitive experiments to further explore the selectivity of DP4A0 and DP4A2 to o-NP. The I0/I values of DP4A0 and DP4A2 in different NAC mixtures with and without o-NP are recorded in Fig. 9. After adding other NACs (e.g., NB, m-DNB, p-DNB, p-NT, DNT, and phenol), the I0/I values for DP4A0 or DP4A2 did not change significantly, with the exception of the PA experiment. When PA was added to a solution of DP4A0 or DP4A2 containing o-NP, the I0/I value increased significantly. These are signs that DP4A0 and DP4A2 have unusually high selectivity to o-NP. 4,5 At higher concentrations of NACs the shapes of the curves are due to a super-amplified quenching effect. 42 As shown in Fig. S5,† the S-V plots exhibited hyperbolic curves, which suggests that both static and dynamic quenching phenomena occur at the same time in the detection process. 4,39,43 Fluorescence resonance energy transfer (FRET) and photoinduced electron transfer (PET) mechanisms were explored to understand the sensing performance of DP4A0 and DP4A2. The absorption spectra of the NACs have almost no overlap with the emission spectra of DP4A0 (besides p-NT and m-DNB) and DP4A2 (except for o-NP and PA), indicating that the fluorescence quenching mainly resulted from a PET process between electron-rich DP4A0 or DP4A2 and the electron-deficient NACs (Fig. S7†). 34,[44][45][46][47] In order to assess the feasibility of PET from DP4A0 and DP4A2 to the NACs, the lowest unoccupied molecular orbital (LUMO) energy levels and the highest occupied molecular orbital (HOMO) energy levels of DP4A0 and DP4A2 were calculated. 34 As shown in Fig. S8 and Table S2,† the LUMO levels of DP4A0 and DP4A2 are higher than those of the NACs, which allows for electron transfer from DP4A0 and DP4A2 to the NACs. 4 For example, the LUMO energy levels of PA and p-DNB are lower than those of DP4A0 and DP4A2. 34,48 However, the LUMO levels are not the only factor affecting the PET process; it is also necessary to take into account the HOMO energy level of the NACs if it is lower than that of the CMPs. Hence, although the LUMO energies of DP4A0 and DP4A2 are lower than those of NB, DNT, p-NT, m-DNB, o-NP, and PhOH, a PET process can occur as well. 34,49 Generally speaking, nitro groups play an important part in fluorescence quenching because their contribution to the LUMO energies is achieved by utilizing their electron-withdrawing ability. Therefore, with increasing numbers of nitro groups, lower LUMO energy levels are obtained, thus improving the electron transfer efficiency. 6,39 The interaction of the analytes with DP4A0 and DP4A2, as well as efficient PET, are considered the key parameters in the quenching mechanism.
However, the fluorescence quenching phenomena cannot exclude the possibility of a FRET mechanism. O–H···π and O–H···N interactions may also impact the sensitivity of DP4A0 and DP4A2 for detecting o-NP. 39 The NACs containing hydroxyl groups (PA, NP, and PhOH) can effectively cause fluorescence quenching. This may be due to weak interactions such as hydrogen bonding of hydroxyl groups with nitrogen, oxygen, and π-electrons in the framework. 39,50,51 The results indicated that there are three important factors for the detection of NACs, i.e., nitro groups, hydrogen bonding, and N, O···π interactions (Fig. S9†). 6
Table 2 The equations of I0/I for DP4A0, DP4A2, and DP2A2 with the concentration of o-NP, suspended in THF and DOX with excitation at 370 and 365 nm.
Fig. 9 The selectivity and competitiveness of DP4A0 and DP4A2 (1.0 mg mL⁻¹) for sensing NACs at the same concentration of 2.5 × 10⁻⁴ mol L⁻¹ of o-NP in THF and DOX (λex = 370 and 365 nm).
High quenching constants and quenching rates, together with extremely low LOD values, make DP4A0 and DP4A2 highly sensitive chemical sensors for detecting NACs, particularly o-NP. 39 In order to study the fluorescence stability of DP4A0 and DP4A2, annealing experiments were carried out. Solid powders of DP4A0 and DP4A2 were heated to 50 °C for half an hour in the atmosphere. Then, the temperature was increased to 100 °C, where it was maintained for another half an hour. The temperature was then increased to 150 °C and 200 °C. After cooling, the samples were dispersed in DOX or THF at a polymer concentration of 1.0 mg mL⁻¹. Fig. S10† shows the fluorescence spectra of DP4A0 and DP4A2 after annealing in the atmosphere. As can be seen, they were still quite stable after heating to 100 °C for half an hour, and the fluorescence spectra were nearly the same as before heating. As the temperature increased further, the fluorescence peak intensities decreased but no red-shift was observed. Below 100 °C, the fluorescence emission is stable. This result is due to the chemical structures of DP4A0 and DP4A2 and their stable perylene units. 14,52,53
Conclusions
In summary, we have succeeded in developing solvent-dispersible, fluorescent conjugated microporous polymers by taking advantage of Sonogashira-Hagihara cross-coupling reactions. We employed these systems as chemosensors for sensing nitroaromatic compounds. The two CMPs exhibit high BET specific surface areas and total pore volumes. They emitted strong purple and yellow fluorescence under UV light. It was found that a DP4A0 dispersion in THF and a DP4A2 dispersion in DOX can be effectively utilized to detect o-NP by fluorescence quenching in real time. Specifically, DP4A0 and DP4A2 exhibited high selectivity to o-NP over other NACs. The relationship between the fluorescence intensity and the o-NP concentration can be quantified. The detection limits of o-NP using DP4A0 and DP4A2 as probes were estimated to be 5.73 × 10⁻⁹ and 7.36 × 10⁻⁹ mol L⁻¹. The possible fluorescence sensing mechanism of both CMPs was deemed to be electron transfer together with hydrogen bonding and N, O···π interactions. The sensing performance of DP4A0 and DP4A2 was dependent on the position of the crosslinking points and the crosslinking density.
Conflicts of interest
There are no conflicts to declare. | 2020-02-06T09:11:56.579Z | 2020-01-29T00:00:00.000 | {
"year": 2020,
"sha1": "a924f0ce236953e3aa310518fa9ce243955bb3f6",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/c9ra10384h",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c883f1ff9992d7af6f38ed18df9fc127bdd7f763",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
197469401 | pes2o/s2orc | v3-fos-license | Regularization, Recognition and Complexity Estimation Methods of Automata Models of Discrete Dynamical Systems in Control Problem
: In this paper, the laws of functioning of discrete deterministic dynamical systems and specific processes of functioning of such systems are considered. Automata models are used as the basic mathematical model of the laws of functioning, with a fundamentally new extension of these models to models with countably infinite sets of states. This extension is made possible by the mathematical apparatus of geometrical images of automaton mappings proposed and developed by V. A. Tverdohlebov. Results are presented on the development of regularization methods for partially specified automata models of systems, based on the use of geometrical images of automaton mappings and numerical interpolation methods. The paper also considers the problem of estimating the complexity of the laws of functioning as a whole and of specific processes of functioning of dynamical systems. For this purpose, recurrent models and methods are used, together with a specific mathematical apparatus of discrete riv-functions. A classification of automata models by complexity estimates is carried out.
Introduction
In the theory of experiments with automata, the starting point is determining the contents of a "black box". The initial data are the information on the possible variants of the contents of the black box. Under the general scheme of carrying out an experiment, input influences are applied to the black box (to its contents), the reactions to these influences are observed, and logical conclusions are drawn from these observations. In control problems, an automaton and a family of automata are given, and it is required to determine whether the contents of the black box are this selected automaton or an automaton from the given family. In the case of diagnosing, it is assumed that the contents of the black box are an element of the given family of automata, and it is required to determine which automaton it is. E. Moore [1], A. Gill [2], T. Hibbard [3] and other authors solved the following problems: criteria for the existence of a solution of the problem of recognizing the contents of a black box were found; the basic method for constructing an experiment for recognizing an automaton within a given family of automata was developed, including the construction of a simple unconditional experiment of minimum length (E. Moore, A. Gill). A further essentially important expansion of the approaches and methods of technical diagnosing was the representation of the laws of functioning of automata by geometrical images, i.e., numerical mathematical structures in the form of discrete numerical graphics [4]. If the automata presented for the solution of control and diagnosing problems are combined, through their geometrical images, with analytically defined curves, the search for and construction of control and diagnostic experiments can be carried out on the basis of solving systems of equations for the analytically defined geometrical curves.
In the basic works developing automata theory, the problem of regularization of automata on the basis of a uniform approach is not considered. There are problems whose solution methods assume completely specified laws of functioning of automata, whereas in the initial data these laws are given only partially. Fundamental mathematical results on the regularization of partially specified graphics are provided by the classical interpolation methods of Newton, Lagrange, Gauss, Bessel, spline interpolation, etc. The inapplicability of these methods to partially specified automata is connected with the symbolic form of the presentation of automata by tables, matrices, graphs, systems of logic equations, etc. The presentation of the laws of functioning of automata by numerical structures, proposed and developed by V. A. Tverdokhlebov (see, for example, [4,5]), makes it possible to use classical interpolation methods in automata theory. In this paper, interpolation methods are developed for partially specified laws of functioning of automata presented by geometrical images.
One of the fundamental components of mathematical models of large-scale systems is the set of algorithms realized by the system according to its target mission. For an algorithmically solvable class of problems there is an infinite set of algorithms (solving this class of problems), which can be ordered by complexity. The realization of algorithms is generally connected with the complexity of the algorithm in the system, which determines the major indicators: performance, memory size, reliability, expenditure of energy, etc. The number of variants of the concept of complexity is quite large and continues to increase in the works of many researchers: for example, classification of algorithms by their membership in the NP and P classes (for a detailed review see, for example, [6] and one of the contemporary papers [7], in which Radoslaw Hofman argues that P is not equal to NP), Kolmogorov complexity [8], complexity from below, from above, complexity on the average, bit complexity (one of the basic works in this area is [9]), multiplicative complexity, algebraic complexity, a very large number of works on asymptotic estimations of complexity (see, for example, [10,11]), approximation complexity of the analysis of graph structures [12,13] and of identifying codes [14], etc.
In this work, using the apparatus of geometrical images of automata [4], an estimation of the complexity of the laws of functioning of discrete deterministic dynamical systems (automata) is proposed and investigated on the basis of the geometrical representation of the laws and the use of discrete riv-functions. An analysis of more than 10 million discrete riv-functions is carried out. The paper considers riv-functions containing more than 20 billion discrete graphs, on which laws of functioning of automata are synthesized. The specificity of all considered riv-functions is determined. The minimum and maximum numbers of states of the minimal automaton from the set of automata defined by a riv-function are considered as complexity indicators.
Geometrical Images of Laws of Functioning of Automatons
It is known that the apparatus of continuous numerical mathematics effectively uses infinite sets. In this connection, V. A. Tverdohlebov developed a new approach to the construction of models of complex systems and methods for the analysis of such models, which are stated in the works [4,5]. The developed principle is the placing of discrete structures on continuous, analytically defined geometrical curves. For this purpose, instead of the next-state function and the output function of the automaton, the automaton mapping is considered, i.e., a symbolic mathematical structure consisting of pairs of the kind (input sequence, output sequence). The geometrical image γs of the laws of functioning (see the works [4,5]) of an initial finite deterministic automaton As = (S, X, Y, δ, λ, s), with set of states S, input signals X, output signals Y, next-state function δ: S×X→S and output function λ: S×X→Y, is defined on the basis of the introduction of a linear order ω in the automaton mapping. The automaton mapping ρs (a set of pairs) is ordered by the linear order ω, which is defined on the basis of an order ω1 on X* and set by the following rules: Rule 1. On the set X some linear order ω1 (which we will denote ≺1) is introduced. Rule 2. The order ω1 on X is extended to a linear order on the set X*; this induces an order ω′1 on ρ′s with respect to ω1 on X*. Having a linear order ω2 defined on the set Y, and having placed the set of points ρs in the coordinate system D1 with the axis of abscissae (X*, ω1) and the axis of ordinates (Y, ω2), we obtain the geometrical image γs of the laws of functioning of the initial finite deterministic automaton As = (S, X, Y, δ, λ, s). The linear orders ω1 and ω2 allow the elements of the sets X* and Y to be replaced by their numbers r1(p) and r2(y) under these orders. As a result, two forms of geometrical images are defined: first, as a symbolic structure in the coordinate system D1, and second, as a numerical structure in a coordinate system with integer or real positive semiaxes.
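A minimal sketch of this construction for a small Mealy automaton is given below. The example automaton, the use of the shortlex order as the concrete extension of ω1 to X*, the bound on word length, and the choice of the output signal produced on the last input symbol as the ordinate are all illustrative assumptions for the sketch, not prescriptions from the text.

from itertools import product

# Illustrative initial Mealy automaton A_s = (S, X, Y, delta, lam, s0); not an example from the paper.
X, Y = ['a', 'b'], [0, 1]
delta = {('s0', 'a'): 's1', ('s0', 'b'): 's0', ('s1', 'a'): 's0', ('s1', 'b'): 's1'}
lam   = {('s0', 'a'): 0,    ('s0', 'b'): 1,    ('s1', 'a'): 1,    ('s1', 'b'): 0}

def final_output(word, s='s0'):
    """Output signal produced on the last symbol of the input word applied from state s."""
    y = None
    for x in word:
        y, s = lam[(s, x)], delta[(s, x)]
    return y

# Order omega_1 on X*: here taken as shortlex (shorter words first, ties broken by the order on X).
words = [w for n in range(1, 4) for w in product(X, repeat=n)]
words.sort(key=lambda w: (len(w), [X.index(x) for x in w]))

# Numerical form of the geometrical image: points (r1(p), r2(y)), i.e., the rank of the input word
# under omega_1 and the rank of its final output signal under omega_2 (the order on Y).
gamma = [(i + 1, Y.index(final_output(w)) + 1) for i, w in enumerate(words)]
print(gamma)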
The Method of Recognition of Automatons by Their Geometrical Images
Let automaton A0 be the mathematical model of an operable technical system, and let a family of automatons represent the set I of failures of the technical system. We assume that these automatons are given by the geometrical image γ0 and by a family of geometrical images. In the developed recognition method, the geometrical image γ0 and the family of images lie on an analytically given geometrical curve L0 and on a family of analytically given geometrical curves, respectively. A relation between the point sets of these curves then defines the solution of the control problem using a simple unconditional experiment. Definition 3.1. Let L be a geometrical curve and ∆ a segment of the abscissa axis on which a part of the curve L (or the whole curve L) is defined; this part of the curve is denoted L(∆). Theorem 3.1. Let the initial automaton A0 = (S, X, Y, δ, λ, s0) have a geometrical image γ0 located on the curve L0, and let β be the family of geometrical images of the automatons of the family α, located on the corresponding curves of the family. If the required relation between the point sets of the curves holds and, on a segment ∆ of the abscissa axis, some points of the geometrical image γ0 and of the geometrical images from the family β are defined, then the segment ∆ contains the solution of the problem of recognizing the automaton with respect to the family α by a simple unconditional experiment.
Proof. Let I = {1, 2, …, k}. The system of automatons is represented so that the automatons are recognized by their output sequences on the common input sequence p, i.e. by a simple unconditional experiment.
On the basis of this theorem, a method (with 4 stages) is proposed for recognizing an automaton whose laws of functioning are given by geometrical images located on analytically defined curves.
The Method of Recognition of Automaton in Pair of Automatons by Geometrical Images
The method consists of the following stages. Stage 1: for the automatons A1 = (S1, X, Y, δ1, λ1, s01) and A2 = (S2, X, Y, δ2, λ2, s02), forming an exclusive class and given by their geometrical images. The method is justified by the following theorem. Theorem 4.1. The problem of recognizing an automaton by a simple unconditional experiment in the pair of automatons A1 and A2, where A1 = (S1, X, Y, δ1, λ1, s01), A2 = (S2, X, Y, δ2, λ2, s02) and S1 ∩ S2 = ∅, has a solution if and only if, for some y ∈ Y, the characteristic function φy satisfies the corresponding condition.
Interpolation for Regularization of Laws of Functioning of Automatons
The choice and application of an interpolation method implicitly correspond to the acceptance and realization of a hypothesis: that the interpolation method, applied to the numerical graphic representing a partially specified geometrical image of the automaton, regularizes the points of the geometrical image accurately enough, i.e. regularizes the partially specified laws of functioning of the automaton accurately enough. Therefore, the validity of the results obtained with the chosen interpolation method reduces to a justification of the correctness of this hypothesis. In this section, methods for choosing such a hypothesis (choosing a concrete interpolation method) are investigated and developed for concrete classes of automatons, using as an example the choice of the more accurate of two interpolation methods, Newton's and Lagrange's (the methods of Gauss, Bessel, etc. are analyzed according to a similar scheme). These methods include several stages; in the example considered, Newton's interpolation method was assessed as more accurate than Lagrange's method, with an assessment of 0.14.
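As an illustration of how such a hypothesis could be checked in practice, the hedged sketch below evaluates one candidate interpolation method on a partially specified numerical graphic by measuring its error on held-out known points; the node values, the use of scipy's Lagrange interpolation and the mean-absolute-error criterion are all our own assumptions. The same evaluation would simply be repeated for each candidate method (Newton, Gauss, Bessel, etc.) and the method with the smallest held-out error retained.

import numpy as np
from scipy.interpolate import lagrange

# Known points of a partially specified geometrical image: rank of the input word
# (abscissa) -> rank of the output symbol (ordinate). Values are illustrative.
x_known = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y_known = np.array([2.0, 3.0, 1.0, 4.0, 2.0])

# Held-out points of the same image, used to judge how well the interpolant
# "regularizes" the unspecified part of the laws of functioning.
x_test = np.array([3.0, 6.0])
y_test = np.array([2.0, 3.0])

candidate = lagrange(x_known, y_known)          # fit the candidate interpolant
prediction = candidate(x_test)
mae = np.mean(np.abs(prediction - y_test))      # held-out accuracy of the hypothesis
print("held-out mean absolute error:", mae)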
Problem Statement
A finite deterministic automaton is given by a finite geometrical image of its laws of functioning (on a finite section). It is required to determine upper and lower bounds for the number of states without explicit construction of the next-state and output functions.
Discrete Riv-Functions
The set defined by the first 40 digits of the number π contains more than 10^27 elements.
Estimation of Complexity of Laws and Processes of Functioning of Automatons with Use of Discrete Riv-Functions
In this paper, initial finite deterministic automatons of Mealy type are considered, with the dynamics equations s(t+1) = δ(s(t), x(t)), y(t) = λ(s(t), x(t)).
This work uses the apparatus of geometrical images of laws of functioning of automatons, first proposed by V. A. Tverdohlebov in 1995 and later developed in work [4]. The transformation of phase pictures into geometrical images of the laws of functioning of an automaton, proposed and developed by V. A. Tverdohlebov, made it possible to represent phase pictures by uniform mathematical structures: broken lines with numerical coordinates of points. V. A. Tverdohlebov showed that a sequence of elements from a finite set, combined with a linear order on the set of input words, defines the laws of functioning of a discrete deterministic dynamic system (automaton). He also proposed and developed a method for synthesizing the laws of functioning of an automaton from a given sequence (see, for example, [4,5]). A detailed description of the method of synthesizing an automaton from a sequence, as well as of the method of checking the equivalence of states of an automaton by its geometrical image, is contained in the monograph [4]. We note only the basic points of the method of synthesizing an automaton from a sequence. If the laws of functioning of the automaton are given by a sequence ξ, and the next-state function δ is defined by the rules s0 = sε and δ(sp, x) = spx, then the function δ turns out to be standard for all automatons with the set of input signals X. The specificity of an automaton manifests itself in that, on the infinite set of states, classes of equivalent states are distinguished for each automaton. In synthesizing the laws of functioning of an automaton, the way of regularizing the next-state function δ of the automaton is essential. Various ways of regularizing the next-state function are possible: cyclic regularization, regularization into the initial state, generation of the state by a random generator (from the set of possible states), etc. One of these cases is called cyclic regularization (regularization of type 2). The paper considers specific families of automatons with parameters C1, C2 ∈ N+, C2 − C1 + 1 = l, l ≥ 2, and m ∈ N+, m ≥ 2, and the corresponding family of automatons (see [4]). In a discrete riv-function we single out as many initial parts as there are pairwise distinct broken lines of length m that fit into an l×m rectangle, i.e. l^m parts. We assume that the geometrical image of an automaton from this family requires regularization.
In the case when l^m < τ, the next-state function δ of the automaton for the states with numbers 1, 2, …, l^m is also defined by the standard rule, while the states with numbers l^m+1, l^m+2, … require a separate rule. Definition 6.12 proposes a specific way of regularizing the next-state function, under which every state with number l^m+1, l^m+2, … is necessarily 1-equivalent to some state from the set S\S′ (since in the output table of the automaton, for each column with number l^m+1, l^m+2, … there is an identical column with a number from 1 to l^m). We show that, with this way of regularizing the next-state function δ of the automaton, all states from the set S′ are 2-recognizable and, moreover, any states s and s′ with s ∈ S, s′ ∈ S′ that are 1-equivalent are recognizable states. Since any two states with numbers from 1 to l^m are recognizable, with way 3 of regularizing the next-state function δ of the automaton the property holds for any states s′ and s″ from the set S′ that form a non-equivalent pair, which proves the statement of the theorem. □
Example of Estimation of Complexity
Example 1. Since information on the real law of functioning of a concrete complex discrete dynamic system has a huge dimension, and the law is known only partially, requiring the solution of additional problems of regularizing it into a completely specified law, we illustrate the proposed method of complexity estimation with an example in which the upper and lower bounds of the riv-function are given by known mathematical sequences (Fig. 1).
Conclusion
In this article, on the basis of the apparatus of geometrical images of automatons, methods and algorithms are proposed for the recognition of the laws of functioning of discrete deterministic dynamic systems (automatons) given by automaton mappings placed on analytically defined geometrical curves. The methods are justified by corresponding theorems. The paper also presents models and methods developed for the interpolation of partially specified laws of functioning of automatons given by automaton mappings placed on geometrical curves, using interpolation base points selected on the basis of the selection of autonomous subautomatons.
The paper proposes an estimate of the complexity of the laws of functioning of discrete deterministic dynamic systems (automatons) on the basis of a geometrical representation of the laws and the use of discrete riv-functions. An analysis is carried out of more than 10 million discrete riv-functions formed by fundamental mathematical sequences of length up to 80 symbols taken from the bank [15]. Riv-functions containing more than 20 billion discrete graphs, on which laws of functioning of automatons are synthesized, are also considered. The specificity of all considered riv-functions is determined. As complexity indicators, k_min — the minimum number of states of the reduced automaton in the family — and the corresponding maximum are considered. The number of states of the modeled system is one of the fundamental characteristics used in the design and manufacture of systems. The proposed method of estimating the complexity of the laws of functioning of discrete deterministic automatons can be applied to obtain exact lower and exact upper estimates of the number of states of the minimal automaton solely on the basis of the analysis of the geometrical image of the automaton, without explicit construction of the next-state and output functions of the automaton and without the subsequent minimization, whose practical realization for automatons with a large number of states is essentially complicated even with modern computing systems. The basic parameters used in the proposed method are the length d of the considered initial segment of the geometrical image of the automaton, the number m of input signals of the automaton and the number l of output signals of the automaton. Since the basic criteria for obtaining the estimates in the proposed method are only the ratios of the quantities d, m, l, and no recursive construction procedures are used, the method can be applied for large finite values of d, m, l. The apparatus of geometrical images of automatons, proposed and developed by V. A. Tverdokhlebov (see, for example, [4]), makes it possible to consider geometrical curves and numerical sequences with an automaton interpretation, i.e. as ways of specifying the laws of functioning of automatons. This allows automaton models of discrete deterministic systems to be built without restrictions on the number of states. The proposed method is based on the geometrical representation of the laws of functioning of automatons and gives concrete estimates of the number of states for arbitrarily large values of d, m, l, which can be used in practice when designing systems, for analyzing the number of states of possible variants of realization of a system in order to choose the system with the least number of states.
| 2019-04-22T13:06:27.094Z | 2017-05-31T00:00:00.000 | {
"year": 2017,
"sha1": "3bc8122cba914070b7c4871466e521c7d9ccdaef",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ajmse.20170205.14.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "01c1067f160881e05bfbe9ef08062d1a49251787",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
269125097 | pes2o/s2orc | v3-fos-license | Particle filtering supported probability density estimation of mobility patterns
This paper presents a methodology that aims to enhance the accuracy of probability density estimation in mobility pattern analysis by integrating prior knowledge of system dynamics and contextual information into the particle filter algorithm. The quality of the data used for density estimation is often inadequate due to measurement noise, which significantly influences the distribution of the measurement data. Thus, it is crucial to augment the information content of the input data by incorporating additional sources of information beyond the measured position data. These other sources can include the dynamic model of movement and the spatial model of the environment, which influences motion patterns. To effectively combine the information provided by positional measurements with system and environment models, the particle filter algorithm is employed, which generates discrete probability distributions. By subjecting these discrete distributions to exploratory techniques, it becomes possible to extract more certain information compared to using raw measurement data alone. Consequently, this study proposes a methodology in which probability density estimation is not solely based on raw positional data but rather on probability-weighted samples generated through the particle filter. This approach yields more compact and precise modeling distributions. Specifically, the method is applied to process position measurement data using a nonparametric density estimator known as kernel density estimation. The proposed methodology is thoroughly tested and validated using information-theoretic and probability metrics. The applicability of the methodology is demonstrated through a practical example of mobility pattern analysis based on forklift data in a warehouse environment.
Introduction
Mobility pattern analysis can be a complex problem, particularly when dealing with uncertain and noisy position data. Consequently, various methods and techniques must be employed to address these challenges [1]. This paper addresses the question of how to increase the reliability of exploring the probability distribution patterns of location data.
Raw position data presents inherent challenges when attempting to draw reliable conclusions. On the one hand, data sparsity poses a problem [2], since observations are less likely to cover events that occur infrequently in space and time. On the other hand, insufficient data quality introduces difficulties due to the reliance on sensor measurements for position observations. Sensors can introduce data quality issues such as noise, offset, and outliers [3].
Building a density model based on limited data can lead to the incorporation of incorrect information, distorting the model, and causing a discrepancy between the approximation and the true underlying distribution.However, since positional measurements from a dynamic system form a coherent time series, knowledge of their characteristics can provide additional information that aids in the density estimation process.Although we may not have complete knowledge about the underlying movement process from which position data originate, we often have assumptions about its basic kinematics or dynamics.In many cases, this knowledge is available in the form of mathematical models.For example, vehicle models are used to estimate the state of a car [4].Additionally, awareness of related physical constraints, such as considering joint angle limits in human motion tracking, can be valuable [5].The external environment in which movement occurs also influences the evolution of the process, and knowledge of these factors can be beneficial during mobility analysis.Using such prior knowledge is common in anomaly detection methods, such as considering the maximum speed a vehicle can achieve in a given environment [6] or exclusion zones that restrict vessel entry [7].By combining raw position measurements with the aforementioned background knowledge, it is possible to reduce uncertainty and develop more informative models.
In our work, we propose to incorporate the dynamic model of the system and other contextual information into the probability estimation for the analysis of mobility patterns, in addition to using raw position data.For this purpose, a Bayesian state estimation is employed, which recursively applies Bayesian inference to combine measurements of how the observed process evolves with prior knowledge about the process's characteristics.By combining Monte Carlo methods [8] with Bayesian estimation, the number of samples can be increased, resulting in a more valuable input for density estimation.Fig. 1 shows a framework for exploring spatial probability using available information and the above techniques.
This study was motivated by the need to analyze the measurements obtained from indoor positioning systems. These measurements are utilized to calculate the occurrence probability of objects in different locations to identify the pathways of tracked objects and characterize their typical places of occurrence and mobility patterns. However, the data collected from these systems are often affected by significant noise, making it difficult to extract valuable information. To address this issue, the paper proposes a novel method that takes advantage of the PF algorithm to generate mobility data with higher information content. By incorporating background knowledge on the moving process and environmental constraints, the method improves the accuracy of probability models and reduces uncertainty.
The key contributions of this work can be summarized as follows.
• Introduction of a methodology for processing mobility data, specifically addressing the challenges posed by noisy measurements.
• Application of the PF algorithm to generate probability-weighted position measurement samples that contain richer information. These samples serve as the basis for more accurate estimation of the probability density.
• Incorporation of spatial environment information into the update formula of the PF, allowing the model to account for environmental constraints during the estimation process.
• Adoption of Kernel Density Estimation (KDE) to model the probability distribution of mobility data based on the information-enriched dataset rather than relying solely on raw position data or expected position estimations.
• Demonstration of the applicability of the proposed methodology through a practical example involving the tracking of forklifts using an indoor positioning system in a warehouse setting.
• Evaluation and validation of the effectiveness of the methodology using probability and information-theoretic measures.
The article is structured as follows. Section 2 presents the relevant studies. Section 3 provides a detailed explanation of the proposed methodology. Section 3.1 presents a formal description of the problem, while Section 3.2 discusses the representation of prior knowledge through mathematical models. Subsection 3.3 describes the application of the PF algorithm to generate informative measurement samples, and Subsection 3.4 explains how contextual information is integrated into the estimation process. Section 3.5 focuses on the Kernel Density Estimation technique. Finally, Section 3.6 presents the metrics used to evaluate the proposed methodology.
Section 4 presents the application of the methodology. Subsection 4.1 illustrates the industrial application potential of the methodology through an example in the field of intralogistics, highlighting its capacity to solve real-world problems. Section 4.2 describes the studies conducted that measure the effectiveness of the method and presents the results obtained. The article concludes with a summary of the findings and their implications.
Works related to probability model-based processing of location data
Mobility pattern analysis involves different tasks, such as characterizing specific locations or trajectories [9,10].In trajectory modeling, transitions between sets of different locations are characterized.If semantic information is available, supervised techniques such as graph neural networks [11] or SVM [12] can be used.In the context-independent case, unsupervised techniques such as clustering [13] or frequent item mining [14] approaches can be used for this purpose.In location characterization, one of the tasks is to characterize locations in terms of user activity using semantic information, e.g.prediction of the number of visitors entering a shop [15].In many cases, only raw, noisy position data are available, in which case the task is to identify meaningful locations from them.Clustering techniques [16] are applied for grouping location data with similar characteristics to identifying places where data is concentrated.Probability density estimation methods, such as kernel density estimation (KDE) [17] and Gaussian mixture modeling (GMM) [18], can be utilized to infer the unknown spatial probability density function based on position data distributed according to this function, and to identify places with high occurrence probability of objects.
Mobility patterns encompass the movement of objects on various spatial scales and over time.Tracked objects generally transition between different locations and spend varying durations at each location.As a result, position data tend to concentrate to different extents at specific spatial points, forming regions with varying levels of density.The use of single unimodal parametric distributions, such as the normal distribution, does not capture the multimodality present in the position data distribution and tends to oversmooth local regions [19].A nonparametric estimator, the kernel density estimate (KDE) is suitable for handling data locality [20].KDE is commonly used for the estimation of spatial probability density in the detection of urban mobility patterns [21][22][23] and location prediction [24].In modeling spatial density based on social network location data from people, adaptive KDE outperformed traditional KDE as well as single or mixture Gaussian density modeling [2].On the other hand, when estimating the probability density function of position and velocity from vessel traffic data, both adaptive KDE and GMM yielded comparable performance [25].It should be noted that there is no universally optimal model applicable in all cases, as evidenced by a study in which different techniques have been applied to model the spatial probability of animals with varying effectiveness.Depending on the spatial habits of the animals, KDE was found to be superior in some cases, while simple Gaussian or GMM distributions were more suitable in others [26].
Density estimation techniques are effective in addressing the issue of data sparsity.However, in scenarios involving lowprobability events, simulation of samples becomes necessary.Monte Carlo methods offer a solution by allowing the generation of random samples from a specific probability distribution and the estimation of that distribution based on the generated samples [27].Importance sampling (IS) is a Monte Carlo method designed to sample from low-probability regions of distributions that are typically difficult to sample from.Instead of directly sampling from the target distribution, IS samples from a "proposal distribution" that has sufficient overlap with the target distribution but higher probability density in the region of interest.The samples obtained through IS are then weighted by a factor that depends on the ratio of the target distribution to the proposal distribution at each point, preventing an overrepresentation of the target distribution in that region [28].In a study that focused on estimating the probability distribution of vehicle-pedestrian interactions at intersections, a truncated GMM was used [29].Given that events with adverse outcomes have a lower probability and limited data are available, IS was used to simulate data for high-risk situations.Similarly, when estimating the probability of airplane collisions with limited available data, IS was applied to simulate trajectories that lead to conflicts [30].The task of estimating the "trajectories" of successive states in dynamic systems poses a high-dimensional problem where the basic IS approach becomes inefficient.In such cases, it is more reasonable to estimate the state distributions at each instant of time recursively using IS [31].Sequential importance sampling (SIS) is a technique that approximates the posterior density of states at each time step by applying IS and assuming the Markov property for the dependency between consecutive states [32].It is advantageous to choose the state transition density as the proposal distribution in SIS, not only for its simplicity and generality but also because it allows the utilization of a dynamic model of the process for sample generation.The obtained samples are assigned importance weights based on their likelihood in the measurement density, thus incorporating Bayesian inference.Bayesian inference-based state estimation techniques [33] are the most elaborated and widely employed techniques for integrating raw sensor measurement into the dynamic model [34,35], including the classic Kalman filter, whose principle of operation can also be derived from Bayesian state estimation [36].In addition, fuzzy logic and neural networks can also be used for information fusion [37,38].If several different models are available to describe the dynamics of the movement, then ensemble methods enable the fusion of information from the predictions of different models [39].
This approach enables the generation of informative samples by integrating knowledge of the dynamic process and observations.A modified version of SIS is sequential importance resampling, commonly known as particle filter (PF) [40].PF incorporates a resampling procedure within the recursion to address issues arising from the insufficient overlap between the state transition and measurement densities.PF is widely employed for state estimation, particularly in handling complex, non-linear, and non-Gaussian systems.Localization is a common application of PF in state estimation [41,42].
PF also provides a straightforward approach to taking physical objects in the environment into account when performing localization.By leveraging map information and state transitions, the weights of particles or samples can be assigned values that reflect the probability of the corresponding displacements in the physical environment.For example, the weights of the position particles can be influenced by factors such as proximity to objects [43] or the sequence of road segments [44].Furthermore, the weights can be set to zero or close to zero in cases where certain displacements are not feasible, such as crossing a wall [45][46][47].Incorporating knowledge about both the dynamics of the movement process and the environmental factors that affect movement enables a more accurate localization.However, the utilization of this knowledge goes beyond position estimation in our case; it extends to the generation of additional data.By retaining the particles of the filter, a multitude of information-enriched samples can be generated.Instead of relying solely on raw position measurements or the expected values of these particles, probability density estimation is performed directly on these samples.Consequently, the resulting approximation distributions exhibit reduced uncertainty.
Methodology of probability density estimation on information-enriched data
This research presents a methodology for exploring the underlying unknown probability distribution of position data by incorporating prior knowledge into density estimation, in addition to considering noisy measurements. The PF algorithm is used to generate mobility data with higher content of information leveraging background knowledge. The PF produces multiple samples, called "particles," at each time instant, with each particle weighted on the basis of the probability of representing a possible position. These weights provide additional information that facilitates the construction of more accurate probability density models.
Problem formulation
The proposed method is implemented in a system in which motion sequences of tracked objects are analyzed, focusing specifically on the measured position data. In the estimation of probability density, observed samples y = [y1, ..., yn] of a random variable Y are typically utilized to estimate the probability density function p(y), which represents the likelihood of various outcomes of the random variable Y. Taking into account the dynamic nature of the system, the measurement sequences can be interpreted as time series. Let N denote the number of different measurement sequences, where each sequence represents a coherent sequence of movements of a tracked object and Ti represents the length of the i-th sequence. Therefore, the raw observations consist of the Σi Ti data points of the N sequences taken together.
In the presented method, density estimation is performed using not only raw measurements but also prior knowledge about the dynamic characteristics of the system and available contextual information. This fusion of information is achieved through the application of the particle filter algorithm, which generates a set of simulated measurement samples {y^(1)(t), ..., y^(Np)(t)}, where Np denotes the number of particles. These samples are weighted according to the probabilities that they represent a measurement that aligns with the true state, estimated through the state particles {x^(1)(t), ..., x^(Np)(t)}. The probability-weighted samples are then used to estimate the probability density p(y) of the distribution of Y by using Kernel Density Estimation (KDE). The resulting distribution exhibits reduced uncertainty compared to those based solely on raw measurement sequences, and its properties are evaluated using probabilistic and information-theoretic measures. The methodology is summarized in the flow diagram depicted in Fig. 2.
Modeling prior knowledge about system dynamics
Prior knowledge about the system dynamics is incorporated through a state-space model. Let x(t) denote the state vector of the system at time t. The dynamics of the system is described by the state transition (or process) equation (Equation (1)):

x(t) = f(x(t−1), u(t−1), μ(t−1))     (1)

Here, u(t−1) represents the control input vector, μ(t−1) denotes the process noise vector with appropriate dimensions, and f is the function that describes the system's evolution over time. In general, the state of the system is not directly observable but is instead observed through uncertain measurements [48]. The measurement (or output) equation (Equation (2)) establishes a relationship between the state information and the noisy measurements:

y(t) = h(x(t), ν(t))     (2)

Here, y(t) is the measurement vector at time t, and ν(t) represents the measurement noise vector with appropriate dimensions at that time instant. The function h describes how a measurement can be predicted given the state.
For our specific example of mobility, the internal states of the system are represented by the two-dimensional position coordinates and the corresponding velocity components. The state-transition model is based on kinematic equations of displacement and velocity, with the velocity being perturbed by noise. Additionally, we assume some inertia, which is represented as the control input u(t) = [u1, u2, u3, u4] in the system. This control input has a regularization effect similar to the Ornstein-Uhlenbeck process, where a mean-reverting effect pulls the process variables back to an asymptotic value with a force proportional to the deviation from it; here the asymptotic values are the expected velocity components.
Let the proportional gain be K = ΘΔt, where the parameter Θ adjusts the strength of the regulatory effect. Θ takes a value between 0 and 1: a value of 0 means that no inertial effect applies, while a value of 1 means that the velocity of the object equals the expected velocity when Δt = 1. Its appropriate value always depends on the dynamics of the given movement and can be estimated by the trial-and-error method. This scheme can be interpreted as proportional feedback control for the velocity components, [u3(t), u4(t)] being the gain K multiplied by the deviation of the current velocity from its expected value. The control input for the position components is set to zero.
Hence, writing the state as x(t) = [x1(t), x2(t), x3(t), x4(t)] with positions x1, x2 and velocities x3, x4, the state transition model of Equation (1) takes the concrete form of Equation (3):

x1(t) = x1(t−1) + x3(t−1)Δt
x2(t) = x2(t−1) + x4(t−1)Δt
x3(t) = x3(t−1) + u3(t−1) + μ3(t−1)
x4(t) = x4(t−1) + u4(t−1) + μ4(t−1)     (3)

The measurement vector is a subvector of the state vector. Therefore, the output equation (Equation (2)) can be described as Equation (4):

y(t) = [x1(t), x2(t)] + ν(t)     (4)
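The following minimal Python sketch simulates the assumed transition (Equation (3)) and output (Equation (4)) models; the state layout, the noise scales and the expected-velocity values are placeholders chosen for illustration and are not the authors' settings.

import numpy as np

def transition(x, v_expected, theta=0.5, dt=1.0, sigma_v=0.2, rng=np.random.default_rng(0)):
    # State x = [px, py, vx, vy]; positions advance kinematically, velocities get
    # a mean-reverting control input and additive process noise.
    px, py, vx, vy = x
    gain = theta * dt                                    # proportional gain K = theta * dt
    u3 = gain * (v_expected[0] - vx)
    u4 = gain * (v_expected[1] - vy)
    mu = rng.normal(0.0, sigma_v, size=2)                # process noise on the velocity
    return np.array([px + vx * dt, py + vy * dt, vx + u3 + mu[0], vy + u4 + mu[1]])

def output(x, sigma_meas=0.5, rng=np.random.default_rng(1)):
    # Measured position = true position plus measurement noise.
    return x[:2] + rng.normal(0.0, sigma_meas, size=2)

x = np.array([0.0, 0.0, 1.0, 0.0])
for _ in range(5):
    x = transition(x, v_expected=(1.0, 0.5))
    print(output(x))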
Data generation by particle filter
The state transition model can be represented as a probability distribution (Equation (5)). This representation signifies that the state x(t) is drawn from a distribution of possible current states conditioned on the previous state x(t−1) and the control input u(t−1).

x(t) ∼ p(x(t) | x(t−1), u(t−1))     (5)

Similarly, due to the uncertainty associated with the measurement model, it can also be represented as a probability distribution (Equation (6)). This representation indicates that the measurement at time t is a sample of the distribution of all possible measurements given the state x(t), with a specific probability.

y(t) ∼ p(y(t) | x(t))     (6)

The question arises as to how we can use these probability density functions to estimate the distribution of Y and its density function p(y). The solution is provided by Bayesian state estimation [33], which recursively applies Bayesian inference to combine the measurements with the probability distributions that describe the process's temporal evolution and the observable information about it.
One technique for implementing this recursive estimation is the PF [49]. The PF approximates the probability distributions using a crowd of elementary particles (samples), enabling the handling of non-normal and arbitrarily shaped distributions. As a result of the filter, the discrete probabilistic distribution of the tracked object's state at each time instant in any given sequence is obtained as a set of simulated samples along with their corresponding weights (x^(i)(t), w^(i)(t)). The associated weights w^(i)(t) represent the probability that a particle represents a real state or a possible measurement, since they are proportional to the probability of the measurement. These weights are recursively obtained using the importance sampling technique. The state transition equation (Equation (5)) is chosen as the proposal distribution from which the particles are sampled, and the weights are updated using the following formula, where the new weight is expressed as the weight from the previous instant multiplied by the probability of the measurement (Equation (6)):

w^(i)(t) ∝ w^(i)(t−1) p(y(t) | x^(i)(t))     (7)

These weights need to be normalized:

w^(i)(t) = w^(i)(t) / Σj w^(j)(t)     (8)

To overcome sample degeneracy, which is a limitation of the PF technique, resampling should be applied (the different resampling procedures are well summarized in reference [50]).
Finally, the posterior density at time instant t is approximated as a mass of discrete weighted samples:

p(x(t) | y(1:t)) ≈ Σi w^(i)(t) δD(x(t) − x^(i)(t))     (9)

where δD denotes the Dirac delta. The operation of the filter can be summarized as follows: in the prediction step, the particles are propagated according to the kinematic model (Equation (3)), and then their weights are updated based on the likelihood of the received position measurements in the correction step (Equations (7) and (8)). After updating, the particles are resampled with a probability corresponding to their weights. This process iterates at each time step. More details about the PF technique can be found in reference [51].
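A hedged sketch of one prediction-correction-resampling cycle is given below, reusing the transition function from the previous sketch. The Gaussian form of the measurement likelihood and the multinomial resampling scheme are our own assumptions; the paper leaves both choices open (resampling variants are surveyed in [50]).

import numpy as np

rng = np.random.default_rng(42)

def pf_step(particles, weights, z, v_expected, sigma_meas=0.5):
    # Prediction: propagate every particle through the transition model (Equation (3)).
    particles = np.array([transition(p, v_expected, rng=rng) for p in particles])
    # Correction: multiply weights by the measurement likelihood (Equation (7)), here Gaussian.
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / sigma_meas ** 2)
    weights = weights / np.sum(weights)                  # normalisation (Equation (8))
    # Multinomial resampling to counter sample degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

n_p = 300
particles = np.tile([0.0, 0.0, 1.0, 0.0], (n_p, 1)) + rng.normal(0.0, 0.3, size=(n_p, 4))
weights = np.full(n_p, 1.0 / n_p)
particles, weights = pf_step(particles, weights, z=np.array([1.0, 0.2]), v_expected=(1.0, 0.5))
print(particles[:3], weights[:3])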
We obtain a set of samples with appropriate probability weights as the posterior distribution (Equation (9)) of the object's state.
These "state particles" x^(i)(t) can be transformed into "measurement" or "position particles" y^(i)(t) using the output equation (Equation (4)). These particles serve as representative samples for the overall distribution of the measurements, and their weights w^(i)(t) reflect their probabilities within this distribution p(y).
The filter operates independently on each sequence. Consequently, the cardinality of the resulting enriched dataset is Np · Σi Ti, i.e. the number of particles multiplied by the total number of time steps over all sequences.
Incorporating environmental model in the particle filter
One significant advantage of the PF is its ability to integrate contextual information, particularly with respect to environmental constraints. Real-world environments often contain obstacles that restrict the movement of objects. For example, a vehicle cannot pass through a solid wall or occupy the same space as a storage rack, as such events are physically impossible. Therefore, the movement of particles in the PF should comply with these constraints. An approach to achieving this is to modify the weight update formula to reflect the probability of physical events. Each obstacle corresponds to a specific region in space, which can be represented as a forbidden area. These areas are defined by predefined polygons G1, …, GM with known corner coordinates, and the possible location of the tracked object within these areas is excluded. Consequently, the likelihood that particles fall within these forbidden areas should be set to zero. However, it is not only about assigning zero probability to the object being located in a forbidden area, but also considering the probability of the object being in proximity to that area as low. For example, a forklift does not move directly adjacent to a wall. Therefore, during the particle weighting process, a more gradual transition in probability is needed, which can be achieved by penalizing proximity. To implement this, the particle weights should be multiplied by a probability density that allows distant particles to have higher weights while reducing the weights of particles in close proximity. To do this, the distance di between each particle y^(i) and the closest point of each polygon is calculated during the weight update process. The unnormalized density ŵ^(i), by which the weights are multiplied, is determined by a sigmoid function of the distance di. The sigmoid function offers several beneficial properties. It has a lower limit of 0, and if the curve is shifted in the positive direction, it approaches zero near the origin. In addition, the slope of the curve is low in that domain, allowing a stronger penalty for particles in close proximity to obstacles. On the other hand, the upper limit is 1, so beyond a certain threshold the value of the sigmoid function no longer increases significantly with distance. Consequently, the "distance rewarding" effect becomes negligible when no forbidden area is in close proximity. Mathematically, Equation (7) is modified as follows:

w^(i)(t) ∝ w^(i)(t−1) p(y(t) | x^(i)(t)) ŵ^(i)(t)     (10)

where ŵ is a sigmoid function of the distance:

ŵ^(i)(t) = 1 / (1 + exp(−(a·di − b)))     (11)

Here, a and b are constants that allow the sigmoid function to be compressed and moved along the horizontal axis. With the help of these two parameters, it can be set at what distance and to what degree the spatial limiting effect is applied from the boundary of objects. Their value depends on the context and is subject to expert judgment; nevertheless, a = 1 and b = 0 can be used as default values.
This modification facilitates smoother movement of particles towards permissible areas, ensuring that the uncertainty in position estimation is not represented by a truncated distribution. Additionally, particles can explore their surroundings more effectively, leading to more accurate estimations.
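The hypothetical snippet below shows how such a distance-based factor could be computed for a single particle. The use of the shapely library for point-polygon distances, the polygon coordinates and the sigmoid parameterisation are our own assumptions, not prescriptions of the paper; the returned factor would simply multiply the likelihood term inside the weight update of Equation (10).

import numpy as np
from shapely.geometry import Point, Polygon

def obstacle_factor(particle_xy, forbidden_polygons, a=1.0, b=0.0):
    p = Point(particle_xy[0], particle_xy[1])
    if any(poly.contains(p) for poly in forbidden_polygons):
        return 0.0                                       # physically impossible position
    d = min(poly.distance(p) for poly in forbidden_polygons)
    return 1.0 / (1.0 + np.exp(-(a * d - b)))            # sigmoid of the distance (Equation (11))

rack = Polygon([(2, 0), (3, 0), (3, 5), (2, 5)])         # a storage rack as a forbidden area
for xy in [(2.5, 1.0), (1.9, 1.0), (0.0, 1.0)]:
    print(xy, obstacle_factor(xy, [rack]))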
Utilizing particle filter output for probability density estimation
As discussed in previous sections, the weights assigned to particles in the PF encode valuable information that can contribute to a more accurate estimation of the probability density. Therefore, it is necessary to incorporate these weights when estimating probability densities. In this method, nonparametric kernel density estimation (KDE) is used to estimate the probability density of the weighted data set.
KDE is a nonparametric density estimator that does not rely on assumptions about the underlying parametric density function. Instead, it automatically learns and explores the shape of the underlying density based on the available samples. This makes it well-suited for analyzing data from complex non-parametric distributions. In our case, the particles serve as samples in the estimation process, and their weights must be taken into account.
Let y represent the output of the state space being examined, which is measured and can take multiple possible values. The weighted KDE of this output, based on the weighted samples (y_i, w_i), can be calculated as follows:

p̂(y) = Σi w_i K_h(y − y_i)     (12)

Here, K denotes the kernel function, which plays a smoothing role, and h represents the bandwidth that determines the level of smoothing (K_h being the kernel scaled by h). The KDE process can be explained as follows: each elementary particle is smoothed to a certain extent determined by h, resulting in continuous distributions characterized by the shape of the kernel function K. The weights of the particles emphasize the density of these distributions. Regions with a higher concentration of data points will be covered by more distributions, and the densities will be summed for each point in space. Consequently, regions with a larger number of weighty data points will exhibit higher KDE values.
In this approach, the Gaussian kernel is chosen, with K representing the multivariate normal density function. Using a Gaussian kernel, the density estimate (Equation (12)) can be expressed as:

p̂(y) = Σi w_i N(y; y_i, h² I)     (13)

In this equation, the weighted Gaussian KDE considers the weights of the particles and incorporates them into the density estimation process, resulting in a more accurate representation of the underlying probability density.
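A compact sketch of the weighted Gaussian KDE of Equation (13) in two dimensions is shown below; the sample data, weights and bandwidth are illustrative only.

import numpy as np

def weighted_gaussian_kde(query, samples, weights, h=0.1):
    # Density at each query point: sum_i w_i * N(query; sample_i, h^2 * I) in 2-D.
    diff = query[:, None, :] - samples[None, :, :]       # shape (Q, N, 2)
    sq = np.sum(diff ** 2, axis=2)
    norm = 1.0 / (2.0 * np.pi * h ** 2)                  # 2-D Gaussian normalisation constant
    return np.sum(weights[None, :] * norm * np.exp(-0.5 * sq / h ** 2), axis=1)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=(500, 2))            # stand-ins for position particles
weights = np.full(500, 1.0 / 500)                        # stand-ins for normalised particle weights
query = np.array([[0.0, 0.0], [2.0, 2.0]])
print(weighted_gaussian_kde(query, samples, weights, h=0.3))

A library estimator could be used instead; for instance, scikit-learn's KernelDensity appears to accept per-sample weights through the sample_weight argument of its fit method.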
Evaluation of probability distributions
The purpose of this method is to reduce the uncertainty associated with the modeling distributions. To validate the results, the obtained distributions are compared with those based on raw measurements, assessing their information content. The evaluation of distributions is carried out using a regular rectangular grid that spans the entire distribution space.
The intersections of the grid form a sample set that is independent of the samples used to fit the distributions, and this sample set remains the same for all evaluations. At each point on the grid, the density is estimated using the density function p(y) of the model distributions obtained. Various information-theoretic and probabilistic measures are employed to evaluate the results and compare different distributions. Specifically, Shannon entropy, the log-likelihood function, Jensen-Shannon divergence (JSD), and root mean squared error (RMSE) are utilized.
Shannon's entropy serves as a commonly used measure for quantifying the amount of uncertainty in a probability distribution.
Given the G points y_g of the grid and their estimated probability densities p(y_g) in the modeling distribution P(y), the entropy formula is expressed as:

H(P) = − Σg p(y_g) log p(y_g)     (14)

The log-likelihood function provides a general measure of the goodness of a probabilistic model by summing the log-densities of the evaluation points:

ℓ(P) = Σg log p(y_g)     (15)

JSD is a dissimilarity measure between two distributions, which can be calculated as the difference between the entropy of the mixture of the two distributions and the average of the entropies of the two distributions. For distributions P1(y) and P2(y), the JSD is expressed as:

JSD(P1, P2) = H((P1 + P2)/2) − (H(P1) + H(P2))/2     (16)

RMSE is used to compare distributions based on the differences between outcomes in the different distributions. The RMSE between the distributions P1(y) and P2(y) is expressed as:

RMSE(P1, P2) = sqrt( (1/G) Σg (p1(y_g) − p2(y_g))² )     (17)

These evaluation metrics provide a comprehensive assessment of the obtained distributions, enabling the comparison and validation of different models based on their information content and dissimilarity.
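The small sketch below computes these four measures from density values evaluated on a common grid; it follows our reconstruction of Equations (14)-(17) above and uses toy numbers in place of real KDE outputs.

import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def log_likelihood(p):
    return np.sum(np.log(np.asarray(p, dtype=float) + 1e-300))   # guard against log(0)

def jensen_shannon(p1, p2):
    mix = 0.5 * (np.asarray(p1, dtype=float) + np.asarray(p2, dtype=float))
    return shannon_entropy(mix) - 0.5 * (shannon_entropy(p1) + shannon_entropy(p2))

def rmse(p1, p2):
    return float(np.sqrt(np.mean((np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)) ** 2)))

grid_p1 = np.array([0.10, 0.30, 0.40, 0.20])   # densities of model 1 on the grid (toy values)
grid_p2 = np.array([0.15, 0.25, 0.35, 0.25])   # densities of model 2 on the same grid
print(shannon_entropy(grid_p1), log_likelihood(grid_p1),
      jensen_shannon(grid_p1, grid_p2), rmse(grid_p1, grid_p2))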
Application example: occurrence pattern detection from indoor position data
In this section, we demonstrate the application of the proposed methodology using position data from forklift movement sequences. The method is employed to create a probability heat map of the tracked forklift locations based on the position data obtained from an indoor positioning system.
Analyzing the probability distribution of forklift truck locations in a warehouse environment has significant managerial benefits. By understanding the probability distribution of the positions of the forklift truck, managers can gain insight into the frequency of access to different warehouse areas. They can identify areas that require additional forklift trucks or resources due to high demand or congestion, and pinpoint peak periods of activity or workflow bottlenecks. This knowledge allows efficient resource allocation, optimized scheduling, and improved warehouse layout. In addition, spatial probability information can help identify areas prone to congestion, collision risks, or safety hazards.
To evaluate the effectiveness and reliability of the proposed method, probability densities were estimated from both raw position data and weighted particles using weighted KDE (Equation (13)). Subsequently, the obtained distributions are compared using probabilistic and information-theoretic measures discussed in the previous subsection.
Demonstration of the method's application using forklift data
To demonstrate the applicability of the method, we performed estimations using forklift data collected from a warehouse environment. In this operating environment, forklifts are tracked using an indoor positioning system. Each forklift is equipped with a tracking tag that emits a radio signal at regular intervals. Receiver units with known fixed coordinates receive the signal, allowing the system to determine the vehicle coordinates. The system uses the Time Difference of Arrival-based lateration technique to calculate the position. The system accuracy is approximately 0.5 meters.
Due to the presence of obstacles and signal reflections, there is often no direct line of sight between the tags and receivers, leading to signal reflections and absorption, further reducing the system's accuracy. As a result, the distribution of measurement noise is unknown and asymmetric, justifying the use of a particle filter in our methodology. The information provided by the system includes two-dimensional coordinates and timestamps. Therefore, the data set used for the analysis comprises timestamped 2D coordinate data recorded every 0.5 seconds. The data set includes data from 8 forklifts, spanning a period of 10 hours.
The analysis was performed in a Python environment. Before the analysis, the data were preprocessed to reduce complexity. Specifically, we selected sequences in which the forklift trucks were in motion. The selection process involved dividing the dataset into data sequences based on the velocity calculated from the moving-average-filtered position data. The smoothed velocity was then compared to a threshold, and sequences in which the velocity exceeded the threshold were selected. These selected sequences were further examined, and only those lasting at least 10 seconds were retained. As a result of this preprocessing step, we obtained a dataset consisting of 208 sequences containing a total of 14,123 data points. The warehouse shop floor is divided into lines by storage racks between which the forklifts operate. Utilization of a particular line can be understood as the probability of encountering a forklift within the area defined by the two storage racks. The probability densities calculated for these lines can provide valuable information for management. However, due to the inherent uncertainty in the data, it is not always possible to accurately determine in which row a forklift was located. As a result, a significant portion of the measurements is scattered within the storage rack area, as shown in Fig. 3 (a), which does not correspond to the actual positions of the forklifts.
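An illustrative sketch of this preprocessing step is given below: positions are smoothed with a moving average, a speed estimate is derived from them, and only sufficiently long above-threshold segments are kept. The window size, speed threshold and minimum duration are placeholders, not the values used by the authors.

import numpy as np

def moving_segments(xy, dt=0.5, window=5, speed_thr=0.3, min_len_s=10.0):
    kernel = np.ones(window) / window
    sm = np.column_stack([np.convolve(xy[:, k], kernel, mode='same') for k in range(2)])
    speed = np.linalg.norm(np.diff(sm, axis=0), axis=1) / dt      # smoothed speed estimate
    moving = np.concatenate([[False], speed > speed_thr])
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            if (i - start) * dt >= min_len_s:
                segments.append(xy[start:i])                      # keep segments of at least 10 s
            start = None
    if start is not None and (len(xy) - start) * dt >= min_len_s:
        segments.append(xy[start:])
    return segments

rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(0.0, 0.2, size=(200, 2)), axis=0)    # synthetic noisy trajectory
print([len(s) for s in moving_segments(track)])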
To address this issue, the noisy data were filtered using the PF, with each sequence processed independently. A sample size of Np = 300 particles was used. In addition to the dynamics of movement, information about the environment, specifically the storage racks, was incorporated into the state estimation process. The storage racks were represented as polygons with known corner coordinates. The distance between each particle and each polygon was calculated, and the weights were updated accordingly using the formulas presented in Equations (10) and (11). The computation took 712 seconds on a PC with an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz and 32 GB of RAM.
It is important to note that the filter parameter settings were not tuned for each sequence; rather, the same parameters were used for all sequences. Consequently, the filter did not perform well on some sequences, leading to failures in the estimation process. During such instances, the likelihood of certain particles decreased to such a small value that it effectively became zero, resulting in the computer interpreting them as missing values. This occurred in a total of 31 sequences, and the corresponding 670,200 particles from these sequences were not stored, representing roughly 15% of the generated particles. Therefore, a total of 3,566,700 particles were generated and stored during the filtering processes. KDE was then applied to both the original position data and the stored particles using a bandwidth parameter of 0.1. Figs. 3 (b) and (c) depict the results of this estimation. The particle-based distribution exhibits more characteristic features and displays more uniform density values in the regions where forklifts are present. This distribution provides two valuable pieces of information that are not apparent in distributions based solely on raw data. First, the frequently used lines stand out, indicating areas of higher activity. Second, frequently used locations can be identified, as certain points exhibit a higher probability density compared to their surroundings. These points suggest specific positions within the lines where faster-moving inventory is often placed, resulting in more frequent loading activities. Identifying specific positions for faster-moving inventory in a warehouse offers several advantages. First, it facilitates faster loading activities, as commonly requested items can be retrieved more quickly, streamlining the loading process and expediting order fulfillment. Second, it helps reduce congestion in the warehouse by separating faster-moving inventory from slower-moving stock, promoting smoother movement and minimizing delays.
Furthermore, it is worth noting that the density extends into the forbidden zones at certain points. This occurs because particles tend to concentrate in these locations near the boundaries of the forbidden zones, and the smoothing effect of KDE expands their probability density beyond the boundaries.
It is important to note that the calculation requirements of the algorithm are minimal. The necessary calculation time can be linked to the simulation of the model used, i.e. the calculation complexity primarily stems from the evaluation of the dynamic model. An important aspect is that this simulation must be carried out a number of times corresponding to the number of particles. In the present case, using a simple model, this is not critical, but it is important to emphasize that in the case of critical applications the PF can be parallelized.
Test and validation of the method
To assess the effectiveness of the proposed methodology, a series of tests was conducted comparing the raw-data distributions with the particle-based distributions. KDE was performed on raw data and particles using different bandwidth values ranging from 0.01 to 1 with a step size of 0.04. Lower bandwidth values resulted in less smoothing, allowing the underlying pattern and irregular fluctuations in the original data to be better represented in the resulting model. On the other hand, particle-based density estimation concentrated the center of mass of the particles around the true position, giving less weight to noisy outliers and resulting in a more compact distribution. Increasing the amount of smoothing diminished this advantage above a certain bandwidth value. The uncertainty of all the distributions obtained was quantified using Shannon's entropy (Equation (14)), as shown in Fig. 4 (a). The figure confirms that the particle-based distributions exhibited lower uncertainty in the lower bandwidth ranges.
The log-likelihood (Equation (15)) of the models was also evaluated. Fig. 4 (b) presents the log-likelihood ratio between the distributions based on particles and the distributions based on raw data. The graph reveals conclusions similar to those of the entropy analysis. At low bandwidth values, the particle-based model outperformed the raw-data-based model, resulting in a higher ratio. However, as the bandwidth increased, this advantage diminished. These findings are summarized in Table 1, which demonstrates the ability of the method to mitigate the impact of measurement uncertainty and achieve a more precise estimation of the probability density.
To further examine the reliability of the results, cross-validation was performed. The validation was based on the assumption that if both the noisy measurement data and the particles were randomly split into two sets and the distributions were fitted to each set, the distributions fitted to the raw data would exhibit greater differences due to random noise compared to the distributions fitted to the particles. Density estimations were carried out on both types of partial data sets, and the fitted models were compared using two indicators, JSD (Equation (16)) and RMSE (Equation (17)). The results are summarized in Table 2. In all cases, the RMSE between the distributions fitted to the raw data was at least one order of magnitude larger than the RMSE between the distributions fitted to the particles (Fig. 4 (c)). Similarly, the JSD between the distributions fitted to the raw data was higher compared to the divergence between the particle-based distributions (Fig. 4 (d)). These validation results demonstrate that the proposed method not only provides more accurate density estimates but also ensures greater consistency and reliability.
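The sketch below illustrates the mechanics of this split-and-compare validation, reusing the weighted KDE and metric functions from the earlier sketches: the data are split randomly into two halves, a density is fitted to each half, and the disagreement between the two fits is measured on a common grid. The grid, bandwidth and synthetic data are illustrative assumptions.

import numpy as np

def split_disagreement(points, weights, grid, h=0.1, rng=np.random.default_rng(0)):
    idx = rng.permutation(len(points))
    half = len(points) // 2
    a, b = idx[:half], idx[half:]
    pa = weighted_gaussian_kde(grid, points[a], weights[a] / weights[a].sum(), h)
    pb = weighted_gaussian_kde(grid, points[b], weights[b] / weights[b].sum(), h)
    return rmse(pa, pb), jensen_shannon(pa, pb)

rng = np.random.default_rng(3)
grid = np.stack(np.meshgrid(np.linspace(-3, 3, 20), np.linspace(-3, 3, 20)), axis=-1).reshape(-1, 2)
raw = rng.normal(0.0, 1.0, size=(400, 2))                 # stand-in for noisy raw positions
print(split_disagreement(raw, np.full(400, 1.0 / 400), grid))

Repeating the same computation on the probability-weighted particles and comparing the resulting RMSE and JSD values against those of the raw data reproduces the kind of comparison reported in Table 2.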
Conclusion
The analysis of mobility patterns based on inaccurate position data presents challenges in extracting meaningful information. To address this issue, the proposed methodology takes advantage of knowledge of the movement characteristics and environmental factors to generate informative data, enabling more accurate and reliable probabilistic modeling. By incorporating a particle filter approach, which utilizes the dynamic model of movement and spatial information, the methodology approximates the posterior distribution of the tracked object's state with probability-weighted samples, on which more compact and precise modeling distributions can be built.
Fig. 1 .
Fig. 1.Schematic diagram of a proposed framework to explore the spatial probability of occurrence.The position measurements are combined with other available background knowledge and samples representative of the mobility pattern are simulated.More reliable information can be extracted from the resulting dataset.
Fig. 2. Flowchart illustrating the proposed methodology. In the traditional case, density estimation relies solely on the measurement sequences.
Fig. 3. The raw position measurements (a), the distribution obtained by KDE on the raw position data (b), and the distribution obtained by KDE on particles (c). The red rectangles indicate the areas of the storage racks.
Fig. 4. The result of the validation of the method. Figure part (a) shows the entropy of the distributions as a function of bandwidth. Figure part (b) shows the log-likelihood ratio of the particle- and raw-data-based distributions. Figure part (c) shows the RMSE ratios between distributions based on particle data fragments and raw data fragments. Figure part (d) shows the JS divergence ratios between distributions based on particle data fragments and raw data fragments. | 2024-04-14T15:15:14.145Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "d0526dc80d54f8fdc00befa1c49bedf5c852dd32",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844024054689/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52708c43464800048c616b650da60bca6cb2f165",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": []
} |
238033398 | pes2o/s2orc | v3-fos-license | Epidemiological prospective study on estimating the coexistence of allergic rhinitis and asthma in adult patients
Background: Both allergic rhinitis and asthma are immunoglobulin E-mediated allergies, are triggered by similar allergens, and have inter-related inflammatory and pathophysiological mechanisms. Objective: To evaluate the prevalence of coexistent allergic rhinitis and asthma in patients attending allergy outpatient clinics. Methods: This epidemiological prospective study was carried out during the period January 2019 to March 2020. With the help of a questionnaire, demographic data and baseline clinical features, along with personal and family history of allergy, were noted. Allergic rhinitis and asthma were classified according to the Allergic Rhinitis and its Impact on Asthma guidelines and the Global Initiative for Asthma, respectively. The prevalence of coexistence was estimated from detailed statistical analysis. Results: Data were obtained from 650 patients with a mean age of 41.5 years. Among the 650 patients, asthma was the major complaint of 340, and 310 patients visited for allergic rhinitis. Concomitant allergic rhinitis affected 85% of patients with asthma. The highest prevalence of coexisting rhinitis was found in patients with intermittent asthma (88%) and the lowest in those with severe persistent asthma (34%). The prevalence of the comorbidity of asthma and rhinitis decreased as the age of patients increased. Family history, smoking habits, and association with pets or animals had a significant impact on the coexistence. Conclusion: This study indicated that allergic rhinitis frequently coexists with asthma, although the prevalence of coexistence is lower in older patients and in those with more severe asthma. This study reinforces the need for early diagnosis and guideline-based management of allergic rhinitis in patients with asthma.
Introduction
Allergic rhinitis is one of the most common diseases falling under the category of atopic disorders. Allergic rhinitis patients often sustain loss of productivity and significant morbidity [1] . Its principal symptoms include nasal congestion, rhinorrhea, sneezing, and nasal itching. Different kinds of allergens, such as airborne pollens, molds, dust mites, and animals, may induce allergic rhinitis or trigger symptoms [2] . Skin testing or serum sampling helps to confirm the diagnosis and also guides treatment. Although allergic rhinitis is not a serious illness, it is clinically relevant because it underlies many complications and is a major risk factor for poor asthma control [3] . Asthma is a heterogeneous condition due to chronic inflammation of the lower respiratory tract. Characteristic features of asthma are variable airway obstruction and bronchial hyperresponsiveness. Clinical symptoms are cough, recurrent episodes of wheeze, chest tightness, and shortness of breath [4] . Asthma is often associated with various comorbidities such as gastroesophageal reflux disease, obstructive sleep apnea, hormonal disorders, sinusitis and rhinitis [5] . Various studies of pathophysiological and clinical parameters over the last 50 years have suggested that allergic rhinitis and allergic asthma frequently co-exist [6][7][8] . These apparently separate disorders are manifestations of the same disease expressed to a greater or lesser extent in either the upper or the lower airways. While allergic rhinitis is the atopic disease of the nose, asthma is that of the lungs; both are related to each other and coexist in many patients. The presence of allergic rhinitis is considered an etiological risk factor for both the incidence and the severity of asthma [9] . The coexistence of AR and asthma can lead to a delay in appropriate treatment, as these patients often undergo rounds of consultations with chest physicians, allergy experts, otolaryngologists, general physicians, and pediatricians. Moreover, the treatment focus is mainly on asthma, with less attention given to the associated rhinitis, which often goes unrecognized. This emerging concept has important implications for both the diagnosis and the management of these extremely common and potentially disabling illnesses. About 4% to 11% of the general population has asthma, whereas the prevalence of allergic rhinitis is around 10% to 30% [10] . Between 20% and 50% of patients with allergic rhinitis have asthma, and 30% to 90% of patients with asthma have concomitant rhinitis [11,12] . In India, the burden of allergic diseases has been on a rising trend in terms of prevalence as well as severity. The simultaneous presentation of rhinitis and asthma is independent of the etiology of the disorder. Moreover, allergic rhinitis may be a predisposing risk factor for the development of asthma [13,14] . The frequent coexistence of asthma and rhinitis suggests that the presence and severity of allergic rhinitis should be assessed in every patient with asthma for adequate management of both diseases. There are limited studies on the simultaneous presentation of allergic rhinitis and asthma [15,16] . This study aimed to assess the co-existence of allergic rhinitis in patients with asthma attending allergy outpatient clinics and to examine the inter-relationship between the two disease conditions.
Materials and Methods
Study type
This prospective epidemiological study was carried out from January 2019 to March 2020 with the approval of the ethics committee.
Subjects and demography
A total of 650 patients with a complaint of either allergic rhinitis or asthma were selected for this study. These patients were visiting our outpatient department. After obtaining written informed consent from the participating patients, we provided them with a structured questionnaire. This requested demographic data (age, gender, residence), data on exposure to pets or other animals, smoking, personal and family history of atopy, and clinical features of asthma and rhinitis (frequency and severity of the symptoms, exacerbations, duration of the disease).
Clinical evaluation
The severity of allergic rhinitis was classified according to ARIA [17] and that of asthma according to the Global Initiative for Asthma (GINA) report [18] . Skin prick testing was performed in all patients with a panel of the most relevant aeroallergens in each geographical area. Spirometry was carried out according to the guidelines of the European Respiratory Society for adult patients [19] .
Statistical analysis
We used GraphPad Prism software for the statistical tests performed to evaluate the results. Analysis of variance was used to compare quantitative variables between two or more groups, followed by post-hoc tests to compare several groups together.
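The authors report using GraphPad Prism; purely as an illustration of the same kind of analysis, a one-way ANOVA followed by a Tukey post-hoc comparison could be run in Python as below. The variable `age_by_group`, mapping each patient group to an array of a quantitative variable, is hypothetical and is not taken from the study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# age_by_group: hypothetical dict, e.g. {"intermittent": values1, "mild persistent": values2, ...}
groups = list(age_by_group.values())
f_stat, p_value = f_oneway(*groups)  # one-way ANOVA across the groups

# post-hoc pairwise comparisons (Tukey HSD)
values = np.concatenate(groups)
labels = np.concatenate([[name] * len(vals) for name, vals in age_by_group.items()])
print(pairwise_tukeyhsd(values, labels))
```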
Results
Clinical and demographic data of the 650 patients who participated in the study are shown in Table 1. Concomitant allergic rhinitis affected 85% of the patients with asthma; on the other hand, asthma was found in 75% of the patients with allergic rhinitis (232 of 310 subjects). Classification of asthma severity in these patients according to GINA, and of allergic rhinitis as per ARIA, is shown in Table 2. According to the frequency of symptoms, allergic rhinitis was classified as intermittent in 55% and persistent in 45% of patients. Severity was stratified as mild in 49% and moderate-persistent in 51% (Table 2). We examined the coexistence of allergic rhinitis with asthma. A high prevalence of coexisting rhinitis was found in patients with intermittent asthma (88%), mild persistent asthma (80%), and moderate persistent asthma (73%). A lower prevalence of coexistence was found in patients with severe persistent asthma (34%). This implies that the more severe the asthma, the lower the prevalence of comorbid rhinitis (P<0.05). A significant positive correlation (P<.0001) was found between the severity of rhinitis and asthma.
Fig 1: Type of asthma and prevalence of coexistence with rhinitis.
The prevalence of the comorbidity of asthma and rhinitis decreased as the age of patients increased (P<0.01). Comorbidity was higher in patients aged <18 years (85%). In patients aged 19-40 years the prevalence was 67%, and in those aged 41-60 years it was 51%. The lowest prevalence was found in patients >60 years old (43%). A family history of rhinitis was a significant risk factor for the comorbidity of rhinitis and asthma. Among 267 subjects with a family history of allergic rhinitis, comorbidity was found in 240 patients (90%). Similarly, in smokers (n=421) we found significant comorbidity of asthma and rhinitis in 340 subjects (80%). The presence of pets and animals at home was also found to be significantly (P < 0.005) associated with allergic rhinitis-asthma coexistence. However, we did not find any difference in the comorbidity of asthma and allergic rhinitis between patients from rural and urban areas. Lung function testing (spirometry) was performed and was within normal limits in 90%. No significant differences were observed in the spirometric parameters between patients with asthma alone and those with asthma and rhinitis simultaneously. Out of the total study population, 78 patients (12%) underwent a skin prick test, of whom nearly 49 patients were sensitized to at least one aeroallergen.
Discussion
This study indicated that allergic rhinitis coexists with asthma (85%); in older patients and in those with severe asthma, the prevalence of rhinitis is lower. Other epidemiological studies have reported varying rates of the prevalence of asthma in patients with allergic rhinitis, ranging from 20% to 70% [20][21][22][23] . The prevalence of allergic rhinitis among subjects with asthma also varies in previous studies. In different studies this is reported as 80%-99% [24,25] . In the present study, concomitant allergic rhinitis affected 85% of patients with asthma, and asthma was found in 75% of the patients with allergic rhinitis. Indeed, rhinitis is the first clinical symptom of chronic allergic respiratory disease that consequently progresses to asthma [17] . Asthma and rhinitis can sometimes start simultaneously, or asthma may even precede rhinitis. In one previous study of rhinitis and asthma, 45% developed rhinitis before asthma, 35% developed rhinitis after asthma, and 21% experienced both conditions simultaneously [26] . In the present study, 76% of subjects showed allergic rhinitis before or at the same time as asthma. This is similar to reports by Guerra et al.
(76%) [13] . With the support of these previous studies, we can suggest that a link between allergic rhinitis and asthma does exist, as manifestations of a common inflammatory airway disorder that often occur together during the natural history of the disease. We noted in the present study that the prevalence of rhinitis decreases as age and asthma severity increase. This may be attributed to the fact that rhinitis can subside during the natural history of chronic airway inflammatory disease.
Similar observations were noted in previous studies, which demonstrated improvement or waning of rhinitis as age increases [27,28] . The rate of asthma in allergic rhinitis patients was 66% in subjects aged <60 years and 39% in subjects older than 60 years [24] . Development of asthma is dependent on the severity of rhinitis. People with persistent and severe rhinitis had a risk of developing asthma at least 5 times higher [6] . Allergic sensitization to domestic allergens and aeroallergens, and exposure to trigger factors, appear to be important risk factors in the association between rhinitis and asthma. In this study, the presence of pets and animals at home was found to be two-fold higher in patients with co-existing asthma-rhinitis. Several other studies have also shown a higher frequency of AR-asthma comorbidity in subjects sensitized to pollens and animal dander [29] .
Conclusion
We found a high prevalence of comorbid allergic rhinitis among patients with asthma. A severe form of rhinitis may lead to earlier onset of asthma than in other patients, and in some cases rhinitis precedes or is concomitant with asthma. Such epidemiological studies reinforce the need for early diagnosis and guideline-based management of allergic rhinitis in patients with asthma. | 2021-08-27T17:10:08.547Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a3ceea77d5c929d4ae5ab0e88940bf818b31052e",
"oa_license": null,
"oa_url": "https://www.medicinepaper.net/article/201/3-1-118-891.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6c1b2eedc9d6deccf6c8f93627e95b9e609930c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269446144 | pes2o/s2orc | v3-fos-license | Prevalence of High-Risk HPV Detection and HPV Vaccination in Cervical Cancer Screening During the HPV Vaccination Era at Siriraj Hospital – Thailand’s Largest National Tertiary Referral Center
Objective: To investigate the prevalence of high-risk (HR) human papillomavirus (HPV) detection and HPV vaccination among women undergoing cervical cancer screening during the HPV vaccination era at Siriraj Hospital – Thailand’s largest national tertiary referral center. Methods: This prospective cross-sectional study was conducted at our center’s outpatient gynecology clinic during September-December 2021. Women aged ≥18 years with no previous hysterectomy, no history of preinvasive or invasive cervical cancer, and no current pregnancy who visited for cervical cancer screening were eligible for enrollment. Women with abnormal vaginal discharge/bleeding, and specimens with inadequate cellularity were excluded. We collected sociodemographic data, history of HPV vaccination, cervical cytology results, and high-risk HPV testing results. Reverse transcription polymerase chain reaction was used to determine HPV genotype. Results: A total of 216 women (mean age: 41.7 years (range: 25-65), 75.9% premenopausal) were enrolled. Twenty of 216 (9.3%) women tested positive for HR-HPV, and 15 of 216 (6.9%) women had been previously vaccinated for HPV. The most common HPV genotypes detected were Group B infection (HPV 35/39/51/56/59/66/68) (38.9%), followed by HPV16 (27.78%), Group A infection (HPV 31/33/52/58) (27.8%), and HPV18 (5.56%). No HPV45 infection was detected. The detection rate of cytologic abnormalities was 4.16%. Three-quarters (77.8%) of patients with cytologic abnormalities were HR-HPV positive. Conclusion: Among the 216 women who underwent cervical cancer screening in this study, there was a 9.3% prevalence of HR-HPV infection, and a 6.9% prevalence of HPV vaccination. Among the 15 vaccinated women, 2 tested positive for HPV16 (1 normal cytology, 1 abnormal cytology).
Introduction
Cervical cancer screening is considered the most effective cervical cancer prevention method. The age-standardized incidence of cervical cancer has declined worldwide from 15.2 cases/100,000 females in 2008 to 13.3 cases/100,000 females in 2020, and most new cases were reported from less-developed countries [1,2]. Persistent HR-HPV infection is a leading cause of preinvasive/invasive cervical cancer [3]. The 3 most commonly used screening methods are cytology, the HR-HPV DNA test, and visual inspection with acetic acid (VIA). Of these, VIA is an attractive screening option in low-resource settings [4,5]. HPV vaccination is also regarded as a highly effective cervical cancer prevention strategy [6,7]; however, the cost of the HPV vaccine is high in Thailand and some women are unaware of it, so few women are vaccinated.
Several studies conducted at our center to determine and compare the effectiveness of our center's proprietary Siriraj liquid-based solution for both cytology and HPV DNA testing for cervical cancer screening revealed both its comparative effectiveness and superior cost-effectiveness [8][9][10]. Accordingly, our proprietary solution has been used for both tests since June 2022.
The primary aim of this study was to investigate the prevalence of high-risk HPV detection and HPV vaccination among women undergoing cervical cancer screening during the HPV vaccination era at Siriraj Hospital - Thailand's largest national tertiary referral center. Our secondary objective was to compare cytologic results between those negative for and those positive for HR-HPV, and to determine the proportion of each identified HPV genotype for each cytologic finding.
Materials and Methods
This prospective cross-sectional study was conducted during September-December 2021. Women aged ≥18 years with no previous hysterectomy, no history of preinvasive/invasive cervical cancer, and no current pregnancy who visited for cervical cancer screening were eligible. Women with abnormal vaginal discharge/bleeding, and specimens with inadequate cellularity, were excluded. The study protocol was approved by our center's IRB (COA number Si222/2021), and all patients provided written informed consent.
Cervico-vaginal specimens collected via Ayre spatula and cervical brush were immediately immersed in 10 ml of Siriraj liquid-based solution. Specimens were separated into 2 portions (8 ml for cytology, and 2 ml for the HPV test). For cytology, slides were stained with Papanicolaou stain. The Bethesda System 2014 was used for cytologic interpretation, as follows: negative for intraepithelial lesion or malignancy (NILM), ASCUS, LSIL, HSIL, ASC-H, SCC, AGC, or adenocarcinoma [11].
Patients
A total of 216 women (mean age: 41.7 years (range: 25-65), 75.9% premenopausal) were enrolled.Other baseline sociodemographic characteristics of subjects are shown in Table 1.
History of HPV vaccination
Fifteen of 216 (6.9%) study women had previously received HPV vaccination, and 2 of those 15 women tested positive for HPV16.
Cervical cytology
Cytologic abnormalities were detected in 4.16% of the study women. The cytologic findings are presented in Table 2.
Limitations
The limitations of this study include its single-center design, the fact that we did not report the prevalence of each identified genotype (we grouped some genotypes: Groups A and B), and the fact that we did not collect information specific to sexual experience or the timing of HPV vaccination.
In conclusion, among the 216 women who underwent cervical cancer screening in this study, there was a 9.3% prevalence of HR-HPV infection and a 6.9% prevalence of HPV vaccination. Among the 15 vaccinated women, 2 tested positive for HPV16 (1 normal cytology, 1 abnormal cytology). The most prevalent abnormal cytology in both the HR-HPV-positive and HR-HPV-negative groups was ASCUS. The most prevalent HPV genotypes were HPV16 and Group B.
Table 2.
Cytologic Findings Compared between the Negative for and Positive for HR-HPV Groups, and the Proportion of each Identified HPV Genotype for each Cytologic Finding (N=216) | 2024-04-30T06:17:33.305Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "d87b2087c2fbdc77bdb8872a2047b5087ae7e5e0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eeb1999895dfeebcae8619540fbd5ecd735d1275",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3073751 | pes2o/s2orc | v3-fos-license | Stick-Slip Control in Nanoscale Boundary Lubrication by Surface Wettability
We study the effect of atomic scale surface-lubricant interactions on nanoscale boundary-lubricated friction, by considering two example surfaces - hydrophilic mica and hydrophobic graphene - confining thin layers of water in molecular dynamics simulations. We observe stick-slip dynamics for thin water films confined by mica sheets, involving periodic breaking-reforming transitions of atomic scale capillary water bridges formed around the potassium ions of mica. However, only smooth sliding without stick-slip events is observed for water confined by graphene, as well as for thicker water layers confined by mica. Thus, our results illustrate how atomic scale details affect the wettability of the confining surfaces, and consequently control the presence or absence of stick-slip dynamics in nanoscale friction.
Understanding friction plays a central role in technological applications and phenomena in diverse fields ranging from micromechanical devices to bioengineering [1] and to earthquakes [2]. Given the continuing miniaturization of mechanical devices towards the nanoscale [3], improved understanding of friction and wear could help in reducing energy consumption, improving reliability and extending service life. Indeed, an important part of their design process consists of trying to minimize friction and to eliminate stick-slip dynamics [4].
Stick-slip control in lubricated friction is of particular importance given the vast amount of applications where lubricants are used to reduce the detrimental effects of friction and wear [5]. Examples of mechanisms behind the emergence of stick-slip in boundary-lubricated systems have been numerically demonstrated to include repeated crystallization and shear melting of the thin lubricant film [6], interlayer slips within the ordered solidlike lubricant film, or wall slips at the wall-film interface [7]. Most of the numerical studies of stick-slip in boundary lubrication have focused on coarse-grained or simplified/idealized models [6,8,9], not explicitly considering the atomic scale interactions occurring in real systems. On a coarse-grained scale, a useful classification of the lubricant-surface interactions is given by the wettability of the confining surfaces by the lubricant, with systems displaying a larger contact angle/lower wetting generally exhibiting lower friction. Other approaches to friction control include e.g. applying mechanical oscillations [10,11]. While the effect of wettability on lubricated friction has been studied experimentally in macroscopic [12][13][14][15] and nanoscale [16] systems, and modeled using phenomenological finite-element models [17] and simplified molecular dynamics (MD) simulations of nanopatterned surfaces [18,19], less is known about the underlying atomic scale processes and mechanisms responsible for the presence or absence of stick-slip.
Given the large surface-to-volume ratio in boundary lubrication, the nature of the interaction between the lubricant and the confining surfaces, originating from their atomic composition, should play a crucial role. Thus, we study the interaction of a thin water layer (thickness h around 0.5 nm unless stated otherwise) in MD simulations using full atomic models of two experimentally relevant confining surfaces with different wetting characteristics: crystalline mica, a hydrophilic substrate that strongly adsorbs water [20], and graphene, a hydrophobic surface interacting weakly with water [21], see Fig. 1. We observe stick-slip dynamics for thin water layers confined by mica: each unit cell of mica contains two K+ ions, interacting strongly with the water oxygens via Coulomb interactions, leading to formation of atomic scale capillary bridges next to the K+ ions, connecting the two mica surfaces in the stick state. These bridges break during the subsequent slip event, and reform during the next stick phase, a process that is also visible as the breaking and reforming of interfacial hydrogen bonds between water and mica. This mechanism is different from both the crystallization-shear melting transitions [6] and the interlayer or lubricant-surface slips [7] observed before in simplified models. In contrast, water films confined by hydrophobic graphene, as well as thicker water layers confined by mica, exhibit fundamentally different dynamics with no stick-slip.
To model the confined water film, we consider systems ranging from 200 to 1200 SPC/Fw water molecules [22]. We consider 2M1-muscovite mica with the formula KAl2(Al,Si3)O10(OH)2, with the force field parameters from Ref. [23]. One mica surface consists of 10 × 6 unit cells, and has linear dimensions of Lx = 52.07 Å and Ly = 54.036 Å, see Fig. 1. To create site disorder, mimicking a real mica surface with a random distribution of potassium ions on it, one K+ ion of the pair in each unit cell is removed and subsequently placed on the bottom part of the sheet [24]. The graphene sheets have Lx = 68.063 Å and Ly = 36.841 Å. The Lennard-Jones parameters for carbon are from Ref. [25]. The cutoff radius is rc = 10.0 Å for all potentials. Both sheets are parallel to the xy plane with periodic boundary conditions along the x and y directions. Couette flow is generated by moving the top sheet at a constant velocity V along the x direction. The distance between parallel sheets is allowed to vary, and a constant normal load Fn, giving rise to a pressure P⊥, is applied on the top sheet. The bottom sheet is constrained to move along the x axis, and is attached to a spring of stiffness k/Np = 0.0035 N/m, where Np is the total number of atoms in a sheet. The other end of the spring is connected to a fixed stage. A temperature of T = 295 K is maintained using a Langevin thermostat, applied only in the y direction to avoid streaming bias [26,27]. The equations of motion are solved with the velocity Verlet algorithm implemented in the LAMMPS code [28], with an integration time step of 1 fs. Long-range electrostatic interactions are computed using the particle-particle particle-mesh (PPPM) solver with 10^-5 accuracy. Initially the water molecules are arranged in a simple cubic lattice. The simulations are first run for 100 ps with both surfaces kept fixed, followed by 100 ps during which the top surface is subject to a normal force Fn and is allowed to move vertically. Then, the top surface is driven horizontally with a velocity V for 1 ns to generate the steady state, after which we continue the simulations for approximately 60 ns, recording the observables of interest. Simulation results for 256 water molecules confined by mica sheets for P⊥ = 1 atm and V = 0.1 m/s are shown in Fig. 2. The force per atom on the bottom sheet applied by the spring, Fs/Np, exhibits characteristic stick-slip behavior [Fig. 2(a)]. Fig. 2(b) shows the friction force per sheet atom on the bottom mica plate applied by the water and the top mica plate, Fr/Np, exhibiting similar time-dependence as the spring force, with superimposed high-frequency fluctuations due to the finite temperature. Fig. 2(c) shows the position Z of the center of mass of the top sheet in the z direction. The center of the bottom mica sheet is fixed at z = 4.16 Å. During each slip event, Z increases by roughly 10% [6]. Since formation and breaking of interfacial chemical bonds is known to play a role in friction (see Ref. [29] for an example from rock friction), we show also the time-dependence of the number of hydrogen bonds (i.e. the number of water hydrogens closer than 3 Å from the bottom mica surface) between water and the bottom mica surface in Fig. 2(d): bonds break as the system evolves from stick to the slip state.
For comparison, we also performed MD simulations of water confined by hydrophobic graphene sheets. We varied the number of water molecules from 200 to 1200, the normal loads from P⊥ = 1 to 10 atm, and the driving velocities from V = 0.01 to 0.1 m/s. Fig. 2(e) shows the spring force from simulations of 200 water molecules, P⊥ = 1 atm, and V = 0.1 m/s; similar results are obtained for other P⊥ and V values. We observe a small increase of friction with V for both mica and graphene, see Supplemental Material [30], and Refs. [31,32] for experimental results on mica-confined systems with sliding velocities significantly lower than those reachable in our MD simulations. Our simulations thus demonstrate that the stick-slip behavior does not arise for thin water films confined by graphene. Instead, continuous, smooth sliding with the maximum friction force well below that obtained for mica is observed for all parameter values considered. We also note that the same applies to the mixed system with one graphene and one mica surface: slip is localized at the hydrophobic graphene-water interface, and no stick-slip is observed. This difference between the two kinds of surfaces may be explained by the relatively strong interaction of the potassium ions on the mica surfaces with the oxygen atoms of the water molecules via Coulomb interactions. Thus, the ions could act as "freezing nuclei", with the water molecules gathering around them to form nanoscale capillary water bridges [33,34], connecting the top and bottom surfaces within the stick phase. As the system starts to slip, these bridges would break. The interaction of carbon atoms with oxygen is much weaker, and we expect that no capillary bridges are formed between graphene sheets, explaining the absence of stick-slip dynamics in that case.
To verify this hypothesis, we calculate the density distributions ρ(x, y) of water oxygens in the contact layer relative to the bottom surfaces. Fig. 3(a) shows ρ(x, y) for a water film confined by mica sheets when the system sticks [t = 1 ns in Fig. 2(a)]. Peaks in ρ(x, y) are located at the K+ ions. Fig. 3(b) presents the corresponding ρ(x, y) during the first slip state when t = 5 ns [cf. again Fig. 2(a)]: the peaks of ρ(x, y) become smaller and broader. Finally, Fig. 3(c) shows ρ(x, y) for the subsequent stick state at t = 7 ns [Fig. 2(a)], where we again observe that the peaks are as high and narrow as those of the previous stick state.
To gain more insight into the nucleation and breaking of the capillary bridges between the surfaces, we calculate the density profiles ρ(z) of water oxygens across the gap. When the system is slipping [Fig. 3(d)], ρ(z) exhibits two separate peaks, consistent with breaking of the capillary bridges. In the stick state [Fig. 3(e)], ρ(z) exhibits multiple peaks spanning the gap. This can be understood as the water molecules forming nanoscale capillary bridges between the two mica surfaces. In contrast to this behavior, the density distributions ρ(x, y) of the water film consisting of 200 water molecules confined by graphene sheets in Fig. 4 show that water clusters to form a single, relatively large droplet-like structure between the two graphene sheets, without any apparent signature of breaking-reforming transitions. The corresponding density profiles ρ(z) [30] are similar to previous observations in equilibrium graphene-confined systems [35].
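As a rough illustration of how such density maps can be accumulated from the simulation output, a sketch is given below; the array names, binning choices, and units are assumptions for this illustration and are not taken from the authors' analysis scripts.

```python
import numpy as np

# oxygen_xyz: hypothetical (N, 3) array of water-oxygen coordinates (in Angstrom),
# accumulated over the trajectory frames belonging to a given stick or slip interval
def density_xy(oxygen_xyz, Lx, Ly, bins=100):
    """In-plane density rho(x, y) of contact-layer water oxygens, per unit area."""
    rho, xedges, yedges = np.histogram2d(
        oxygen_xyz[:, 0] % Lx, oxygen_xyz[:, 1] % Ly,
        bins=bins, range=[[0.0, Lx], [0.0, Ly]])
    return rho / (np.diff(xedges)[0] * np.diff(yedges)[0])

def density_z(oxygen_xyz, z_lo, z_hi, bins=200):
    """Density profile rho(z) of water oxygens across the gap between the sheets."""
    counts, zedges = np.histogram(oxygen_xyz[:, 2], bins=bins, range=(z_lo, z_hi))
    z_centers = 0.5 * (zedges[:-1] + zedges[1:])
    return counts / np.diff(zedges)[0], z_centers
```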
Thus, when the two mica surfaces are very close together, the thin confined water film loses its fluidity, and the bulk flow properties of water play little or no role in friction. However, they may be recovered by increasing the thickness of the water layer [36], with the conditions approaching those of hydrodynamic lubrication. To this end, we performed MD simulations with four different, larger thicknesses of the water layer: h = 1.77, 2.03, 2.29, and 2.56 nm, corresponding to 1536, 1792, 2048, and 2304 water molecules, respectively. For these thicker water films, the stick-slip dynamics disappears. Instead, smooth sliding dynamics is observed, which at a first glance looks similar to that in the graphene-confined system. However, subtle differences can still be observed between the two surfaces. Zooming in to the spring force time series [e.g. the one shown in Fig. 2(e)] reveals periodic oscillations corresponding to the eigenfrequencies of the spring-bottom plate mass (M) system, f = 1/(2π) √(k/M). For both surfaces, the varying amplitudes of these oscillations at each period [blue circles in Figs. 5(a) and (b)] form sequences of time-ordered observations X(n) which can be well described by an autoregressive model, with W a white noise term originating from the interaction with the fluctuating lubricant and α a model parameter, both extracted using the R package [37]. For water confined by graphene, we find α ≈ 0.8 and δW ≈ 0.1 pN for all conditions considered, while we find α ≈ 0.1 and δW ≈ 0.3 pN for thick water films (h ≥ 1.77 nm) confined by mica. Accordingly, the autocorrelation function (ACF) of X(n) for mica decays more rapidly to zero than its counterpart for graphene. In both cases the ACFs computed from the simulation data agree with those of the corresponding autoregressive model [see Figs. 5(c) and (d)]. The observation that δW does not significantly depend on h for h ≥ 1.77 nm indicates that the screened mica-water interaction has a sub-nanometer range, resulting essentially in a surface effect of the fluctuations of the water layer. Also, the stronger interaction of mica with the fluctuating lubricant (as compared to that of graphene) results in a factor of three larger δW [see also Figs. 5(e) and (f)].
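The amplitude statistics were extracted in the original work with the R package [37]; as a minimal sketch only, an equivalent first-order autoregressive (AR(1)) analysis could be written in Python as below, assuming `X` holds the extracted oscillation amplitudes.

```python
import numpy as np

def fit_ar1(X):
    """Least-squares estimate of alpha and noise amplitude in X[n+1] ~ alpha*X[n] + W."""
    x0 = X[:-1] - X.mean()
    x1 = X[1:] - X.mean()
    alpha = np.dot(x0, x1) / np.dot(x0, x0)
    dW = np.std(x1 - alpha * x0)
    return alpha, dW

def acf(X, max_lag=20):
    """Normalized autocorrelation function of the amplitude sequence."""
    x = X - X.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c[:max_lag + 1] / c[0]

alpha, dW = fit_ar1(X)
data_acf = acf(X)
# for an AR(1) process the model ACF decays as alpha**tau, which can be
# compared directly with the ACF computed from the simulation data
model_acf = alpha ** np.arange(len(data_acf))
```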
In summary, the presence or absence of breaking-reforming transitions of local capillary bridges in the water film, controlled by the atomic structure and the ensuing wettability (hydrophilic mica vs hydrophobic graphene) of the confining surfaces, plays a crucial role in whether stick-slip dynamics is observed or not. For mica, the decisive role of the K+ ions in the formation of the nanoscale capillary bridges suggests that the microscopic details behind stick-slip dynamics should in general depend on the atomic structure of the system, and it would be interesting to perform similar studies for other confining surfaces with different surface-lubricant interactions. Nevertheless, we expect our main observations to be rather general, and to open up interesting possibilities in controlling nanoscale boundary-lubricated friction by tuning the wettability of the confining surfaces. We acknowledge the financial support by the Academy of Finland through the Centres of Excellence Program (project no. 251748) and via an Academy Research Fellowship (L.L., project no. 268302). We are also grateful to the COST action MP1303. The calculations presented above were performed using computer resources within the Aalto University School of Science "Science-IT" project. We also acknowledge the computational resources provided by CSC (Finland).
FIG. 1. (color online) (a) The geometry of the simulation system. Solid sheets are held together by a constant normal load Fn. The top sheet is moving at a constant velocity V, and the bottom sheet is connected to a fixed stage by a spring of stiffness k. The water molecules are confined by (b) two mica sheets (each of thickness 8.34 Å) or (c) two monolayer graphene sheets. The color code of the atoms is: water oxygen (red), water hydrogen (white), potassium (pink), silicon (yellow), aluminum (blue), mica oxygen (cyan), mica hydrogen (lime), and carbon (gray).
FIG. 2. (color online) Time evolution of (a) the spring force per sheet atom, (b) the friction force per sheet atom on the bottom mica sheet applied by the water and the top mica sheet, (c) the position Z of the center of mass of the top mica sheet in the z direction, and (d) the number of hydrogen bonds between the 256 mica-confined water molecules and the bottom mica sheet. (e) Spring force per sheet atom for 200 water molecules confined by graphene sheets. V = 0.1 m/s and P⊥ = 1 atm in both cases.
FIG. 3. (color online) Contour graphs of the density distribution ρ(x, y) of water oxygens in the contact layer relative to the bottom mica surface for (a) t = 1 ns ("stick"), (b) t = 5 ns ("slip"), and (c) t = 7 ns ("stick"). White corresponds to no water molecules being present. The density profiles across the gap, ρ(z), of water confined by mica sheets when the system (d) slips and (e) is in the stick state. In both (d) and (e), the top surface of the bottom mica sheet is at z = 8.3 Å, while the lower surface of the top mica sheet is at z = 14.7 Å in (d) and at z = 13.4 Å in (e).
FIG. 4. (color online) Contour graphs of the density distribution ρ(x, y) of water oxygens in the contact layer relative to the bottom graphene surface for (a) t = 0 ns, (b) t = 3 ns, (c) t = 5 ns and (d) t = 8 ns. White corresponds to no water molecules being present.
FIG. 5. (color online) Time dependence of Fs/Np during a time period of 0.1 ns (a) for 1200 graphene-confined and (b) for 1536 mica-confined water molecules. Blue circles show the local maxima of the signals, corresponding to the time-varying amplitudes X(n) of the spring force oscillations. The autocorrelation functions (as a function of the lag τ ≡ n − n′) of these amplitudes are given (c) for graphene and (d) for mica, extracted from 10 ns long spring force signals. The solid lines correspond to the simulation results, while the dashed lines show the corresponding ACFs from the autoregressive model. The plots of X[n + 1] vs X[n] extracted from the simulations for (e) graphene and (f) mica further illustrate the different nature of the smooth sliding dynamics for the two kinds of confining surfaces. The slopes of the lines (linear fits) are 0.9 and 0.1 for graphene and mica, respectively. | 2015-02-13T14:00:59.000Z | 2015-02-13T00:00:00.000 | {
"year": 2015,
"sha1": "ab9bbfa4b364993b1446e0b5babc23f7c2d0f8b2",
"oa_license": null,
"oa_url": "https://research.aalto.fi/files/3267165/PhysRevLett.114.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ab9bbfa4b364993b1446e0b5babc23f7c2d0f8b2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Materials Science"
]
} |
236432532 | pes2o/s2orc | v3-fos-license | The prognosis and risk factors of baseline high peritoneal transporters on patients with peritoneal dialysis
Abstract The relationship between baseline high peritoneal solute transport rate (PSTR) and the prognosis of peritoneal dialysis (PD) patients remains unclear. The present study combined clinical data and basic experiments to investigate the impact of baseline PSTR and the underlying molecular mechanisms. A total of 204 incident CAPD patients from four PD centres in Shanghai between 1 January 2014 and 30 September 2020 were grouped based on a peritoneal equilibration test after the first month of dialysis. Analysed with multivariate Cox and logistic regression models, baseline high PSTR was a significant risk factor for technique failure (AHR 5.70; 95% CI 1.581 to 20.548; p = 0.008). Baseline hyperuricemia was an independent predictor of mortality (AHR 1.006; 95% CI 1.003 to 1.008; p < 0.001) and of baseline high PSTR (AOR 1.007; 95% CI 1.003 to 1.012; p = 0.020). Since uric acid was closely related to high PSTR and adverse prognosis, in vitro experiments were performed to explore the underlying mechanisms by which uric acid affects the peritoneum. We found that hyperuricemia induced epithelial-to-mesenchymal transition (EMT) of cultured human peritoneal mesothelial cells by activating the TGF-β1/Smad3 signalling pathway and nuclear transcription factors. Conclusively, high baseline PSTR induced by hyperuricaemia through EMT was an important reason for poor outcomes in CAPD patients.
| INTRODUCTION
Peritoneal dialysis (PD) is now globally used as an effective replacement therapy for patients with end-stage renal disease (ESRD).
Ultrafiltration failure and cardiovascular events are still the main reasons for the mortality and withdrawal of long-term PD patients. 1 The success of PD depends on the integrity of the peritoneal structure and function. It has been demonstrated that the peritoneum may exhibit submesothelial thickening and vasculopathy at the time of PD catheter insertion in patients with ESRD. 2 A previous study has also confirmed that intraperitoneal and systemic inflammation increases during the first year of PD therapy and that inflammation may partly be responsible for the development of a high peritoneal solute transport rate (PSTR). 3 The most common peritoneal functional alteration is impaired ultrafiltration and decreased dialysis efficiency caused by fast PSTR. 4 Ultrafiltration failure is the main limitation of long-term PD treatment. Peritoneal transport function refers to the permeability of the peritoneum to small solutes. PSTR is measured by dialysate-to-plasma (D/P) ratios of low-molecular-weight solutes through the peritoneal equilibration test (PET). A high PSTR leads to rapid re-absorption of glucose and an increase in protein loss, which is followed by fluid overload and malnutrition. 5 Since the transport function of the peritoneal membrane varies widely among individuals, it is more accurate to study it based on different types of fast transporters. There may exist two distinct types of high transporters, the early inherent phenotype and the late acquired type. [6][7][8] The late acquired type, a consequence of continuous exposure to bioincompatible PD solutions, is no longer considered a predictor of poor outcomes with the application of automated peritoneal dialysis (APD) and the icodextrin-based PD solution. 5,[9][10][11] Among PD patients with the early inherent phenotype of baseline high PSTR, several previous studies demonstrated more technique failure and a higher mortality rate. [12][13][14] However, there are also studies showing the opposite results. [15][16][17][18] The discrepancy may be due to differences in study population, regional diversity, and sample size, and the primary disease of chronic kidney disease (CKD) or other chronic illness burden is hard to homogenize. According to these existing research results, we still do not fully understand whether a high baseline PSTR is associated with poor prognosis.
The potential causes of high PSTR might be increased peritoneal capillary perfusion and vascular numbers, both of which may be inherent or acquired. During long-term PD, the peritoneal membrane is continuously exposed to high-dextrose-concentration dialysate, coexisting with other inflammatory stimuli such as peritonitis, and the peritoneal membrane may develop many structural abnormalities, including angiogenesis and submesothelial fibrosis. 19,20 Persistent intraperitoneal inflammation may ultimately lead to fast PSTR and peritoneal fibrosis (PF). 2,[21][22][23] Another study considered that inflammation, along with comorbidity and low serum albumin, was an independent predictor of higher baseline peritoneal permeability. 24 Moreover, Pletinck A. et al. reviewed factors including haemoglobin A1c level, salt intake and genetic polymorphisms which have important effects on the peritoneal membrane and can result in variability of peritoneal function. 25 Therefore, there is no consistent view of the cause of the increased peritoneal permeability.
In terms of molecular mechanism, epithelial-to-mesenchymal transition (EMT) of peritoneal mesothelial cells has been proved to be associated with high peritoneal transport, 26 which occurs even early in CAPD and develops during long-term exposure to high glucose dialysate, mechanical denudation, profibrotic factors such as TGFβ, and inflammatory cytokines. 27 When EMT occurs, the mesothelial cells adopt a more fibrogenic characteristic and acquire a proliferative, migratory and invasive phenotype. 28 As a result, abundant neovascularization and accumulation of extracellular matrix accelerate tissue fibrosis, alter peritoneal transport status and lead to ultrafiltration failure. 27,29 Recently, Mizuiri et al. evaluated the association between PSTR and the expression of effluent markers related to EMT, and they found that effluent hepatocyte growth factor (HGF), vascular endothelial growth factor (VEGF) and interleukin-6 (IL-6) levels were significantly higher in the patients with a high transport rate. 26 Moreover, a review newly published by our team systematically summarizes the angiogenic effect of VEGF. VEGF is associated with a high peritoneal transport rate, while impaired mesothelial cells are the major sources of VEGF in the peritoneum. 30 Since peritoneal function is evaluated only after the start of peritoneal dialysis, there is still a large number of patients with initially high PSTR who undergo PD and whose prognosis is not clear. This study was undertaken to evaluate the subsequent overall survival and technique survival of PD patients grouped by their baseline peritoneal transport status, to analyse independent predictive risk factors of high baseline peritoneal transport status, and to explore its potential pathogenesis through in vitro experiments.
| Study design and participants
The present analysis was a multicentre retrospective cohort study that included all of the incident PD patients, aged between 18 and 75 years, from four PD centres in Shanghai between 1 January 2014 and 30 September 2020. All patients performed CAPD 3 to 5 times a day, and the daily dialysis dose ranged from 6000 to 10,000 ml. All the participants underwent a PET after the first month of PD treatment. The exclusion criteria were as follows: patients who underwent PD treatment for less than 6 months, aged less than 18 years or older than 75 years, patients who suffered from peritonitis within the first 3 months, patients who had malignant tumour, liver cirrhosis, active tuberculosis, or a history of acute myocardial infarction or major surgical trauma within 3 months before starting PD, and patients who initiated PD in other PD centres or previously accepted haemodialysis (HD) or kidney transplantation. These enrolled patients were followed until cessation of PD, death, or 30 September 2020.
| Demographic and Clinical Data
The baseline data collected consist of information about demographic details, clinical and biochemical tests, and a limited range of comorbidities including diabetes mellitus (DM), hypertension and cardiovascular disease (CVD). CVD included previous and present history of congestive heart failure, ischaemic heart disease or cerebrovascular disease. A fasting venous blood sample was collected before the morning exchange. Blood biochemical examination was analysed by standard techniques. Corrected serum calcium (cSCa) was calculated with the following formula: cSCa = serum Ca + 0.8*(4.0 - serum albumin [ALB]) (if serum ALB < 4 g/dl).
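For illustration, the albumin correction stated above amounts to the following small helper, applying the correction only when ALB < 4 g/dl as in the formula (the function name is, of course, arbitrary):

```python
def corrected_serum_calcium(serum_ca, serum_alb_g_dl):
    """Albumin-corrected serum calcium; correction applied only when ALB < 4 g/dl."""
    if serum_alb_g_dl < 4.0:
        return serum_ca + 0.8 * (4.0 - serum_alb_g_dl)
    return serum_ca
```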
| Peritoneal equilibration test
Baseline PET was performed 1 month after the initiation of dialysis.
According to Twardowski, 31 a standard 4-h dwell period was used, with a 2.5% dextrose PD solution for a 2 L volume exchange, after a 2.5% dextrose PD solution had dwelled overnight (8-12 h).
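The text is truncated here before the D/P computation itself; purely as a hedged sketch, a 4-h dialysate-to-plasma creatinine ratio and a Twardowski-style transport classification are typically computed along the following lines. The cut-off values below are illustrative placeholders in the spirit of commonly cited schemes, not values taken from this paper.

```python
def dp_creatinine(dialysate_cr_4h, plasma_cr):
    """4-hour dialysate-to-plasma creatinine ratio from the PET."""
    return dialysate_cr_4h / plasma_cr

def transport_category(dp_cr, cutoffs=(0.50, 0.65, 0.81)):
    """Map the 4-h D/P Cr onto L / LA / HA / H classes; cut-offs are illustrative only."""
    low, low_avg, high = cutoffs
    if dp_cr < low:
        return "L"   # low transporter
    if dp_cr < low_avg:
        return "LA"  # low-average transporter
    if dp_cr < high:
        return "HA"  # high-average transporter
    return "H"       # high transporter
```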
| Outcomes
The endpoint of the study was the patient status (dead or alive) or technique failure, such as transfer to HD, at the termination of the follow-up period (30 September 2020).
A reactive oxygen species (ROS) assay kit was purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). Peritoneal dialysis fluid (2.5%) was purchased from Baxter Healthcare (Guangzhou, China). In order to confirm the additive effect of uric acid and high glucose (HG) in HPMCs, uric acid (800 µM) and 2.5% HG peritoneal dialysis fluid were used singly or in combination for 36 h before cell harvesting. All of the in vitro experiments were repeated at least three times.
| CCK-8 proliferation assay
The CCK-8 proliferation kit was used according to the manufacturer's instructions. HPMCs were starved for 24 h in DMEM/F12 containing 0.5% FBS and then exposed to uric acid at different doses.
| Wound-healing assay
HPMCs were seeded into a 6-well plate and allowed to reach 90% confluence. A scratch wound was created on the cell surface using a micropipette tip. Then, the cells were washed with PBS three times and incubated in serum-free DMEM/F12 with uric acid (800 µM).
| Reactive oxygen species assay
ROS levels were examined with a ROS assay kit that uses DCFH-DA as the probe. HPMCs were grown in 6-well plates and were incubated in serum-free media containing DCFH-DA (10 μM), according to the treatment groups, for 60 min at 37°C and 5% CO2 in the dark. After washing with PBS three times, DCFH-DA-positive cells were assessed using immunofluorescence photography.
| Immunoblot analysis
Cell lysates were collected from each group. Immunoblot analysis was conducted as described previously. 32 The densitometry analysis of immunoblot results was conducted by using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
| Immunofluorescence staining
Immunofluorescence staining was carried out according to the procedure described in our previous study. 33 HPMCs from different treatment groups were immobilized and incubated with primary antibodies against α-SMA or E-cadherin, and then with Texas Red- or FITC-labelled secondary antibodies (Invitrogen).
| Statistical analysis
Results are expressed as mean ± SD for continuous data and as frequencies (n) and percentages (%) for categorical data. Data distribution normality was evaluated by the Kolmogorov-Smirnov test.
A comparison among the different peritoneal transport types was performed by analysis of variance (ANOVA, parametric distribution) or the Kruskal-Wallis test (non-parametric distribution). Kaplan-Meier survival curves were drawn for each event of interest (patient survival and technique survival), and the log-rank test was used to compare curves. Univariate and multivariate Cox proportional hazards models were used to identify significant risk factors associated with the study outcomes. Data were censored at the time of renal transplantation, 30 September 2020, or transfer to haemodialysis for the overall survival analyses, whereas the death-censored technique analyses were censored at the time of renal transplantation, death, or 30 September 2020. Multivariate logistic regression modelling was used for the analysis of risk factors associated with baseline high PSTR. A two-tailed p value <0.05 was considered statistically significant. In the multivariate models, the adjustment variables included peritoneal transport category; age; gender; smoking status; BMI category; weekly residual renal Kt/V; and the presence or absence of hypertension, CVD and diabetes. The laboratory covariates included ALB; C-reactive protein (CRP); cSCa; cardiac troponin (cTnT); haemoglobin (Hb); glycosylated haemoglobin (HbA1c); phosphate (P); parathyroid hormone (PTH); serum creatinine (Scr); total cholesterol (TC); triglyceride (TG); UA; and 4-h D/P Cr.
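The analyses described above were run in SPSS; purely as an illustration of the same modelling steps, a Python sketch using the lifelines and statsmodels packages could look as follows. The DataFrame `df`, its column names, and the covariate list are hypothetical and are not taken from the study.

```python
import statsmodels.api as sm
from lifelines import CoxPHFitter, KaplanMeierFitter

# df: hypothetical pandas DataFrame with one row per patient, containing
#   'time'      - censored follow-up time, censored as described above
#   'event'     - 1 if the event of interest occurred (death or technique failure), 0 otherwise
#   'high_pstr' - 1 for baseline high transporters, 0 otherwise
#   plus the adjustment covariates (age, albumin, uric acid, 4-h D/P Cr, ...)
kmf = KaplanMeierFitter()
kmf.fit(df["time"], event_observed=df["event"])       # Kaplan-Meier curve for one group

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")   # multivariate Cox model
cph.print_summary()                                   # adjusted hazard ratios with 95% CIs

covariates = ["age", "albumin", "uric_acid"]          # hypothetical subset of predictors
logit = sm.Logit(df["high_pstr"], sm.add_constant(df[covariates])).fit()
print(logit.summary())                                # exponentiate coefficients for odds ratios
```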
All the in vitro experiments were conducted at least three times.
Data depicted in graphs represent the means ± SEM for each group.
Intergroup comparison was made using one-way analysis of variance. Multiple means were compared using Tukey's test. The differences between two groups were determined by Student's t test.
Statistically significant differences between mean values are marked in each graph. p < 0.05 was considered significant.
The statistical analyses were conducted using IBM SPSS Statistics 20.0 (IBM Corp.). Baseline demographic and clinical characteristics of the study patients are summarized in Table 1.
| Outcomes of CAPD patients
The total average survival time of the study patients was
| Predictors of Baseline Peritoneal Transport Status
In order to explore the possible reasons for the baseline high PSTR, we performed multivariate logistic regression analyses (Figure 3). Otherwise, peritoneal permeability was not associated with the other clinical characteristics, which are shown in Table 5.
| Uric acid induces EMT of cultured human peritoneal mesothelial cells in a dose-dependent manner
The present results indicated that baseline uric acid levels were clearly associated with both all-cause death and high PSTR. Further results indicated that uric acid had an additive effect on EMT with 2.5% HG peritoneal dialysis fluid. Expression levels of α-SMA, collagen I and vimentin were higher with the combined use of UA and HG than with their separate use (Supplemental Figure S1). Moreover, 800 μM uric acid could increase ROS production according to the DCFH-DA immunofluorescence (Supplemental Figure S2), suggesting that UA contributed to oxidative stress in HPMCs.
| Uric acid induces EMT of cultured human peritoneal mesothelial cells in a time-dependent manner
Moreover, we also explored the level of EMT marker expression in HPMCs stimulated with 800 μM uric acid for different periods of time. As shown in Figure 5, EMT marker expression increased in a time-dependent manner.
| Uric acid facilitates the proliferation and migration of peritoneal mesothelial cells
To investigate whether uric acid is also involved in the proliferation and migration of peritoneal mesothelial cells, CCK-8 and wound-healing assays were performed.
| The uric acid concentration in dialysate
Finally, in order to clarify the difference between the uric acid concentrations in PD solution and serum, forty-three PD effluent samples were collected after 4 h of dwelling in the abdominal cavity when the first PET was performed. We measured the concentration of uric acid in the PD effluent and found that it was lower in the PD effluent than in serum (p < 0.001). Compared with patients in the low-average group, the concentration of uric acid in the PD effluent was higher in high transporters (Supplemental Figure S3).
| DISCUSSION
This study retrospectively evaluated a cohort of patients whose initial peritoneal membrane permeability was analysed as a categorical variable (H, HA, LA and L transport status). In our research, transport classes were significantly associated with death-censored technique survival. Compared with the LA group, baseline H transporters had a more than 5 times higher risk of technique failure (p = 0.008). SUA was significantly associated with baseline high PSTR (p = 0.020) and independently predicted mortality (p < 0.001). More studies have confirmed that high transport of peritoneal solutes is one of the risk factors for poor prognosis. 37 Our findings were broadly consistent with those of previous studies. The suggested mechanism of adverse outcomes mainly concerns peritoneal ultrafiltration failure and fluid overload due to rapid glucose absorption, followed by a reduction in the osmotic gradient, and abundant loss of protein in the dialysate leading to malnutrition. [38][39][40] It was also apparent in the present study that high transporters fared worse. 44 In the publication from the ANZDATA Registry, poor prognosis was associated with fast transporters only on CAPD but not on APD. 13 Another larger study of 4128 patients showed that APD leads to better survival of fast transporters. 45 On the other hand, with the utilization of biocompatible, neutral-pH, low-GDP dialysate, peritoneal ultrafiltration may be increased and the function of the peritoneum can also be preserved. 11,46 For patients who fail to achieve ultrafiltration goals with CAPD, APD implementation in combination with icodextrin dialysate could prolong technique survival. 47 In our present study, even after adjustment for nutrition and inflammation factors such as serum albumin, CRP, concentrations of creatinine, phosphorus and body mass index, hyperuricemia was still an independent predictor of all-cause mortality. To our knowledge, UA is the metabolic end product of nucleic acid purines and is mainly eliminated via renal excretion. Hyperuricaemia is very common in CKD patients due to the decreased ability of the kidneys to remove uric acid. In individuals at high risk for cardiovascular disease, although uric acid has been proven to be a potent radical scavenger and antioxidant, 48
ACKNOWLEDGEMENT
We acknowledge and appreciate our colleagues for their valuable efforts and comments on this paper.
CONFLICT OF INTEREST
No conflicts of interest, financial or otherwise, are declared by the author(s).
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2021-07-27T06:23:23.791Z | 2021-07-26T00:00:00.000 | {
"year": 2021,
"sha1": "e4f6f4036d55f26ad6e01cf8a0708b8bebc09769",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.16819",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "100cec9f9f6bca9674b06cba9f1a6473b636ea35",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254487288 | pes2o/s2orc | v3-fos-license | Altered neurovascular coupling in children with idiopathic generalized epilepsy
Abstract Aims Alterations in neuronal activity and cerebral hemodynamics have been reported in idiopathic generalized epilepsy (IGE) patients, possibly resulting in neurovascular decoupling; however, no neuroimaging evidence confirmed this disruption. This study aimed to investigate the possible presence of neurovascular decoupling and its clinical implications in childhood IGE using resting‐state fMRI and arterial spin labeling imaging. Methods IGE patients and healthy participants underwent resting‐state fMRI and arterial spin labeling imaging to calculate degree centrality (DC) and cerebral blood flow (CBF), respectively. Across‐voxel CBF‐DC correlations were analyzed to evaluate the neurovascular coupling within the whole gray matter, and the regional coupling of brain region was assessed with the CBF/DC ratio. Results The study included 26 children with IGE and 35 sex‐ and age‐matched healthy controls (HCs). Compared with the HCs, the IGE group presented lower across‐voxel CBF‐DC correlations, higher CBF/DC ratio in the right posterior cingulate cortex/precuneus, middle frontal gyrus, and medial frontal gyrus (MFG), and lower ratio in the left inferior frontal gyrus. The increased CBF/DC ratio in the right MFG was correlated with lower performance intelligence quotient scores in the IGE group. Conclusion Children with IGE present altered neurovascular coupling, associated with lower performance intelligence quotient scores. The study shed a new insight into the pathophysiology of epilepsy and provided potential imaging biomarkers of cognitive performances in children with IGE.
| INTRODUCTION
Epilepsy is one of the most common central nervous system diseases, affecting an estimated 70 million patients worldwide; it causes a significant psychological and economic burden to patients, their families, and society. [1][2][3] In particular, pediatric epilepsy patients account for approximately 10.5 million cases, 3,4 and have different clinical manifestations and treatment modalities from adults.
Importantly, the frequent seizures may affect the cognitive performance and physical growth of these children, possibly resulting in low intelligence. 5 Thus, early diagnosis and identification of novel imaging biomarkers of cognitive impairment are critical to improve the prognosis of children with epilepsy.
Idiopathic generalized epilepsy (IGE) is a common subtype of pediatric epilepsy; early diagnosis of IGE is challenging because the etiology is unknown and no sensitive and specific biomarkers have been identified. Magnetic resonance imaging (MRI) is a reliable and noninvasive technique to study the pathological mechanisms of epilepsy in the human brain, analyzing the brain structure and function in an omnidirectional and multi-angular manner. 6 Seizure activity in IGE patients may affect the cerebral blood flow (CBF) and metabolism, and neuroimaging evidence of neuronal injury and cerebral perfusion changes in IGE has been provided by resting-state functional MRI (rs-fMRI) studies. [7][8][9][10][11] However, these studies on IGE mainly used a single imaging modality, and this method cannot comprehensively and sensitively reflect the altered regional CBF and neuronal activity caused by seizures. 10,12 In contrast, combining rs-fMRI and arterial spin labeling (ASL) imaging can promptly and comprehensively detect altered neurovascular coupling (NVC), reflecting the relationship between changes in neural activity and in regional CBF. [13][14][15] NVC is a mechanism of the neurovascular unit (NVU) that regulates CBF to meet the energy demands of the neuronal activity and maintain balance. 16 The metabolic demand increases during seizures, and the CBF has an insufficient response to meet this demand. 17 These alterations ultimately result in impaired NVC in epilepsy patients. Several studies have identified NVC alterations in schizophrenia, 18 neuromyelitis optica, 19 primary open-angle glaucoma, 20 and type 2 diabetes, 21 indicating a possible neuropathological mechanism causing brain dysfunction. These studies have shown that investigating NVC by multimodal MRI may provide novel insights on the pathophysiology of neurological diseases. [18][19][20][21] Furthermore, basic research has confirmed that the blood-brain barrier is damaged in mammals with seizure. 22 Recurrent epileptic seizures result in neuronal injury, blood-brain barrier impairment, and gliosis, which alter the integrity of the NVU and eventually may lead to NVC impairment. [22][23][24][25] However, to the best of our knowledge, specific imaging biomarkers and neuroimaging evidence of NVC alterations in children with IGE are still lacking. Thus, the current study aimed to evaluate the neurovascular decoupling and its clinical significance in childhood IGE combining rs-fMRI and ASL and provide a new perspective to understand the neuropathological mechanisms of this disease.
| Neuropsychological assessment
On the same day of the MRI scan, all IGE children underwent standardized neuropsychological assessments, performed by an experienced neuropsychologist. The Wechsler Intelligence Scale for Chinese Children Revised Version is an individually administered instrument for assessing the cognitive performance of children between the ages of 6 and 16 years, widely used in China. 27,28 Neuropsychological assessments were conducted in a quiet illuminated room, with only the participant and neuropsychologist present. This assessment can measure three intelligence quotient (IQ) variables in each subject.
| Acquisition of MRI data
After the neuropsychological assessment, the children were allowed a rest period before the imaging scans. Then, all participants underwent MRI scans using a 3.0-T magnetic resonance scanner (GE Healthcare, Chicago, IL, USA). A three-dimensional brain volume sequence (3D BRAVO) was applied to obtain structural T1-weighted images.
The DICOM images were converted into the NIfTI format. The first 10 volumes were eliminated to stabilize the signal of the images, and slice-timing correction and realignment were performed on the remaining 200 volumes. The head motion was corrected with the Friston 24-parameter model. Linear drift corrections were applied, and the interference of white matter, cerebrospinal fluid, and movement signals was removed using regression. Then, the functional images were spatially normalized using a standard template in the Montreal Neurological Institute (MNI) space. The frequency range (0.01-0.1 Hz) was retained using a temporal bandpass filter. Finally, the functional images were resampled into 3-mm cubic voxels for further analysis.
| CBF data preprocessing
The CBF maps were obtained from the ASL sequence, and the SPM12 toolbox (http://www.fil.ion.ucl.ac.uk/spm) in MATLAB was chosen for the preprocessing of these images. First, the CBF map of each participant was spatially normalized into the MNI space using the one-step registration method, with voxel resampling to 3 × 3 × 3 mm.
Then, further transformations were performed to standardize the maps, converting the CBF map of each participant into z-scores with the following method: within the whole-brain mask, the voxel-wise z-scores were calculated from each voxel's CBF value minus the mean CBF value of all voxels, divided by the standard deviation of all CBF values. Finally, the standardized CBF maps were smoothed using a 6-mm full-width at half-maximum (FWHM) kernel.
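A minimal sketch of this voxel-wise z-scoring step, assuming the spatially normalized CBF map and a whole-brain mask are available as NIfTI files (the file names below are illustrative, not those used in the study):

```python
import numpy as np
import nibabel as nib  # assumed here for reading/writing NIfTI images

# Illustrative inputs: normalized CBF map and a whole-brain mask in MNI space
cbf_img = nib.load("wCBF_subject01.nii")
mask = nib.load("brainmask_mni_3mm.nii").get_fdata() > 0
cbf = cbf_img.get_fdata()

# z-score each voxel against the mean and SD of all in-mask CBF values
z_cbf = np.zeros_like(cbf)
z_cbf[mask] = (cbf[mask] - cbf[mask].mean()) / cbf[mask].std()

# Save the standardized map; smoothing with a 6-mm FWHM kernel would follow
nib.save(nib.Nifti1Image(z_cbf, cbf_img.affine), "zCBF_subject01.nii")
```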
| Calculation of degree centrality
Degree centrality (DC) represents the number of edge connections between a voxel and all the other voxels in the brain. 30 In the present study, Pearson's correlation coefficients were calculated using the BOLD time series between pairs of voxels within the whole-brain mask, removing the global signal. Given the ambiguous interpretation of negative correlations after global signal removal, and to remove weak correlations, we conservatively restricted the DC analyses to positive correlations with r ≥ 0.2. The DC calculations were performed using DPARSFA (http://rfmri.org/DPARSF). The DC maps were then standardized into z-scores, and smoothed with a 6-mm FWHM kernel.
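The study computed DC in DPARSFA; the sketch below only illustrates the underlying calculation (binary degree centrality with a correlation threshold of r ≥ 0.2) on an in-mask BOLD time-series array, with synthetic data standing in for real voxel time series:

```python
import numpy as np

def degree_centrality(ts, r_thresh=0.2):
    """Binary degree centrality from a (n_timepoints, n_voxels) BOLD array."""
    # z-score each voxel's time series so the correlation matrix is a dot product
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    corr = z.T @ z / ts.shape[0]           # voxel-by-voxel Pearson correlations
    np.fill_diagonal(corr, 0.0)            # ignore self-connections
    return (corr >= r_thresh).sum(axis=1)  # count of suprathreshold connections

# Synthetic example; real whole-brain data (~50k voxels) would be processed in blocks
rng = np.random.default_rng(0)
dc = degree_centrality(rng.standard_normal((200, 500)))
```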
| Across-voxel CBF-DC coupling analysis
In each participant, across-voxel correlation analyses were performed between DC and CBF in the whole gray matter (GM) to quantitatively evaluate the global NVC. The Pearson correlation coefficients between the CBF and DC values of all voxels within the GM mask were calculated for each participant. The coefficients from the two groups were tested for normality and variance homogeneity. Finally, two-tailed, unpaired two-sample t-tests were used to compare the HC and IGE groups if the values were normally distributed, and Mann-Whitney tests otherwise. p values <0.05 were considered statistically significant.
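A sketch of the per-subject coupling coefficient and the subsequent group comparison, assuming the in-mask CBF and DC values have already been extracted as 1-D arrays (SciPy's Shapiro-Wilk test is used for the normality check purely for illustration):

```python
from scipy import stats

def global_coupling(cbf_voxels, dc_voxels):
    """Across-voxel Pearson correlation between CBF and DC within the GM mask."""
    return stats.pearsonr(cbf_voxels, dc_voxels)[0]

def compare_coupling(r_hc, r_ige, alpha=0.05):
    """Unpaired t-test if both groups look normal, Mann-Whitney U otherwise."""
    normal = (stats.shapiro(r_hc).pvalue > alpha and
              stats.shapiro(r_ige).pvalue > alpha)
    if normal:
        return stats.ttest_ind(r_hc, r_ige)
    return stats.mannwhitneyu(r_hc, r_ige, alternative="two-sided")
```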
| Voxel-Wise analysis of CBF/DC ratio, CBF, and DC
For all voxels, we calculated the CBF/DC ratio using the original values of CBF and DC, without z-transformations, to represent the regional NVC. For each subject, the CBF/DC ratio map was then standardized into z-scores, and smoothed using a 6-mm FWHM kernel. We performed voxel-wise comparisons to identify significant intergroup differences in the CBF/DC ratio. Intergroup differences in CBF and DC were also analyzed to determine the causes of the differences in the CBF/DC ratio.
| Group differences in clinicodemographic variables
The normality of the distribution of age and years of education was assessed with the Kolmogorov-Smirnov test. Normally distributed data were analyzed with the unpaired two-sample t-test; otherwise, the Mann-Whitney U-test was used. Age and years of education were normally distributed and analyzed with unpaired two-sample t-tests. The Chi-squared test was performed to analyze the intergroup difference in sex distribution. SPSS version 17.0 (IBM, Armonk, NY, USA) was used to perform these statistical analyses, and the significance level was set as p < 0.05.
| Voxel-wise intergroup comparison in CBF, DC, and CBF/DC ratio maps
The differences in CBF, DC, and CBF/DC ratio maps were analyzed using a voxel-wise two-sample t-test in the DPABI software V5.0 (http://rfmri.org/DPABI), based on the MATLAB R2018b platform, adjusting for the covariates of age, sex, and years of education. For the resulting CBF, DC, and CBF/DC ratio maps, the Gaussian random field (GRF) method (p < 0.05) was selected to correct for multiple comparisons.
| Group difference in global NVC and correlation analysis
The CBF-DC correlation coefficients of the two groups were tested for normality using the Kolmogorov-Smirnov test; the unpaired two-sample t-test was used for data with normal distribution, and the Mann-Whitney U-test otherwise. The mean value of each cluster with significant between-group differences in CBF, DC, and CBF/DC ratio was extracted and correlated with the clinical variables using Pearson correlation. SPSS (Version 17.0) was used to perform these statistical analyses, and the significance level was set as p < 0.05.
| Validation analysis
We validated our results considering four potential influencing factors: gray matter volume (GMV) changes, demographic factors (sex, age, and years of education), different DC correlation thresholds (i.e., 0.15 and 0.25), and the influence of antiepileptic drugs. [30][31][32][33] The analyses were repeated in all the resulting statistical CBF, DC, and CBF/DC ratio maps and corrected using the GRF method (p < 0.05).
In addition, we replaced the DC with the fractional amplitude of low frequency fluctuations (fALFF) to assess the neuronal activity. All preprocessing steps were performed using the same parameters as in the DC preprocessing. Then, the voxel-wise CBF/fALFF ratio was calculated and compared between the two groups, controlling for the effects of sex, age, and years of education. The GRF method (p < 0.05) was used to correct for multiple comparisons. We also performed the spatial normalization for all children using a pediatric brain template 34 to validate our results; the processing workflow using the pediatric brain template is provided in the Supplementary Materials.
| Participants
The study finally included 61 right-handed subjects (26 children with IGE and 35 HCs); their demographic and clinical characteristics are summarized in Table 1.
| Spatial distributions of CBF, DC and the CBF/DC ratio
Despite subtle differences, the spatial distributions of CBF, DC, and CBF/DC ratios in the IGE group were similar to those in the HCs group ( Figure S1). A higher CBF was primarily noted in the IGE group in the posterior cingulate cortex, parietal cortex, visual cortex, and lateral temporal cortex. In contrast, a lower DC emerged mostly in the posterior cingulate cortex, precuneus, inferior frontal cortex, and lateral temporal and parietal cortex. Higher CBF/DC ratios were identified in the medial prefrontal cortex, inferior temporal gyrus, posterior cingulate cortex, and precuneus.
| Changes in whole gray matter CBF-DC coupling
All subjects exhibited significant across-voxel spatial correlations between CBF and DC; two representative correlation maps (one from each group) are presented in Figure 1A. At the group level, the global across-voxel CBF-DC correlation was significantly lower in the IGE group than in the HCs group (Figure 1B).
| Changes in CBF, DC, and CBF/DC ratio
The IGE group showed significantly higher CBF/DC ratio in the right medial frontal gyrus (MFG), posterior cingulate cortex/precuneus, and middle frontal gyrus, with significantly lower CBF/DC ratio in the left inferior frontal gyrus compared to the HCs group (GRF corrected: p < 0.05); all results are shown in Table 2 and Figure 2A. The IGE group also showed a significantly lower CBF than the HCs group in the left inferior frontal gyrus, and a higher CBF in the right middle temporal gyrus and superior parietal lobule (GRF corrected: p < 0.05, Table 2).
The IGE group exhibited a lower DC in the right posterior cingulate cortex, precuneus, and middle frontal gyrus compared to the HCs group, whereas no brain region showed a higher DC (GRF corrected: Table 2). We projected the intergroup difference maps of CBF, DC, and CBF/DC ratios onto an overlay map to represent more intuitively the cause of the changes in the CBF/DC ratio (Figure 2B).
| Clinical correlation analysis
Pearson's correlation was calculated to examine the relationship between the mean CBF, DC, and CBF/DC ratio in every significantly different cluster and IQ scores, age of onset, and disease duration in children with IGE (Table S1). The results showed a significant negative correlation (r = −0.408, p = 0.038) between the increased CBF/DC ratio in the right MFG and the performance IQ (PIQ) scores ( Figure 3). Other parameters, including the mean value of significantly different CBF and DC clusters, did not show significant correlations with IQ scores, age of onset, and disease duration.
| Validation analyses
Overall, the alterations of the CBF/DC ratio observed using different validation strategies remained highly consistent with our main findings (Figures S2, S7-S9, S12-S14).
TABLE 2 Brain regions with significant intergroup differences in CBF, DC, and CBF/DC ratios
Furthermore, the analysis was repeated using fALFF in place of DC to assess the neuronal activity and validate the reproducibility of our results.
No significant difference between groups emerged in the global CBF-fALFF coupling (t = −0.6739, p = 0.503); however, in the regional NVC analysis, the brain regions with significantly altered CBF/fALFF ratio (GRF corrected: p < 0.05, Figure S15) were similar to those with altered CBF/DC ratio (GRF corrected: p < 0.05). The alterations of the CBF/DC ratio observed by using the pediatric brain template for spatial normalization remained consistent with our main findings, after controlling for sex, age, and years of education (GRF corrected: p < 0.05, Figure S16).
| DISCUSSION
NVC impairment is a significant pathophysiological mechanism in the initiation and development of epilepsy. 22 Our study confirmed the presence of NVC alterations in IGE children, using neuroimaging methods. We found significantly lower global NVC in the GM of IGE children compared to that of HCs. In addition, a voxel-wise analysis of the CBF/DC ratio in the whole GM demonstrated regional NVC changes in multiple functional regions associated with executive function and cognitive control, undetectable using CBF or DC alone.
Another important finding was the increased regional NVC in the Previous studies reported significant across-voxel correlations between CBF and functional connectivity strength, regional homogeneity, and DC in patients with schizophrenia, neuromyelitis optica, and diabetes mellitus. 18,19,21 These results suggest an important potential role of the NVC in the pathophysiology of the human brain. In our study, an across-voxel correlation between CBF and DC was found in IGE patients; however, the correlation was significantly lower in the IGE group than in the HCs, suggesting global neurovascular decoupling in IGE. Across-voxel CBF-DC correlation may represent a global NVC alteration, possibly resulting from changes in the NVU components. 13,19 The NVU comprises neurons, astrocytes, and blood vessels, and its structural integrity is critical to retain its function. Impairment of any components can cause NVC alterations. 36 Several potential factors may explain the CBF alterations found in IGE children in the current study. Based on the theory of NVC, CBF changes are controlled by the neuronal activity; therefore, a region with higher neuronal activity increases its blood supply, as shown by the CBF increase. 16,46 We found that IGE children have significantly higher CBF than HCs in the right middle temporal gyrus and superior parietal lobule and lower CBF in the left inferior frontal gyrus. These regional CBF changes are similar to previous results from rs-fMRI studies, confirming that the neuronal activity might play an important role in CBF changes. 47
| CON CLUS ION
Children with IGE present reduced global CBF-DC coupling, indicating neurovascular decoupling. Furthermore, the regional disrupted CBF-DC coupling is associated with executive and cognitive dysfunction. These findings provide new neuroimaging evidence of neurovascular decoupling in children with IGE and may be helpful for a deeper understanding of the potential neuropathological mechanisms in seizure generation, providing new biomarkers of cognitive performance in childhood IGE.
AUTHOR CONTRIBUTIONS
JH and HFR: manuscript writing, study design and data analysis.
GQC, YLH and QHL: manuscript revision. JWL and FLL: collection of data or analysis. HL and TJZ: conception, study design and critical review. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENT
The authors thank the members of their research group for useful discussions.
CONFLICT OF INTEREST
The authors declare that the study was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
DATA AVAILABILITY STATEMENT
Data available upon request to the corresponding authors. | 2022-12-10T16:03:29.422Z | 2022-12-08T00:00:00.000 | {
"year": 2022,
"sha1": "1c837c50762063f7cbfbc9912f305df2bfc2bb61",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Wiley",
"pdf_hash": "90610e73e34cf0ced45efeecda572f2f6a27150e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
207930443 | pes2o/s2orc | v3-fos-license | Correlated Feature Selection for Tweet Spam Classification using Artificial Neural Networks
Identification of spam messages is a very challenging task for social networks due to their large size and complex nature. The purpose of this paper is to undertake the analysis of spamming on Twitter. To classify spam efficiently it is necessary to first understand the features of spam tweets as well as identify attributes of the spammer. We extract both tweet-based features and user-based features for our analysis and observe the correlation between these features. This step is necessary as we can reduce the training time if we combine the features that are highly correlated. To perform our analysis we use artificial neural networks and train the model to classify the tweets as spam or non-spam. Using a Correlational Artificial Neural Network gives us the highest accuracy of 97.57% when compared with four other classifiers: SVM, Kernel SVM, K Nearest Neighbours and Artificial Neural Network.
Introduction
Online social networking platforms such as Twitter, Facebook, Instagram etc. allow people to meet, discuss and work together by collaborating on projects with just a click. The combined number of users just on Facebook, LinkedIn, Instagram and Twitter stands at 3200 million as of September 2017 (https://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users/). Twitter has generated a lot of interest among netizens recently due to its widespread use by influential people like the Presidents and the Prime Ministers of powerful countries. As per latest reports, approximately 330 million active users are on Twitter. One of the interesting properties of Twitter is the ability to follow any other user with a public profile. Media organisations, politicians, and celebrities are reaching millions of followers every day. It is interesting to note that in many cases, the number of actual followers is not genuine [3]. In a recent incident in India, some of the leading national newspapers published headline news on Oct 21, 2017 with the title "Bots behind rise in XXXX Twitter popularity?" when one of his tweets received 30,000 re-tweets. Fake Twitter followers and robot-driven accounts being used for re-tweets are not a new phenomenon. There are softwares in the market, such as Twitter bots, that use Twitter APIs to control Twitter accounts. These software bots can be used to send tweets, re-tweets, follow, unfollow and increase the number of likes on a tweet [10]. Varol et al. [11] state that as many as 48 million accounts on Twitter are actually bots. This means that approximately 15% of the profiles are fake. The growth in popularity of Twitter in recent years has led to a large number of spammers who exploit and manipulate these numbers to misuse the whole medium for unwanted gains. Spamming is not a new concept. Initially the term was associated with unsolicited bulk email. Lots of research has been done to overcome this problem and now we have fairly accurate filters which keep segregating spam mails into a separate folder. The Internet Society in 2015 estimated that 85% of global emails are spam 2 .
Considering that decades of research has been done in the area of email filtering, Tweet spam filtering needs significant contribution from the academic research community.
Twitter defines spam as unsolicited, repeated actions that negatively impact other people. Examples of the aforementioned could be posting harmful links to phishing sites, using automated bots for mass following, abusing, creating multiple handles, and posting or re-tweeting on trending topics unnecessarily. Twitter has its own mechanism of spam detection; however, it is still in its infancy and the academic research community should provide its support. On average, Twitter is able to detect roughly 3.2 million suspicious accounts per week 3 . The primary issue that needs attention is to understand how harmful these spams can be. To figure out a solution to this problem, we note that the US Intelligence community released a report in January 2017 highlighting the role that Russia Today (RT) might have played in influencing the 2016 U.S. Elections. This is a big allegation which is still under investigation; however, it shows the influence of online platforms, particularly Twitter, on such big events [2].
In section 2, we will review related spam detection research. In section 3, we will explain our process of data collection and preprocessing. In section 4, we will discuss our approach for feature selection which further divided into 2 sub sections for the analysis of Tweet based features and User based features. Section 5, we detail our experiments and comparisons with our spam detection model. Section 6 concludes this research with directions to future work.
Related work
Twitter continues to gain popularity among the various social networks currently available and thus attracts spammers who would try to abuse the system by manipulating existing features to gain undue advantage [10].
There have been numerous studies which have proposed machine learning and artificial intelligence techniques for detecting spammers. Existing studies have focussed on classification algorithms to distinguish between spammers and non-spammers [3]. Lee et al. [7] created social honeypots for the identification of spammers. Many studies have focused on URL-based spam detection and blacklists based on domain and IP address. This has not been successful since short URLs obscure the base URL and new short URLs are used by spammers as soon as old ones are blacklisted [13]. Grier et al. worked extensively on blacklisted URLs [4]. Twitter identifies users through a unique username referred to as the screen name. Each user can send replies containing screen names. One can also mention another user's screen name anywhere in their tweet. This feature helps users to track conversations and know each other. Spammers, however, abuse this feature by including many screen names in their replies and tweets. If there are too many replies or mentions in tweets by a user, Twitter will treat this as suspicious 4 [8].
Twitter allows a message with a maximum length of 140 characters. Due to this restriction many URLs are shortened in the tweets. However, short URLs can obscure the source and this property has been used by spammers to camouflage the spam URLs [13]. Twitter allows the re-tweets and all such re-tweets start with @ RT. Many authors [5] use the number of re-tweets in the most recent 20-100 tweets of a user as important feature in spam detection. Trending topic has become a ubiquitous topic these days. If there are several tweets with the same term, then it will become a trending topic. Spammers seek attention by posting many unrelated tweets with trending terms [12]. Another prominent feature of Twitter allows users to create public and private list to categorize people in different groups based on similar interests [6] . There are two major categories in which we can segregate the extracted features for Twitter spam detection. User profile based features and tweet content based features. Some of the major user profile based features are the number of followers, the number of follows, duration of the existing account, the number of favourites, number of lists in which the user has membership and the average number of tweets a particular user sends [5]. The tweet content based features are the number of times a particular tweet has been re-tweeted, the number of hashtags, the number of times a particular tweet has been mentioned, the number of URLs included in a tweet, the number of characters and number of likes in this tweet [6].
Data collection
We need a labelled dataset to train and test our model. A majority of spam contains embedded URLs and many researchers have focused only on this [9]. For making our labelled dataset of spam and non-spam tweets, we have used tweets from spammers and non-spammers. We have used the list of spammers from the reference [1] and for non-spammers we have randomly picked Twitter user accounts. For each user we have extracted at most 100 previous tweets using the Twitter API client Tweepy, giving the user screen name as a query. Along with the text of the tweets we have also extracted the timing of the tweets, the number of previous tweets, favourites, friends, lists, and the number of followers of the users. We have extracted 719300 tweets from a total of 760 users, out of which 370 (350900 tweets) are randomly picked non-spammers and 390 (368400 tweets) are spammers.
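A minimal sketch of this collection step with Tweepy is shown below. The credentials are placeholders, and the exact calls (here the v1.1-style user_timeline and get_user) depend on the Tweepy version and API access available at the time; the profile fields mirror the user-based attributes listed above.

```python
import tweepy  # Twitter API client used for collection

# Placeholder credentials; real keys are obtained from the Twitter developer portal
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def collect_user(screen_name, max_tweets=100):
    """Fetch up to max_tweets recent tweets plus user-level counters."""
    statuses = api.user_timeline(screen_name=screen_name, count=max_tweets)
    user = api.get_user(screen_name=screen_name)
    tweets = [(s.text, s.created_at) for s in statuses]
    profile = {
        "followers": user.followers_count,
        "friends": user.friends_count,
        "lists": user.listed_count,
        "favourites": user.favourites_count,
        "statuses": user.statuses_count,
    }
    return tweets, profile
```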
Feature selection
Any machine learning based spam classification technique would need feature extraction. Historical information of user based features such as number of tweets sent by user in last 30 days etc. are important and give useful insights. To make sure that feature extraction is a real time process, we have used light weight features from Tweepy API and derived new features from these extracted features.
One of the important differences between spammers and non-spammers is the intent of spamming. A spammer's purpose of tweeting is to get some undue advantage through those tweets or to belittle some rival. Taking into account the intentions of the tweets, it is obvious that a spammer's tweets should have distinct characteristics compared to a non-spammer's tweets. Historical data suggest that the average time spent by non-spammers should be less than that of spammers [14]. There can be various other discriminatory attributes which can reflect on user behaviours. In this section we will take into account the features that have been considered for each tweet for the purpose of classification. Each tweet has two broad categories of features, viz. tweet-based features, such as those which are related to that particular tweet like the upper-case percentage in the tweet or the time the tweet was posted, and user-based features, such as the number of followers/following of the user. From the Tweepy API we have already extracted the past tweets of each user from the list of 760 pre-labelled spammers and non-spammers.
Tweet based features
From only the text of the tweet, we can get a lot of properties like upper-case percentage, number of screen names in a tweet, link-to-word percentage, same screen name percentage and tweet similarity. These features correspond to the textual attributes of the tweet which are useful for spam classification. The tweet similarity percentage Ts is calculated as shown in equation (1). Let the tweet for which we are finding the tweet similarity percentage be T. We compare T, having nt words, with all the previous i-th tweets Ti of that user, and each comparison is given a percentage of similarity PSi, which is the percentage of the number of similar words ns present in both tweets (excluding all hashtags, screen names and links) to nt; the average of all these PSi values gives the tweet similarity percentage of the tweet T:
PSi = (ns / nt) × 100, Ts = (Σ PSi) / N (1)
where N is the number of previous tweets of the user. This gives how similar this tweet is to the previous tweets of the user that has posted it. From Fig.1 it can easily be seen that spammers post mostly similar tweets.
Upper-case in tweets is usually used to emphasise some part of the tweet. The upper-case percentage U can be found by equation (3), where nu is the number of upper-case words in the tweet and nt is the total number of words in the tweet:
U = (nu / nt) × 100 (3)
From Fig.2 we can easily identify that most of the spammers use a high percentage of upper-case words in their tweets. We can also directly see that most of the tweets from spam users contain a high number of screen names; the reason observed from the dataset is that they have a high tendency to promote other users, but this is not the case with non-spammers. There are some spammers which have a natural tendency to advertise themselves by providing links to their products in the tweets. The link-to-word percentage L2W can be found by equation (4), where nl is the number of links in the tweet and nt is the total number of words in the tweet:
L2W = (nl / nt) × 100 (4)
It is observed from Fig.4 that spammers usually have a low link-to-word percentage; the reason for this, observed from the dataset, is that most spammers post tweets which mostly have fewer words and more links.
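The tweet-based features of equations (1), (3) and (4) can be computed directly from the tweet text. The sketch below is one possible implementation; the exact tokenization and word-matching rules used in the paper are not specified, so the regular expressions here are assumptions:

```python
import re

URL_RE = re.compile(r"https?://\S+")
MENTION_RE = re.compile(r"@\w+")
HASHTAG_RE = re.compile(r"#\w+")

def content_words(tweet):
    """Words of a tweet excluding hashtags, screen names and links."""
    cleaned = URL_RE.sub(" ", MENTION_RE.sub(" ", HASHTAG_RE.sub(" ", tweet)))
    return cleaned.split()

def tweet_similarity(tweet, previous_tweets):
    """Ts of eq. (1): average % of this tweet's words found in each earlier tweet."""
    words = content_words(tweet)
    if not words or not previous_tweets:
        return 0.0
    sims = []
    for prev in previous_tweets:
        prev_words = set(content_words(prev))
        shared = sum(1 for w in words if w in prev_words)
        sims.append(100.0 * shared / len(words))
    return sum(sims) / len(sims)

def uppercase_pct(tweet):
    """U of eq. (3): percentage of fully upper-case words in the tweet."""
    words = tweet.split()
    return 100.0 * sum(w.isupper() for w in words) / len(words) if words else 0.0

def link_to_word_pct(tweet):
    """L2W of eq. (4): number of links per word, as a percentage."""
    words = tweet.split()
    return 100.0 * len(URL_RE.findall(tweet)) / len(words) if words else 0.0
```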
User based features
Text of the tweet alone cannot be used for spam classification. From the dataset which we have made, we can see that users who post spam tweets also have different trends in some of the user-based properties like re-tweet percentage, link use percentage, percentage of time the user tweeted with the same screen name, tweet frequency, upper-case use percentage, standard deviation of tweet length, tweet similarity percentage of the user, number of followers and following, number of tweets, number of lists, and number of favourites. As explained above, the tweet similarity percentage plays an important role for the spam classification purpose. From Fig.5 it can easily be seen that spammers post mostly similar tweets but the non-spammers don't. From the dataset it can be seen that spammers only post similar tweets with the same agenda every time, i.e. to promote a link or a user or advertise some of their products. A user's tweet similarity percentage can be calculated by taking the average of the Ts values of all the tweets of a user, where Ts is calculated by equation (1).
A re-tweet is a re-posting of a tweet. Twitter's re-tweet feature helps to quickly share that tweet with all of the followers. One can re-tweet their own tweets or tweets from other users. Sometimes people type "RT" at the beginning of a Tweet to indicate that they are re-posting someone else's content. It is observed from our dataset that most of the non-spam users have a tendency to re-tweet other posts or their own posts, but spam users normally do not post re-tweets; however, if some of the spammers do re-tweet, then the same tweet is used again and again. It is observed from Fig.6 and Fig.7 that most of the users who are spammers never post re-tweets, or if they do then they only post re-tweets of other similar spam accounts, whereas most of the non-spam users frequently post re-tweets.
The link/URL use frequency percentage L is also an important feature for classification of those types of spammers who usually send a high percentage of tweets having links with high frequency. These types of spammers usually post tweets with similar links which are associated with some product they want to advertise. It can be calculated by equation (5), where nl is the number of previous tweets of the user with any links/URLs and nt is the total number of tweets of the user:
L = (nl / nt) × 100 (5)
Fig. 9. Avg link per tweet - user
From Fig.8 and Fig.9 we can see that spammers either have a very high frequency of tweets with links or a very low one, but that is not the case with non-spammers. It is also observed that those spammers which have a high frequency of link use mostly post the same set of links, or the same links, in each of their posts which they want to advertise; if they have a very low link use frequency then they are not advertising any link, but it is observed that they mostly advertise other users and most of their posts are related to increasing the follower count of other users. Those types of spammers who have a tendency to advertise other users and use their following to advertise their product use a lot of screen names in their tweets. So the percentage of time the user tweeted with the same screen name, S, is calculated by finding the percentage of the number of unique screen names nus used to the number of all screen names nts used in all the tweets of the user, as shown in equation (6).
S = (nus / nts) × 100 (6)
From Fig.10 it is observed that normal users mostly use the same screen name in their tweets, but spammers either use no screen name at all (from the dataset it is observed that these are the types of spammers who only promote links in their posts and do not advertise users), or they always use the same screen name in their posts and always want to promote the same set of users in their tweets. The rest of the spammers do not use the same screen name frequently (they advertise many different users in their tweets). From the Tweepy Twitter API we can also extract the time at which a given tweet was posted. We have used this feature to find the tweet frequency of a user.
Fig. 11. Tweet frequency
From Fig.11 we can easily observe that almost all the spammers have a high tweet frequency, and on the other hand normal users have a very low tweet frequency. These types of spammers are usually some type of bot, and such bots have a tendency to post similar types of tweets. Tweet length is also an important factor. It is observed from the dataset which we have generated that there is usually a lot of variation in the tweet length of a normal user, but the tweet length of spammers usually remains the same. So we have also considered the standard deviation of tweet length as an important user-based feature. From Fig.12 we can see that most of the spammers have a low standard deviation in tweet lengths, which signifies that most of their tweets are of the same length; from observing the dataset we can see that most of them have the same length and format with only minor changes, like a change in the link or the screen name in the tweet, but this is not the case with normal users. As we have discussed above, upper-case use in tweets is used for emphasising certain information within the tweets. This is heavily used by spam users to advertise, so we calculate the percentage of time the user used upper-case words across all their tweets. From Fig. 13 we can easily identify that most of the spammers use a high percentage of upper-case words in their tweets but normal users don't. Other properties considered in this paper are the tweet similarity percentage, number of followers and following, number of tweets, number of lists, and number of favourites.
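A sketch of how these user-based features can be aggregated from a user's tweet history is shown below; the re-tweet detection and screen-name extraction rules are assumptions, since the paper does not spell them out:

```python
import re
import statistics
from datetime import datetime

MENTION_RE = re.compile(r"@\w+")

def user_features(tweets, timestamps):
    """Aggregate user-level features from a list of tweet texts and datetimes."""
    n = len(tweets)
    retweets = [t for t in tweets if t.startswith("RT @")]
    with_links = [t for t in tweets if "http" in t]
    mentions = [m for t in tweets for m in MENTION_RE.findall(t)]
    days = max((max(timestamps) - min(timestamps)).days, 1)
    return {
        "retweet_pct": 100.0 * len(retweets) / n,
        # L of eq. (5): share of tweets containing a link/URL
        "link_use_pct": 100.0 * len(with_links) / n,
        # S of eq. (6): unique screen names over all screen names used
        "same_screen_name_pct": (100.0 * len(set(mentions)) / len(mentions)
                                 if mentions else 0.0),
        "tweet_frequency": n / days,  # tweets per day over the observed period
        "tweet_length_std": statistics.pstdev(len(t) for t in tweets),
    }

# Example call with two toy tweets
print(user_features(["RT @a check http://x.co", "hello @a"],
                    [datetime(2017, 10, 1), datetime(2017, 10, 3)]))
```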
Experiment and evaluation
For our experiment we have used Pearson correlation coefficient for finding correlation between different features 5 . In statistics, the Pearson correlation coefficient is a measure of the linear correlation between two variables X and Y. It has a value between +1 and −1, where 1 is total positive linear correlation, 0 is no linear correlation, and −1 is total negative linear correlation. It is calculated as shown in equation (7):
ρ(x, y) = cov(x, y) / (σx × σy) (7)
where cov(x, y) is the covariance, σx is the standard deviation of x and σy is the standard deviation of y. These quantities are also related to the F1 score, which is defined as the harmonic mean of precision and recall.
From all the above plots we can see that there are a lot of correlated features, so to find the correlation between all 21 different features we made a matrix of all the features and found the Pearson's correlation value between the different features. From these Pearson's correlation coefficient values we have combined the most correlated features by taking their products. There were 11 sets of correlated features, as shown in Table 14. In our model we have used these 11 correlated features and passed them through an artificial neural network having 11 input units, 6 hidden units and one output unit. The AUC of the correlational ANN is shown in Fig.15.
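A sketch of this pipeline is given below: features whose pairwise Pearson correlation exceeds a threshold are merged by taking their product, and the combined features feed a small neural network with 6 hidden units. The 0.8 merge threshold, the random placeholder data and the scikit-learn MLP are assumptions, since the paper does not report the cut-off or the training framework used.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def group_correlated_features(X, threshold=0.8):
    """Greedily merge the most correlated feature pair (|r| >= threshold) by product."""
    X = X.copy()
    while X.shape[1] > 1:
        c = X.corr().abs().to_numpy()
        np.fill_diagonal(c, 0.0)
        i, j = np.unravel_index(c.argmax(), c.shape)
        if c[i, j] < threshold:
            break
        a, b = X.columns[i], X.columns[j]
        X[f"{a}*{b}"] = X[a] * X[b]
        X = X.drop(columns=[a, b])
    return X

# Placeholder data standing in for the 21 extracted features and the spam labels
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((1000, 21)), columns=[f"f{i}" for i in range(21)])
y = rng.integers(0, 2, 1000)

Xg = group_correlated_features(X)  # with the real data this yields 11 combined features
X_tr, X_te, y_tr, y_te = train_test_split(Xg, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```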
Conclusion
We identified a total of 21 features related to a tweet that contained attributes from both the tweet and the corresponding user. While computing the correlation between the features, it was observed that all the features could be grouped into 11 sets of correlated features. Thus our input for the artificial neural network is reduced to 11 nodes. We apply the ANN for classification on data collected from the Twitter API, where we used 80% for training and 20% for testing. This classifier showed better performance than the four other classifiers that we compared with, namely SVM, Kernel SVM, K Nearest Neighbours and Artificial Neural Network. On testing it was observed that precision, F1 score and accuracy improved for the same dataset with the Correlational Artificial Neural Network, as shown in Table 1, while we saw a slight decrease in the recall value. This can be addressed as future work where we not only look into features of individual tweets but also look at the links and identify patterns between the tweets and the users. It would be interesting to study if this additional information would improve our results further. | 2019-11-06T15:16:35.000Z | 2019-11-06T00:00:00.000 | {
"year": 2019,
"sha1": "03c5145767ff9c8d7c018d973683ef6ded5fd4df",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "03c5145767ff9c8d7c018d973683ef6ded5fd4df",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
230819679 | pes2o/s2orc | v3-fos-license | A novel economic framework to assess the cost-effectiveness of bone-forming agents in the prevention of fractures in patients with osteoporosis
Summary A novel cost-effectiveness model framework was developed to incorporate the elevated fracture risk associated with a recent fracture and to allow sequential osteoporosis therapies to be evaluated. Treating patients with severe osteoporosis after a recent fracture with a bone-forming agent followed by antiresorptive therapy can be cost-effective compared with antiresorptive therapy alone. Incorporating these novel technical attributes in economic evaluations can support appropriate policy and reimbursement decision-making. Purpose To develop a cost-effectiveness model accommodating increased fracture risk after a recent fracture and treatment sequencing. Methods A micro-simulation cost-utility model was developed to accommodate both treatment sequencing and increased risk with recent fracture. The risk of fracture was estimated and simulated using the FRAX® algorithms combined with Swedish registry data on imminent fracture relative risk. In the base-case cost-effectiveness analysis, a sequential treatment starting with a bone-forming agent for 12 months followed by an antiresorptive agent for 48 months initiated immediately after a major osteoporotic fracture (MOF) in a 70-year-old woman with a T-score of −2.5 or less was compared to an antiresorptive treatment alone for 60 months. The model was populated with data relevant for a UK population reflecting a personal social service perspective. Results The cost per additional quality-adjusted life year (QALY) gained in the base-case setting was estimated at £34,584. Sensitivity analyses revealed the sequential treatment to be cost-saving compared with administering a bone-forming treatment alone. Without simulating an elevated fracture risk immediately after a recent fracture, the cost per QALY changed from £34,584 to £62,184. Conclusion Incorporating imminent fracture risk in economic evaluations has a significant impact on the cost-effectiveness when evaluating fracture prevention treatments in patients with osteoporosis who sustained a recent fracture. Bone-forming treatment followed by antiresorptive therapy can be cost-effective compared to antiresorptive therapy alone depending on treatment acquisition costs. Supplementary Information The online version contains supplementary material available at 10.1007/s00198-020-05765-7.
Introduction
Osteoporosis results in approximately 9 million fractures annually worldwide [1]. The total monetary burden of osteoporosis in 27 EU countries, including both fracture-associated costs and pharmacological interventions, was estimated at €37 billion in 2010 [2]. In addition, osteoporotic fractures account for around 2 million disability-adjusted life years lost annually in Europe [1].
A previous fracture is a major risk factor for future fractures [3][4][5][6]. The risk of suffering a subsequent fracture following a first fragility fracture changes over time and is highest within the first 1 to 2 years following the initial fragility fracture [3][4][5][6]. During this period, 12-34% of women experience a subsequent fracture, with vertebral fractures particularly increasing the re-fracture risk [3,5]. This temporal elevated fracture risk that is associated with recency of a fracture is termed "imminent risk" [7].
The majority of available osteoporosis therapies decrease bone resorption (antiresorptive agents) [8]. However, an increase in bone mass can primarily be achieved with a few treatments called bone-forming agents [9]. These bone-forming treatments have the potential to reduce the risk of fracture faster and to a higher degree than antiresorptive agents [10][11][12][13][14].
The International Osteoporosis Foundation (IOF) and the European Society for Clinical and Economic Aspects of Osteoporosis, Osteoarthritis and Musculoskeletal Diseases (ESCEO) recently recommended that increased fracture risk should be differentiated into "high risk" and "very high risk" [7]. High and very high risk is categorised by fracture risk and the thresholds depend on age. The rationale for the more refined characterisation of risk was to help direct appropriate bone-forming interventions to those designated at very high risk. Bone-forming treatments include teriparatide, romosozumab and, in some countries, abaloparatide. Intervention thresholds for the very high-risk group, i.e. at what 10-year major osteoporotic fracture (clinical vertebral, forearm, hip or humerus fracture, MOF) probability a bone-forming treatment should be initiated, were determined from a clinical perspective. However, it is also important to ensure that initiating treatment at the intervention thresholds can be considered cost-effective. To be able to assess the cost-effectiveness of bone-forming agents at the intervention threshold, it is necessary to have a model that can accommodate the specific characteristics of bone-forming treatments and any additional risk associated with a recent fracture. Bone-forming agents, such as romosozumab, are seen as appropriately administered in sequence with another osteoporosis drug. For example, 1 year with a bone-forming agent followed by a switch to an antiresorptive treatment from the second year onwards to maintain the improvement in bone mineral density (BMD) [11,15].
The objective of this paper is to present a novel costeffectiveness model framework that incorporates both the risk associated with a recent fracture and treatment sequencing. This paper describes the structure of the economic model. Furthermore, it reports results on the potential impact on the cost-effectiveness of the example where a bone-forming agent followed by an antiresorptive agent is compared against an antiresorptive agent alone.
Model structure and simulation technique
The health states in the model and the possible transitions between these states are shown in Fig. 1. All patients begin in the "at risk" health state, where the simulated patient is a 70-year-old woman with a T-score of −2.5 and a recent MOF. At the end of each cycle, a patient has a probability of incurring a fracture (any), remaining in the same health state without a new fracture, or dying. If a patient dies, she moves to the "death" state. The model is run using a micro-simulation technique in which patients are simulated individually in the model. This technique is chosen since changes in fracture risk, mortality and a patient's disease progression related to the (re-)occurrence of fractures are highly individualised and depend on time and historical fracture events, and therefore need to be tracked individually during the course of the simulation to allow for an accurate depiction of individual fracture risk, multiple fractures and treatment patterns (e.g. sequencing and treatment persistence).
Fig. 1 Markov micro-simulation model structure (health states: at risk of fracture, hip fracture, vertebral fracture, non-hip, non-vertebral fracture, death). All patients begin in the "at risk of fracture" state, and, at the end of each cycle, a patient has a probability of incurring a fracture (any), remaining in a health state without a new fracture, or dying. "Death" is an absorbing state from any of the other states ("at risk of fracture", "vertebral fracture", "hip fracture" and "non-hip, non-vertebral fracture"). If a patient dies, she moves to the "death" state and remains there for the rest of the simulation.
Modelling fracture risk
The risk of fracture for a specific target patient population in the model depends on three elements: I. the risk for an individual in the general population of incurring a fracture; II. the increased fracture risk associated with osteoporosis (the relative risk) compared to the general population; and III. a risk reduction, if any, attributed to treatment.
General population fracture risk
The general population risk of fractures required for the model is differentiated by age, sex and fracture type and is derived from published sources. Fracture incidence used in this analysis reflected a UK population and is described in Supplementary Table 1.
Increased risk of fracture for target patient population
The increased fracture risk for the target patient population is estimated using the FRAX® algorithm [16]. In addition, with lacking UK data, algorithms derived from a Swedish retrospective real-world study were used to estimate imminent fracture risk [17].
FRAX®
FRAX® is a fracture risk assessment tool that estimates a patient's fracture risk and can be used to inform intervention decisions for patients at increased risk of fracture [16,18]. Its use is currently recommended in more than 80 osteoporosis treatment guidelines worldwide [19]. The FRAX® algorithms estimate the 10-year probability of hip and MOF. The fracture risk is based on a number of clinical risk factors: age, gender, BMD, prior fractures, parental hip fracture history, body mass index, ethnicity, smoking, alcohol use, glucocorticoid use, rheumatoid arthritis and secondary osteoporosis [16,18]. In addition to the 10-year probabilities, FRAX® can also produce the relative risk (RR) of hip fracture, MOF as well as RR of pre-fracture mortality compared to gender-and age-matched controls. The RRs derived can, thus, be used to adjust the population fracture risk for any combination of the clinical risk factors (CRFs) included in FRAX® in the model. The implementation of FRAX® in health economic modelling is described in more detail in Ström et al. [20]. Also, the National Institute for Health and Care Excellence (NICE) has used FRAX® for fracture risk estimation in their recent health technology assessments (HTAs) of osteoporosis treatments [21].
However, a novel FRAX®-related model feature is a functionality that allows for updating the FRAX® relative fracture risk and mortality at pre-specified intervals during the simulation (every year, every 5th or 10th year) related to increasing age and decreasing BMD. In previous cost-effectiveness models, the RR calculated at baseline has been kept constant during the whole simulation [20,22,23]. Keeping the RR of fracture constant over time overestimates the fracture risk because with increasing age (all else equal) the RR of fracture decreases. Thus, with this modification, the fracture risk trajectory over the lifetime for a patient is more realistically simulated.
Other CRFs are kept constant in the model due to a lack of data indicating that the prevalence of those factors changes with age.
Imminent fracture risk
While it is well established that a fragility fracture increases the risk of a subsequent fracture over a patient's lifetime, recent studies have shown that the increase in RR is not constant over time and varies by age and number of fractures [5,17]. A limitation of the FRAX® algorithm is that it does not capture this time-dependent elevated fracture risk after a recent fracture.
In the model, the relative imminent fracture risk is updated each time a fracture is experienced. Fracture risk at any time point during the model simulation is estimated as a function of the general population risk, the RR estimated by FRAX® for a given patient profile excluding the prior fracture CRF and the maximum of the time-dependent RR of an imminent fracture and the RR of fracture as estimated by FRAX® including the prior fracture CRF.
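A minimal sketch of one plausible reading of this combination is shown below; the multiplicative form and the ratio used to isolate the prior-fracture contribution from FRAX® are assumptions for illustration, not the exact implementation of the published model.

```python
def fracture_probability(pop_risk, rr_frax_excl_prior, rr_frax_incl_prior,
                         rr_imminent_t):
    """Combine the three risk components described in the text.

    pop_risk           : age-, sex- and site-specific general-population risk
    rr_frax_excl_prior : FRAX relative risk with the prior-fracture CRF switched off
    rr_frax_incl_prior : FRAX relative risk with the prior-fracture CRF switched on
    rr_imminent_t      : time-dependent relative risk after the recent fracture
    """
    # Prior-fracture contribution implied by FRAX, so it can be compared with
    # the time-dependent imminent relative risk (assumption)
    rr_prior_from_frax = rr_frax_incl_prior / rr_frax_excl_prior
    return pop_risk * rr_frax_excl_prior * max(rr_imminent_t, rr_prior_from_frax)
```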
Data on the imminent fracture risk for the model was derived from a Swedish real-world data study [17]. In this study, women with fractures were matched to controls without a prior fracture, based on gender and birth year. Survival regression analysis was used to estimate the incidence functions of hip, vertebral and MOF (after 1st, 2nd and 3rd fracture), with the first subsequent incident fragility fracture as the failure event. Relative risks for each time period (0-6, 7-12, 13-18, 19-24, 25-36, 37-48, 49-60 months), age group (50-64, 65-75, 75+) and fracture site (any, hip and vertebral) were estimated by interacting exposure status and age with a time period. Separate survival regressions for different types of recent fracture (hip, vertebral and any) were estimated.
Modelling the intervention
In its simplest form, a relative fracture risk reduction due to treatment is applied to the target patient population fracture risk during the treatment period. This period of risk reduction is usually followed by a period where the treatment effect is declining (the residual effect after treatment discontinuation). Treatment persistence is important to consider in cost-effectiveness models in osteoporosis [24,25]. These features (residual effect and treatment persistence) have been included in published osteoporosis cost-effectiveness models and are included in this economic model as well. In addition, a few novel features (i.e. treatment sequencing and imminent risk) are incorporated into this novel economic model, which were required to more appropriately reflect characteristics of osteoporosis disease and to capture the impact of bone-forming agents.
Treatment sequencing
The model accommodates functionality to specify the treatment sequence and the timing of how or when the patient switches treatment. The implemented switch "triggers" are either a fracture (any fracture site) or a specific time point (months since treatment start). Only patients who are persistent with the first treatment will be switched to the next treatment in the sequence. This assumption is made due to the difficulty to distinguish the reason for non-persistence in available data. After a treatment switch, patients have the probability of non-persistence corresponding to the time since the start of the entire treatment regimen. No additional treatments were initiated when a fracture occurred.
Patient population
The model allows for a simulation of patient profiles based on the FRAX® CRFs in combination with the recency of prior fracture, as described above. A recent fracture may be narrowed down to a specific fracture site, including hip, vertebral or MOF. Overall, 13 risk factors need to be defined to run the model simulation; the number of potential patient profiles is almost inexhaustible. However, when performing cost-effectiveness analyses of osteoporosis treatments, it is more relevant to consider broader patient groups rather than a single patient profile. To assess the cost-effectiveness for such broader patient groups, the model runs simulations on a larger set of patient profiles that on aggregate are representative of the target patient population. Such data sets of representative patient profiles for a specific target patient population were drawn from a prevalence and correlation matrix of CRFs from the FRAX® cohort. The default number of patient profiles drawn from the FRAX® matrix was, after calibration, set to 8000, which was deemed sufficiently large to represent most possible combinations of risk factors. The distribution of risk factors is described in Supplementary Table 2.
Time horizon and cycle length
Osteoporosis is a chronic disease with long-term consequences after a fracture, so a lifetime time horizon was deemed appropriate. All patients are individually followed through the model from the patient's age at treatment initiation to their time of death or age of 100 years, whichever comes first.
In most cost-effectiveness (CE) models of osteoporosis treatments, the cycle length has been 6 months or 1 year [22,26]. For fast-acting bone-forming agents which are mainly intended to be given for shorter periods (i.e. 12-24 months), a 1-year model cycle would be too long as it would only allow for one transition during a 12-month treatment course and miss potentially meaningful achievements in the first 6 months of treatment in which patients are at high risk of subsequent fracture following a recent fracture [5]. In the economic model, the cycle length is flexible and may be changed at a specific time point during the simulation. As a default, a 6-month cycle length during the entirety of the time horizon was used as it was deemed sufficiently short to capture an imminent increase in fracture risk and any meaningful short-term treatment effect.
Data inputs
The model was populated with economic and epidemiological data relevant to a female UK population.
The age- and gender-specific all-cause mortality rates for the general population in the UK were based on the years 2012-2014 [27]. The model calculates the absolute risk of death by applying the excess mortality in the first year and in subsequent years, respectively, after hip, vertebral and other fractures to the normal UK population mortality, adjusted downwards for comorbidities [28,29]. Increased mortality after a fracture was assumed to persist for 8 years [22].
Available data indicates that bone-forming agents have a better persistence profile compared with antiresorptives [30,31]. Approximately 50% of patients discontinue treatment with alendronate, administered orally daily or weekly, after 1 year [30,32]. Persistence with a bone-forming agent was assumed to be 80% during the first year of treatment. Following bone-forming treatment, patients switching to antiresorptive were assumed to have a persistence (percentage of patients on treatment) corresponding to the persistence of patients on alendronate after 1 year of treatment [33].
The impact on the quality of life during the first and subsequent years after hip, vertebral and non-hip, non-vertebral (NHNV) fractures was based on EuroQol-5 dimension (EQ-5D) data from the International Costs and Utilities Related to Osteoporotic Fractures Study (ICUROS) [34]. This data source was chosen because ICUROS is to date the largest prospective study collecting quality of life data designed to be appropriate for health economic analysis (Supplementary Table 3 and Supplementary Table 4).
Costs of hip, vertebral and NHNV fractures were taken from a study of postmenopausal women in the UK [35,36] (Supplementary Table 5). The cost of NHNV fractures was calculated by weighting the cost of wrist fractures and the cost of other NHNV fractures based on the population incidence of these fractures [37]. Hip and vertebral fractures are assumed to incur costs in subsequent years (£115 and £357, respectively) [21].
Because no specific treatment strategy was evaluated, an annual treatment acquisition cost of £5000 was assumed for the bone-forming agent and £20 for the antiresorptive agent, in line with treatment acquisition costs of existing reimbursed antiresorptive treatments. In sensitivity analyses, the annual price of the bone-forming agent was varied to £3000 and £7000. In addition, a physician visit every second year and a dual-energy X-ray absorptiometry (DXA) scan every fourth year were assumed and included in the treatment-related costs.
Costs and effects were discounted within the analysis at 3.5% per annum in accordance with NICE guidelines [38]. All costs are presented in 2019 prices.
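As a simple illustration of the 3.5% annual discounting applied to both costs and QALYs (the rate named above), a sketch is given below; the example value is hypothetical.

```python
# Discounting at 3.5% per annum, as per NICE guidelines.
def discount(value, years, rate=0.035):
    return value / (1.0 + rate) ** years

# e.g. a £1,000 cost incurred 10 years into the simulation contributes about £709
# to the discounted total:
print(round(discount(1000.0, 10)))   # -> 709
```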
Model outputs
The primary outcome in the economic model is the cost-effectiveness of a defined treatment versus an alternative treatment strategy, reported both as the cost per quality-adjusted life year gained and the cost per life year gained. Other outcomes include estimates of life years (LYs), the number of fractures avoided and the number needed to treat (NNT) to avoid one hip or vertebral fracture.
Analyses
The main purpose of the analysis was to show the potential impact of the novel features of the model framework and not to conduct a cost-effectiveness analysis for a specific treatment strategy. Therefore, the intervention treatment strategy was a sequential treatment starting with a bone-forming agent for 12 months initiated immediately after a MOF, followed by an antiresorptive agent for 48 months. As a starting point to evaluate the impact of the novel model features on cost-effectiveness results, the intervention treatment strategy was compared to an antiresorptive treatment for 60 months ("base case").
To explore the impact of treatment sequencing on cost-effectiveness results, the intervention treatment strategy was compared with the following two alternatives: non-sequential bone-forming treatment for 18 and 24 months. To explore the impact of imminent fracture risk on cost-effectiveness results, the intervention treatment strategy was compared with the following three alternatives: sequential treatment as in the main strategy but with treatment initiated 6, 12 and 24 months after the fracture. As an additional scenario, the impact of deactivating the imminent fracture risk algorithm, thus neglecting the demonstrated imminent fracture risk after a recent fracture, was explored.
The analyses were run based on a set of common assumptions. The patient population tested was assumed to be women starting treatment at an age of 70 years. Compared with no treatment, bone-forming treatment was assumed to reduce the relative risk of fractures by 50% for hip fractures, 60% for vertebral fractures and 40% for other fractures. This treatment efficacy (relative risk reduction from bone-forming treatment) was assumed to be maintained during the entirety of the treatment duration (12 months + 48 months). The relative risk reduction for antiresorptive treatment in isolation was assumed to be 30%, 50% and 20% for hip, vertebral and other fractures, respectively. The relative risk reductions were chosen to be similar to those reported in a recent network meta-analysis of bone-forming and antiresorptive treatments for osteoporosis [39]. After treatment discontinuation, the treatment effect was assumed to decline linearly over a period equal to the time on treatment ("offset time"). For the treatment sequence (bone-forming agent to antiresorptive), the offset time for both drugs was modelled jointly, referring to the discontinuation of the sequence.
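A hedged sketch of the "offset time" assumption follows: after discontinuation, the relative risk reduction (RRR) declines linearly to zero over a period equal to the time spent on treatment. The function and values are illustrative, not the model's exact implementation.

```python
# Illustrative linear decline of treatment effect after discontinuation ("offset time").
def current_rrr(full_rrr, time_on_treatment, time_since_stop):
    if time_since_stop <= 0:
        return full_rrr                      # still on treatment: full effect
    remaining = max(0.0, 1.0 - time_since_stop / time_on_treatment)
    return full_rrr * remaining              # linear decline during the offset period

# Example: a 50% hip-fracture RRR from a 5-year treatment sequence falls to 25%
# 2.5 years after stopping and reaches 0% after 5 years.
print(current_rrr(0.50, time_on_treatment=5.0, time_since_stop=2.5))   # -> 0.25
```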
Additional sensitivity analyses were run to explore the impact of imminent risk and treatment sequencing based on the new model features: recent fracture was assumed to be hip or vertebral fracture alone and the starting age of treatment varied between 60 and 80 years. The results from the sensitivity analyses are presented comparing the bone-forming agent for 12 months followed by an antiresorptive agent for 48 months to antiresorptive treatment for 60 months.
Base case
Base case results for the treatment sequence of 12-month bone-forming treatment followed by 48 months of antiresorptive treatment compared with 60 months of antiresorptive treatment are presented in Table 1. The bone-forming agent treatment sequence was associated with higher treatment costs and lower fracture-related costs. The total incremental cost was £2978, with an increase in QALYs of 0.086, yielding an incremental cost-effectiveness ratio (ICER), expressing the additional cost per additional QALY gained, of £34,584.
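As a quick arithmetic check, the ICER is simply the incremental cost divided by the incremental QALYs; with the rounded values quoted above the result is close to, but not exactly, the published figure because the model works with unrounded increments.

```python
# ICER = incremental cost / incremental QALYs, using the rounded base-case values above.
incremental_cost = 2978.0      # £, from Table 1
incremental_qalys = 0.086      # from Table 1
icer = incremental_cost / incremental_qalys
print(round(icer))             # ≈ 34,628; the reported £34,584 reflects unrounded increments
```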
Impact of treatment sequencing
Modelling the intervention treatment strategy as a sequence (12-month bone-forming followed by 48-month antiresorptive) had a significant impact on the cost-effectiveness results when compared to a non-sequential bone-forming agent for 18 and 24 months, respectively (Table 2). Compared with the bone-forming agent alone for 18 months, sequential treatment was associated with incremental QALYs of 0.05 at a lower cost. The comparison with 24 months of bone-forming treatment showed lower incremental QALYs (0.031) than the comparison with 18 months of bone-forming treatment, but at a lower incremental cost.
Impact of imminent fracture risk
Initiation of treatment immediately after fracture, when the fracture risk is highest, was associated with more QALYs and lower costs compared with initiating treatment 6, 12 or 24 months after the initial fracture (Table 2). Deactivating the imminent fracture risk algorithm, i.e. assuming that fracture risk corresponds to that of any historical fracture and is not time dependent, was associated with lower incremental QALYs and higher incremental costs compared with the base-case scenario.
Sensitivity analyses
Lower age at treatment initiation was associated with worse cost-effectiveness compared with the base case age of 70 years, in a population with a T-score of −2.5 or less and MOF (Fig. 3). Higher age was, however, associated with better cost-effectiveness. Decreasing the price of the bone-forming agent led to improved cost-effectiveness compared with the base case due to a lower total cost, but did not change the QALYs gained (Table 3). A higher price of bone-forming treatment consequently led to decreased cost-effectiveness. Simulating patients to only initiate treatment after a recent hip fracture was associated with decreased cost-effectiveness compared with the base case, in which patients were simulated to initiate treatment after incurring any MOF.
Discussion
The overall objective of this study was to present a cost-effectiveness model framework that incorporates both recency of fracture and treatment sequencing. These two novel components have so far not been captured in existing osteoporosis modelling approaches. These features have been shown to be important to consider in osteoporosis management and are therefore expected to enable economic evaluations to capture the progression of osteoporosis patients more accurately. This paper assesses the impact on economic evaluations when evolving cost-effectiveness modelling in osteoporosis to incorporate these novel features, using the example of estimating the cost-effectiveness of bone-forming agents against antiresorptive treatments. Bone-forming agents may be of particular benefit for patients with a recent fracture, where rapid BMD improvement is needed to interrupt a potential fracture cascade, and where sequential antiresorptive treatment can maintain the improved BMD over time. This important clinical evolution was demonstrated in this research to also affect the economic value assessment of osteoporosis treatments, as the incremental cost-effectiveness ratio almost doubled (from £34,584 to £62,184) when the novel imminent fracture risk was deactivated. Additionally, under the assumption that the relative risk reduction of the bone-forming agent can be maintained during the sequential treatment with an antiresorptive, the cost-effectiveness results show that a treatment sequence is the dominant strategy compared with a bone-forming agent as a standalone treatment. The results further highlight the importance of starting treatment as early as possible after a fracture occurs. Immediate intervention compared with delayed treatment start was cost-saving in all explored scenarios.
The objective of this study was to present a model framework for estimating the cost-effectiveness of bone-forming agents and not to determine the cost-effectiveness of a specific treatment. In order to evaluate a specific treatment or treatment sequence, the economic model would need to be modified to accommodate the specific characteristics of the intervention and its comparators (i.e. drug price and relative efficacy) in more detail. However, the results presented in this study provide a good indication that treatment with a bone-forming agent followed by an antiresorptive, compared to antiresorptive only, in patients at imminent risk of fracture could be considered cost-effective despite the substantial price difference. In a recent publication by Kanis et al. [7], the potential added value of a treatment sequence with a bone-forming agent followed by an antiresorptive was calculated as fractures saved. Over a 10-year time frame, they estimated that the number of saved fractures increased from 5.7 at a starting age of 50 years to 126.6 at 90 years of age.
There are a considerable number of publications that have estimated the cost-effectiveness of various interventions for treatment and prevention of osteoporotic fractures [22,40,41]. Only a few publications have estimated the cost-effectiveness of treatments in a sequence [42,43]. In Mori et al., the cost per QALY gained of sequential teriparatide/alendronate compared with alendronate alone in osteoporotic women with prior vertebral fracture was greater than $280,000 in a US setting [43]. Le et al. estimated the cost per QALY gained of sequential abaloparatide/alendronate compared with placebo/alendronate in osteoporotic women with a prior vertebral fracture at $188,891 in the USA [42]. Neither of these studies is directly comparable to this study, as the study designs differ in several aspects (e.g. patient groups, time horizon and comparators) and, perhaps most importantly, they do not consider the imminent risk of fracture. The modelling approach of this research is, to the best of our knowledge, the first cost-effectiveness model for osteoporosis treatments that includes imminent fracture risk and allows for treatment sequencing at the same time. Other cost-effectiveness studies on bone-forming agents are available [23,41,44], but it is not meaningful to compare these with the results in this study because they use different comparators, are based on other countries and do not consider imminent fracture risk or treatment sequencing.
The algorithms for time-dependent fracture risk in patients with recent fracture (i.e. imminent fracture risk) were derived from a Swedish retrospective real-world data study [3,17]. Preferably, as much data as possible should be country specific in cost-effectiveness analyses. However, sufficiently detailed and comprehensive data required to estimate imminent fracture risk functions are scarce in most countries. Sweden is one of few countries that can provide population-based patient-level data of the granularity required to calculate algorithms for imminent fracture risk appropriate for economic modelling. Using the Swedish data on imminent fracture risk in other countries relies on the assumption that the relative risk of recent fracture versus no recent fracture is similar between countries. The validity of this assumption is supported by other studies [5]. Recently, granular data have become available from Iceland, which have been used to provide adjustments to conventional estimates of fracture probability using FRAX [7]. In addition to the recency of fracture, the probability adjustment was age dependent, decreasing with age in both men and women. Probability ratios also varied according to the site of the sentinel fracture, with higher ratios for hip and vertebral fracture than for humerus or forearm fracture. These observations may permit refinements in modelling with significant implications for cost-effectiveness.
The imminent fracture risk data from Söreskog et al. incorporated in the model were adjusted for a range of observable confounders (such as prior drug use impacting fracture risk, secondary osteoporosis and comorbidities) [17]. However, not all risk factors that are included in FRAX® were available in the study, such as BMD T-score. Therefore, the risk contribution of a recent fracture, when added on top of the risk estimated by FRAX®, may have been overestimated and should be considered a limitation of the cost-effectiveness model.
Mortality was estimated as a function of normal population mortality and excess mortality in the first year and in subsequent years after the fracture, respectively. Since excess mortality was estimated from the cumulative number of deaths during the first year after fracture, the mortality risk immediately after fracture may have been underestimated.
The model is also capable of running probabilistic sensitivity analysis (PSA). However, PSA results were not presented because PSA is a standard model functionality rather than a novel feature, and the deterministic sensitivity analyses provided in this study sufficiently serve the purpose of this research.
In this study, the cost-effectiveness was estimated for a bone-forming agent that might correspond to some overall perception of the characteristics of this class of compounds. However, in future reimbursement applications, the model should be tailored to assess the cost-effectiveness of a specific bone-forming drug in its intended indications in relevant countries. In addition, another future use of the model would be to calculate cost-effectiveness intervention thresholds (i.e. 10-year MOF probabilities at which the bone-forming agent becomes cost-effective) to support the clinical intervention thresholds as recently suggested for very high fracture risk patients by IOF and ESCEO [7].
Conclusion
Incorporating imminent fracture risk in osteoporosis cost-effectiveness modelling has a significant impact on the cost-effectiveness results when evaluating fracture prevention treatments in patients with a recent fracture. Bone-forming agents in a sequence with antiresorptive treatment can be cost-effective compared with antiresorptive therapy despite the difference in treatment acquisition costs.
Availability of data and material Not applicable.
Funding This study was funded by UCB Pharma and Amgen.
Compliance with ethical standards
Conflict of interest J.A.K. reports grants from Amgen, Eli Lilly and Radius Health and consulting fees from Theramex. J.A.K. is the architect of FRAX® but has no financial interest. E.S., I.L., F.B. and O.S. are employees of Quantify Research which was contracted and paid by UCB, a pharmaceutical company marketing products for osteoporosis, to conduct the study. The authors did not receive direct payment as a result of this work outside of their normal salary payments. D.W., C.S. and M.C. are employees of UCB Pharma. B.S. is an employee of Amgen (Europe).
Code availability Not applicable.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/. | 2021-01-07T15:55:26.027Z | 2021-01-07T00:00:00.000 | {
"year": 2021,
"sha1": "9bbc296ba9a5405b463855cd92971b9e069d8a88",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00198-020-05765-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9bbc296ba9a5405b463855cd92971b9e069d8a88",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53139246 | pes2o/s2orc | v3-fos-license | The osteoarthritis-associated gene PAPSS2 promotes differentiation and matrix formation in ATDC5 chondrogenic cells
3′-Phosphoadenosine 5′-phosphosulfate synthetase 2 (PAPSS2) has been shown to be important in the development of normal skeletal structure. The aim of the present study was to evaluate the role of PAPSS2 in the differentiation of chondrocytes as well as the underlying mechanisms. Using lentivirus-mediated RNA interference and a retrovirus, PAPSS2 gene silencing and overexpression were performed in ATDC5 chondrogenic cells. Chondrocyte differentiation and chondrogenic gene markers associated with extracellular matrix formation were assessed. The mRNA and protein expression of the Wnt4, β-catenin and SOX9 genes was examined. PAPSS2 transcript expression levels progressively declined in induced ATDC5 chondrocyte-like cells during differentiation. Silencing of PAPSS2 expression significantly attenuated cell differentiation and decreased the expression of collagen II and X. In contrast, overexpression of PAPSS2 promoted the differentiation of ATDC5 chondrogenic cells. The mRNA expression levels of Wnt4 and SOX9 decreased significantly in PAPSS2-knockdown cells vs. control cells, whereas this expression was increased in the cells overexpressing PAPSS2. These data indicate that PAPSS2 regulates aggrecan activity as well as cell differentiation. The findings favor a mechanism by which PAPSS2 induces differentiation in ATDC5 cells via direct regulation of early signaling events that promote the formation of collagenous matrix components. This control is probably mediated via extracellular matrix formation and Wnt/β-catenin signaling pathways.
Introduction
The formation of endochondral bone is a complex developmental process. This process is initiated by the differentiation and subsequent proliferation of mesenchymal stem cells to chondroblasts. After leaving the cell cycle for terminal differentiation, the chondrocytes become pre-hypertrophic and finally hypertrophic. Cytokines, hormones and transcription factors may all affect this process, known as chondrogenesis (1-3).
Sulfate conjugation reactions are catalyzed by sulfotransferase enzymes via 3'-phosphoadenosine 5'-phosphosulfate (PAPS). PAPS serves as a high-energy sulfate donor and is synthesized from adenosine triphosphate and inorganic sulfate. This synthesis uses two isoforms of PAPS synthetase (PAPSS): PAPSS1 and PAPSS2 (4-7). Availability of PAPS is a prerequisite for the sulfation of biological molecules, including proteoglycans that function as key extracellular matrix components. This highlights the importance of sulfation in bone development and growth, which in turn depends on the integrity of the extracellular matrix. Due to the requirement for PAPS in sulfation, the biosynthesis of PAPS may influence the rate of sulfation reactions in cells.
Previously, a large Pakistani pedigree was reported to have a homozygous mutation (S475X) resulting in spondyloepimetaphyseal dysplasia (SEMD), Pakistani type. The symptoms include enlarged knee joints, short stature, short, bowed lower limbs, as well as kyphoscoliosis and brachydactyly (characterized by complex shortening of the digits) (8-11). SEMD of the Omani type is caused by deficient chondroitin 6-O-sulfotransferase activity due to an abnormal PAPSS2 gene that impairs PAPS biosynthesis (4,12). A deficiency of PAPSS2 activity results in long bone shortening and bowing, as well as knee arthritis and degenerative joint disease. Mutant mice lacking PAPSS2 activity have been proposed as a model to study PAPSS2 deficiency-associated arthrosis, due to the features of premature and degenerative joint disease and other similarities to human SEMD (9). Of note, postnatal skeletal development is specifically affected in this PAPSS2 mutant mouse model. Although the skeleton appears to be normal in newborn brachymorphic mice, the columnar and hypertrophic zones of the epiphyseal growth plates are small, which is consistent with reduced growth (11,13).
In previous studies by our group, microarray analysis provided evidence for the involvement of PAPSS2 genes in patients suffering from endemic knee osteoarthritis and Kashin-Beck disease, which manifests as shortened long bones and enlarged joints in the knees and fingers; in addition, it was demonstrated that PAPSS2 influenced osteoblast differentiation via the Smad signaling pathway (14). These observations suggest that PAPSS2 participates in fibrillogenesis and/or matrix calcification/mineralization. However, the precise mechanisms through which PAPSS2 influences cartilage development and formation are still largely elusive. To investigate cell signaling mechanisms involving PAPSS2, biochemical and molecular studies on the structural and functional characteristics of this enzyme are required. The present study examined the role of PAPSS2 in the extracellular signal-regulated kinase pathway that controls chondrocyte differentiation to better understand how PAPSS2 influences chondrogenesis.
Materials and methods
Ethics statement. The present study was approved by the Animal Experimental Ethics Committee of Xi'an Jiaotong University (Xi'an, China).
Cell line and culture. Experiments were performed using the murine teratoma cell line ATDC5 (American Type Culture Collection, Manassas, VA, USA). They are chondrogenic cells with processes analogous to chondrocyte differentiation (15,16). The cells were cultured in Dulbecco's modified Eagle's medium with nutrient mixture F-12 (DMEM/F-12; Life Sciences; Thermo Fisher Scientific, Inc.) with 10% fetal bovine serum (FBS; Gibco; Thermo Fisher Scientific, Inc., Waltham, MA, USA) as well as 100 U/ml penicillin and 100 mg/ml streptomycin. Cell culture was performed at 37˚C in a humidified atmosphere with 5% CO2. The seeding density was 1x10^4 cells/cm^2 and cells were passaged every 5-7 days, but for no more than 20 passages. For pre-chondrocytic ATDC5-cell differentiation, the cells were induced via a chondrogenic growth medium containing 100 mg/ml ascorbic acid (17,18). To induce chondrocytic differentiation, cells were seeded on 6-well plates and incubated for 2 weeks in DMEM/F-12 supplemented with 1,600 nM human biosynthetic insulin (Sigma-Aldrich; Merck KGaA, Darmstadt, Germany) (18).
PAPSS2 small hairpin (sh)RNA lentivirus packaging, retrovirus vector, assessment of viral titers and cell transfection.
A small hairpin RNA (shRNA) lentiviral packaging system (Open Biosystems, Lafayette, CO, USA) was used to modify PAPSS2 gene expression in accordance with the manufacturer's instructions (14). The human embryonic kidney cell line (293T) was obtained from the American Type Culture Collection (Manassas, VA, USA) and transfected with expression constructs (pLenti-P2A or pLenti-P2B) or constructs containing a scrambled shRNA sequence and associated packaging. The PAPSS2 shRNA sequences are presented in Table I. Five individual pLB-PAPSS2 shRNA (P-S) vectors (each, 500 ng/µl) and a control pLB-scramble shRNA (PLB) vector (Open Biosystems) were co-transfected with the packaging plasmids (500 ng/µl). The retroviral vector pBMN-PAPSS2 was constructed by inserting a full-length 3.64 kb PAPSS2 cDNA (accession no. NM_011864) into the EcoRI and NotI sites of pBMN-I-GFP (Addgene), and packaging was performed following the protocol from the Dr Garry Nolan Laboratory, Stanford University (Stanford, CA, USA) (14). Briefly, the retrovirus vectors pBMN-I-GFP (control vector) and pBMN-PAPSS2 were separately transfected into the Phoenix-ecotropic packaging cells using the CaCl2 precipitation method. A total of 2-3 days later, the viral supernatant was harvested, and lentiviral titers were assessed in 293T cells with serial dilutions of the lentivirus in the presence of 4 µg/ml polybrene (Sigma-Aldrich; Merck KGaA). Following the transfection, the cells were placed in a 32˚C humidified incubator for 48 h (32˚C aids in stabilizing the virus). The media containing infectious virus were harvested and filtered through a 0.45-µm filter for the titration assay and for infecting ATDC5 cells; after 24 h, the virus-containing media were removed and replaced with fresh complete medium. Following a further 48 h, GFP and PAPSS2 protein expression were confirmed by observation of GFP+ cells, immunostaining and western blot analysis. The retroviruses carrying pBMN-PAPSS2 and pBMN-I-GFP were then used to infect 70-80% subconfluent ATDC5 cells in the presence of 6 µg/ml polybrene (Sigma-Aldrich; Merck KGaA) and viral supernatant for 24 h. After induction with chondrogenic medium for various durations according to the experimental protocol, the cells were analyzed. On days 0, 2, 4, 6, 8 and 10, they were detached from 6-well plates by trypsinization for counting with a cell counting chamber.
Construction of overexpression vectors and production of recombinant retrovirus.
The retroviral vector pBMN-PAPSS2 carried a full-length 3.64-kb PAPSS2 complementary (c)DNA (GenBank accession no. NM_011864). This construct was inserted into the EcoRI and NotI restriction sites of the pBMN-I-green fluorescent protein (GFP) expression plasmid (Addgene, Cambridge, MA, USA). This downstream insertion and viral packaging were performed based on Stanford/Nolan lab protocols (14). In brief, the retroviral pBMN-I-GFP (control) as well as pBMN-PAPSS2 vectors were transfected separately into Phoenix ecotropic packaging cells via calcium chloride precipitation. At 48 h after transfection, the cells were cultured at 32˚C for 48 h to stabilize the virus. The media with infectious viral particles were collected and filtered through a 0.45-µm filter to complete titration assays and transfect ATDC5 cells. The expression of GFP and PAPSS2 was measured by fluorescence detection, immunostaining and western blot analysis. The pBMN-PAPSS2 and pBMN-I-GFP vectors were used to transfect ATDC5 cells in subconfluent culture (80% subconfluence) with 6 µg/ml polybrene. Cell proliferation was assessed in DMEM/F12 in 6-well plates. Cells were induced under conditions to either overexpress or silence the PAPSS2 gene (depending on the differentiation/RNA interference medium used). On days 0, 2, 4, 6, 8 and 10, they were detached from 6-well plates by trypsinization for counting with a cell counting chamber.
(Table I, listing the PAPSS2 shRNA/primer sequences in the 5'-3' direction, is not reproduced here.)
For immunohistochemical staining, cartilage slices were fixed with 4% paraformaldehyde for 24 h at 4˚C following the removal of the tissue and decalcified at room temperature for 10 min in 3% ethylenediaminetetraacetic acid. Samples were dehydrated in a series of alcohols, cleared in xylene and embedded in paraffin wax at room temperature. Paraffin sections (6-8 µm) were cut, mounted on slides, pretreated with 2% poly-L-lysine at 4˚C for 10 min and stored at room temperature until used. Deparaffinized cartilage sections were incubated with testicular hyaluronidase (2 mg/ml in PBS, pH 5; cat. no. E0037; Shanghai Baoman Biotechnology Co., Ltd., Shanghai, China) for 30 min at room temperature. Samples were incubated with primary antibodies (as aforementioned) overnight at 4˚C and visualized using alkaline phosphatase-labeled secondary antibodies (1:100) obtained from, and used with, the Histostain™-SAP kit (cat. no. SAP-9100; OriGene Technologies, Inc., Rockville, MD, USA). Visualization was performed for 30 min at room temperature using 3-hydroxy-2-naphthoic acid 2,4-dimethylanilide (1%). Finally, nuclei were counterstained with hematoxylin for 2 min at room temperature. Sections were examined and counted using a light microscope for cytoplasmic and pericellular staining. Four to six randomly selected fields in each zone were counted at a magnification of x400.
Alcian blue staining. Alcian blue staining of ATDC5 cells was performed as previously described (21). Following fixation in 4% (w/v) paraformaldehyde in PBS for 30 min, samples were stained with Alcian blue (Sigma-Aldrich; Merck KGaA) for 5 min, followed by dehydration with 95% ethanol. Images were then captured under a microscope (Nikon TE2000-S; Nikon) and analyzed using Image-Pro Plus 6.0 software (Media Cybernetics, Inc., Rockville, MD, USA).
Statistical analyses. SPSS software (version 17.0; SPSS, Inc., Chicago, IL, USA) was used for one-way analysis of variance, followed by Tukey's honestly significant difference test for post-hoc analysis. For comparisons between two groups, statistical significance was assessed using Student's t-test. All experiments were performed at least in triplicate. Values are expressed as the mean ± standard deviation. P<0.05 was considered to indicate a statistically significant difference.
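The workflow described above (one-way ANOVA followed by Tukey's post-hoc test, and Student's t-test for two-group comparisons) can be sketched as follows; the data values are invented for illustration and are not the study's measurements.

```python
# Illustrative ANOVA + Tukey HSD + t-test, mirroring the described statistical workflow.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.00, 1.05, 0.97])
ps_group = np.array([0.55, 0.60, 0.52])   # PAPSS2-silenced, hypothetical values
po_group = np.array([1.60, 1.72, 1.65])   # PAPSS2-overexpressing, hypothetical values

f_stat, p_anova = stats.f_oneway(control, ps_group, po_group)     # one-way ANOVA
values = np.concatenate([control, ps_group, po_group])
groups = ["control"] * 3 + ["PS"] * 3 + ["PO"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))               # Tukey post-hoc test

t_stat, p_ttest = stats.ttest_ind(control, ps_group)               # two-group comparison
print(f"ANOVA P={p_anova:.3g}, t-test P={p_ttest:.3g}")
```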
Results
Expression of PAPSS2 mRNA in cartilage and role in medium-induced cell differentiation. PAPSS2 mRNA expression was measured in various tissues from mice via RT-qPCR. The expression levels were elevated in calvaria, bone, liver and lung, and moderately elevated in the heart. The expression in calvaria and long bone was significantly higher than in the other tissues (Fig. 1A). PAPSS2 mRNA expression was low in the muscle, spleen, kidney and brain (Fig. 1A). The housekeeping gene murine β-actin was present at constant levels across all tissues. This highlights the reliability of the chosen RT-qPCR method in measuring PAPSS2 mRNA. These results are consistent with those of a previous study (4), in which PAPSS2 mRNA expression was higher in mouse chondrocytes than in other tissues (4). PAPSS2 was also detected in cartilage tissue following immunohistochemical staining for PAPSS2 (Fig. 1A-C), and in the murine ATDC5 cell line at the protein level by western blot analysis (Fig. 1D and E). These results indicate marked PAPSS2 mRNA expression in mouse cartilage, as well as protein expression in mouse chondrocytes exposed to chondrogenic media. The in vitro results indicate an important role for PAPSS2 in chondrocyte differentiation. To analyze the molecular functions of PAPSS2 in the proliferation and differentiation of chondrocytes, groups of PAPSS2-overexpressing (PO) and -silenced (PS) ATDC5 cells were established (Fig. 2A-D). The silencing efficiency of the RNAi technique was assessed by RT-qPCR (data not shown) and western blot analysis following transfection with shRNA specific to PAPSS2 mRNA. The protein expression of PAPSS2 was almost completely silenced in these stably transfected cells relative to that in the controls (transfected with empty vector) at 7 and 14 days post-transfection. Compared with the parental cells, PAPSS2 protein expression was decreased in knockdown cells when assessed at 7 days (Fig. 2B and D). At 14 days of differentiation, no morphological alterations were observed in the cells of the PS and PO groups when compared with control ATDC5 cells (Fig. 2E). There were no morphological changes after chondrogenic differentiation compared with baseline (e.g. day 14 vs. day 0).
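For orientation, relative mRNA quantification normalized to a housekeeping gene such as β-actin is commonly calculated with the 2^-ΔΔCt method; whether this exact calculation was used in the present study is an assumption, and the Ct values below are invented for illustration.

```python
# Hedged sketch of 2^-ΔΔCt relative quantification against a housekeeping gene.
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    delta_ct_sample = ct_target - ct_reference          # normalise sample to housekeeping gene
    delta_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)

# PAPSS2 in calvaria vs. a low-expressing calibrator tissue (hypothetical Ct values):
print(relative_expression(ct_target=22.0, ct_reference=17.0,
                          ct_target_cal=26.5, ct_reference_cal=17.2))   # ≈ 20-fold higher
```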
PAPSS2 promotes aggrecan activity and chondrocyte matrix production in chondrogenic terminal differentiation events.
During development, chondrocytes undergo a hypertrophic phase followed by terminal differentiation and mineralization. To highlight the importance of PAPSS2 in chondrocyte differentiation, a lentivirus-mediated RNA interference (RNAi) technique was applied to silence the PAPSS2 gene in ATDC5 cells exposed to chondrogenic media. To analyze the effect of PAPSS2 overexpression and knockdown on aggrecan expression, Alcian blue staining assays were performed after induction of ATDC5 cells in chondrogenic media containing the vector pBMN-PAPSS2 for 14 days (Fig. 3A). Following the silencing of the PAPSS2 gene, extracellular matrix differentiation was significantly reduced as determined by Alcian blue staining, and the levels of ColII and ColX were decreased as determined by immunocytochemical staining. To evaluate the effect of PAPSS2 overexpression on differentiation, the ATDC5 cells were induced for 7 days in media containing either retroviral constructs to induce PAPSS2 overexpression or control (empty) constructs; shRNA was used to modify PAPSS2 gene expression. PAPSS2 overexpression resulted in increased levels of ColII and ColX as determined by immunocytochemical staining. Furthermore, extracellular matrix differentiation was significantly increased, as determined by Alcian blue staining (Fig. 3A). Of note, PAPSS2 overexpression was associated with marked increases in aggrecan activity and chondrocyte matrix production (Fig. 3A). Furthermore, positive immunostaining for COL2 and COLX was evaluated in the PS and PO groups, revealing that the percentage of chondrocytes positive for COLX and COL2 in the PS group was significantly lower than that in the control group (P<0.01; Table II; Fig. 3). The percentage of positive staining for COLX and COL2 in the PO group was significantly higher than that in the controls (P<0.001).
Effect of ectopic PAPSS2 gene expression on the time course of cell proliferation.
The ATDC5 cells, upon reaching 80% subconfluence, were grown in chondrogenic media and simultaneously transfected with the retroviral pBMN-PAPSS2 vector to induce PAPSS2 overexpression. A lentivirus-mediated RNAi was applied to silence the PAPSS2 gene in ATDC5 cells exposed to chondrogenic media. The effects of these treatments to either overexpress or silence the PAPSS2 gene during ATDC5 proliferation and differentiation were assessed. Cell counts were assessed on days 0, 1, 2, 4, 7 and 10. At the selected time-points, the cells were trypsinized and counted. The cells maintained in medium to overexpress PAPSS2 proliferated at a significantly higher rate compared with the cells in the control group, while proliferation was significantly decreased for cells treated with medium to silence the PAPSS2 gene (Fig. 4).
Effect of ectopic PAPSS2 gene expression on chondrocyte-specific markers and modulation of the Wnt/β-catenin pathway in the transfected ATDC5 cells.

Chondrogenic induction was initiated in ATDC5 cells using chondrogenic media containing the retroviral vector pBMN-PAPSS2 (to promote PAPSS2 overexpression) or the lentiviral PAPSS2-shRNA vector (to knock down PAPSS2 expression) to study the effect of ectopic PAPSS2 gene expression on known chondrogenic markers and collagen proteins. Following exposure to media containing either retroviral constructs to induce PAPSS2 overexpression or control (empty) constructs, the PAPSS2 protein levels were evaluated by western blot analysis. As expected, the levels of various chondrogenic markers were significantly higher in cells incubated in chondrogenic media containing constructs to induce PAPSS2 overexpression compared with those in the control group. RT-qPCR analysis indicated significant increases in the expression of the chondrocyte marker genes COLX, COL2, Wnt4 and SOX9 in cells overexpressing PAPSS2 (Fig. 5A-D). However, the levels of various chondrogenic markers were significantly lower in cells of the PAPSS2-silencing vector group compared with those in the control group. RT-qPCR analysis indicated significant decreases in the expression of the chondrocyte marker genes COLX, COL2, Wnt4 and SOX9 in cells in which PAPSS2 was silenced (Fig. 5A-D). To further investigate whether PAPSS2 affects the activation of Wnt/β-catenin signaling, β-catenin and Wnt4 levels were observed by RT-qPCR. As presented in Fig. 5, there was a significant increase in Wnt4 and β-catenin in the PAPSS2-silenced group compared with the control, and a decrease in the PAPSS2 overexpression group compared with the control. These results clearly indicated that both silencing and overexpression of PAPSS2 affected Wnt/β-catenin signaling activation. The effect of PAPSS2 overexpression on COL2 and COLX confirmed the effect observed at the protein level by immunocytochemistry (Fig. 3). In addition, quantification of Alcian blue staining after induction of ATDC5 cells in chondrogenic media containing the retroviral vector pBMN-PAPSS2 for 14 days (from Fig. 3A) indicated that PAPSS2 overexpression was associated with marked increases in aggrecan activity and chondrocyte matrix production (Fig. 5).
Discussion
PAPSS2 is mostly expressed during the formation of bone elements in the mouse embryo as well as in the cartilage of the newborn mouse (11,22,23). Consistent with these previous studies, the results of the present study indicated that PAPSS2 is highly expressed in the calvaria and bone tissues of 14-day-old C57BL/6J mice. A notable amount of PAPSS2 was observed in the liver, lung and heart; however, this was not significant. A minimal amount of PAPSS2 was observed in the brain, muscle, kidneys and spleen. Cartilage is well known to be avascular, alymphatic and aneural, and chondrocytes are its only cell type. Consequently, cartilage tissue has limited innate capacity for self-regeneration (e.g., after damage from physical injury or degenerative disease). This makes it vulnerable to changing environmental conditions. In addition to physical trauma, autoimmune reactions may lead to dysfunction of cartilage and predispose to severe joint conditions, including osteoarthritis (OA) and rheumatoid arthritis. To achieve a thorough understanding of the factors that regulate cartilage and bone formation, studies are required to identify key signaling pathways and molecules that control chondrogenesis. Chondrogenesis is a key stage in cartilage and bone formation and is the process through which mesenchymal cells differentiate into chondrocytes. In recent decades, interest in the development of novel tissue engineering strategies for cartilage repair has increased, as reflected by the large number of studies performed in this field (24-26).
To improve treatment of cartilage-associated diseases, the potential utility of strategies to regenerate functional cartilage tissue via the induction of chondrogenesis is becoming increasingly important. The chondrogenic ATDC5 cell line is derived from mouse teratocarcinoma cells and enters a sequential differentiation process analogous to that in chondrocytes. This makes them useful for in vitro studies of cell behavior during chondrogenesis. It also provides a useful model for studies addressing the role of signaling pathways in skeletal development and for the characterization of interactions of chondrocytes with novel synthetic materials. To date, >200 studies based on the ATDC5 cell line have generated a wealth of data (27-32).
Mutations that inactivate the PAPSS2 gene are associated with severe inherited developmental skeletal disorders, including Kashin-Beck disease, SEMD in humans, and brachymorphism in mice (14,22,33). Under-sulfation due to the inhibition of PAPS synthetase controls extracellular matrix (ECM) expression during chondrogenesis (34-37). PAPSS2 deficiency produces osteochondrodysplasias. These are genetically heterogeneous disorders that impair normal skeletal growth, linear growth, as well as cartilage and bone health. The physical presentation includes short-limb stature, cleft palate, generalized dysplasia and hitchhiker's thumb. Inbred mice display a distinct form of SEMD that is inherited through a recessive pattern attributed to a PAPSS2 polymorphism in 10q23-24. Diastrophic dysplasia is an autosomal recessive osteochondrodysplasia (38-40). It is caused by reduced intracellular sulfate levels, which lead to under-sulfation of proteoglycans. Sequence polymorphisms in the PAPSS2 gene in OA patients were investigated by Oostdijk et al (22). OA is a musculoskeletal disorder featuring degeneration of articular cartilage. Single nucleotide polymorphism analysis in certain Japanese populations with OA affecting the knee has also provided evidence for involvement of the gene in OA pathogenesis; these investigations revealed differences in the distribution of two PAPSS2 isoforms that were only apparent in OA-affected tissues (10,11).
ATDC5 pre-chondrocytes undergo differentiation into chondrocytes that produce ECM components. The ATDC5 cell line is frequently used to study particular genes in chondrocyte differentiation. In the present study, overexpression of PAPSS2 or knockdown via shRNA suggested an important role for PAPSS2 in initiating the differentiation of chondrocytes. After silencing of the PAPSS2 gene, the levels of multiple markers associated with ATDC5 cell differentiation were significantly reduced, whereas PAPSS2 overexpression caused the levels of these markers to be increased. According to the present immunocytochemistry results, the percentages of chondrocytes positive for COL2 and COLX were significantly higher in the PAPSS2 overexpression group than in the controls. Conversely, the percentages of chondrocytes positive for COL2 and COLX were significantly lower in PAPSS2 knockdown cells compared with those in the control cells.
For chondrogenesis to proceed effectively, differentiation requires specification and maintenance of lineage decisions through activation of stage-specific markers. Genes belonging to the SOX family, which includes >30 members, have a central role in regulating chondrogenesis. The SOX9 gene is expressed in mesenchymal precursors and developing chondrocytes until the pre-hypertrophic stage, but not in subsequent lineages.
SOX9 is required for chondrocyte specification and early differentiation (41-43). It maintains growth plate chondrocyte proliferation, delays pre-hypertrophy and facilitates subsequent hypertrophy prior to terminal maturation and apoptosis. In addition, SOX9 directly activates all major cartilage-specific ECM genes expressed by early-stage chondrocytes and is involved in the chondrocyte differentiation pathway at multiple stages. SOX9 has been demonstrated to induce COL2A1 expression by activating a 48-bp enhancer element residing in the first intron of this gene. It promotes the differentiation of mesenchymal stem cells into chondrocytes (44). SOX9 inactivation in the growth plate resulted in dwarfism due to shortening of the columnar and hypertrophic zones and in advanced ossification due to premature pre-hypertrophy and matrix mineralization. These effects are typical of campomelic dysplasia, a severe genetic disorder that affects the development of the skeleton, and are consistent with the notion that this disease arises due to growth plate and primordial cartilage defects.

Figure 5. Effect of ectopic PAPSS2 gene expression on chondrogenic genes. Upon reaching 80% subconfluence, ATDC5 cells were incubated in differentiation media containing either the lentiviral plasmid expressing PAPSS2 small hairpin RNA for knockdown or the pBMN-PAPSS2 vector for overexpression of PAPSS2 for 7 or 14 days. The expression levels of (A) COLX, (B) COL2, (C) Wnt4, (D) SOX9 and (E) β-catenin relative to β-actin expression were determined by reverse transcription-quantitative polymerase chain reaction analysis. (F) Cells were subjected to Alcian blue and neutral red staining, followed by dye extraction for absorbance measurements. The ratio of Alcian blue to neutral red was determined. Values are expressed as the mean ± standard deviation (n=3). *P<0.05 vs. control. PO, PAPSS overexpression group; PS, PAPSS suppression group; PAPSS2, 3'-phosphoadenosine 5'-phosphosulfate synthetase 2; COL, collagen; SOX, SRY-box.
The Wnt/β-catenin signaling pathway has an important role in regulating the growth and differentiation of chondrocytes (45-47). To study the possible role of PAPSS2 in the regulation of Wnt4 expression as part of the Wnt pathway, ATDC5 cells were treated with differentiation medium containing the PAPSS2 overexpression or shRNA vector. The results indicated that PAPSS2 treatment inhibits Wnt/β-catenin activity and chondrocyte differentiation by upregulating Wnt4 and decreasing β-catenin mRNA levels in ATDC5 cells. These results suggest that PAPSS2 suppresses chondrocyte dedifferentiation locally by modulating Wnt/β-catenin signaling activity. It may interact with parathyroid hormone-related peptide in a negative feedback loop, and Wnt/β-catenin regulates chondrocyte differentiation and the initiation of the hypertrophic phase. Wnt/β-catenin in chondrocytes is regulated by several factors (47,48). The present study indicated that PAPSS2 regulates Wnt/β-catenin signaling expression during the transition of chondrocytes from proliferation to hypertrophic differentiation. The effects of PAPSS2 on Wnt/β-catenin signaling pathways were further examined, and both silencing and overexpression of PAPSS2 affected this signaling pathway. It is suggested that PAPSS2 is an upstream regulator of Wnt/β-catenin and that its chondrogenic properties may be mediated through mechanisms that need further investigation.
The results of the present study do not rule out the possibility that PAPSS2 modulates osteopontin activity (either directly or indirectly) or chondrogenic genes via multiple mechanisms, including the Wnt/β-catenin signaling pathway. Cells maintained in medium to overexpress PAPSS2 proliferated at a significantly higher rate compared with the cells in the control group, while proliferation was significantly decreased for cells treated with medium to silence the PAPSS2 gene. PAPSS2 overexpression caused the levels of ColII, ColX, Sox9 and Wnt4 to be increased. After silencing of the PAPSS2 gene, the levels of multiple markers (ColII, ColX, Sox9 and Wnt4) associated with ATDC5 cell differentiation were significantly reduced. The present study supports an essential role for PAPSS2 in chondrocyte differentiation via inducing early signaling events. More detailed studies are required to assess other genes identified to be affected by PAPSS2 knockdown, including transcription factors, chondrocyte differentiation enzymes, proteins associated with bone morphogenesis as well as ECM proteins, and their potential mechanisms regarding PAPSS2.
The present results indicate that PAPSS2 induces the chondrogenic differentiation of ATDC5 cells by crosstalk with Wnt signaling. PAPSS2 promotes ATDC5 differentiation by upregulating the production of collagenous matrix components. Wnt/β-catenin pathway activation after matrix formation leads to chondrocyte differentiation. Future studies will assess the key markers in the underlying pathways of bone formation or metabolic processes at the gene and protein levels, including the role of PAPSS2 in the detailed regulation of collagen assembly and its functional properties in Wnt/β-catenin signaling pathways during bone and cartilage formation. Transcription and growth factors that regulate expression at the gene and protein level, and the accumulation and degradation of PAPSS2, will also be assessed.
Finally, the role of PAPSS2 in controlling the activity of other signaling pathways will also be addressed.
Figure 2. (A) The Phoenix ecotropic packaging cell line was only used for packaging of the plasmids, which contained pBMN-I-GFP (vector-only control) or pBMN-I-GFP-PAPSS2. After transfection for 48 h, the ATDC5 cells were induced with osteogenic induction culture media for 7 or 14 days. The level of PAPSS2 was assessed by western blot analysis. (B) ATDC5 cells were transfected with pLenti-shRNA PAPSS2 or pLenti-scrambled shRNA viruses for 48 h and treated with osteogenic induction media for 7 and 14 days. (C) Quantified expression values from A normalized to β-actin levels. PAPSS2 was decreased in control cells and was significantly elevated in cells of the PO group. (D) Quantified expression values from B normalized to GAPDH levels indicated that the protein expression levels of PAPSS2 slowly decreased in control cells and were markedly lower or even undetectable in the PS group. (E) Morphology of the parental ATDC5 cells and ATDC5 cells grown in differentiation media containing either the pLenti PAPSS2-shRNA knockdown vector or the pBMN-PAPSS2 overexpression vector for 14 days (scale bar, 0.02 mm). Overexpression or knockdown of PAPSS2 did not significantly alter the appearance of ATDC5 cells. *P<0.05 vs. 7-day control; #P<0.05 vs. 14-day control. PO, PAPSS overexpression group; PS, PAPSS suppression group; PAPSS2, 3'-phosphoadenosine 5'-phosphosulfate synthetase 2; shRNA, small hairpin RNA; GFP, green fluorescent protein; pLenti, lentiviral plasmid.
Table II. Percentage of positive immunocytochemical staining of chondrocytes of the PS, PO and control groups for types II and X collagen at 14 days of induction/transfection. | 2018-10-12T16:31:33.247Z | 2018-10-11T00:00:00.000 | {
"year": 2018,
"sha1": "bf844865c344ed5145b1c25b22597734b4c6723e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.3892/etm.2018.6843",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "84df91406b71c76b3fa8e5eecbade059465686b9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
91499768 | pes2o/s2orc | v3-fos-license | Ichthyofauna of streams of the Rio Sapucaí basin, upper Rio Paraná system, Minas Gerais, Brazil
The Rio Sapucaí basin, in Minas Gerais State, Brazil, is one of the many watersheds of the upper Rio Paraná system. Ichthyofauna surveys in this basin, in general, are scarce. In addition, small rivers and streams of the region have been targets of anthropogenic actions (e.g., pollution), which suggests that more ichthyological studies must be performed within the watershed. In this study we provide a survey of species that occur within three streams of the lower Rio Sapucaí basin. Samples were collected in April, July, and November 2017 and in May 2018. Collections resulted in 349 individuals belonging to 28 species, five orders, and 12 families. Among our findings are three putatively undescribed species and the first records of Oligosarcus argenteus and Pareiorhina hyptiorhachis within the Rio Paraná system.
Introduction
Ichthyofauna surveys are important for the conservation of freshwater fishes at both short and long-term scales. These studies provide additional information about species' distribution (e.g., Valdiviezo-Rivera et al. 2017, Bertora et al. 2018, Delariva et al. 2018, Honorio & Martins 2018, Oliveira-Silva et al. 2018), which, in turn, may assist in new assessments about their "conservation status" (e.g., Melo et al. 2017). Additionally, surveys provide useful data for the establishment of freshwater protected areas (sensu Azevedo-Santos et al. 2018a). Therefore, ichthyological surveys should be carried out more frequently, especially in Brazilian freshwaters.
The Rio Sapucaí basin (~560,000 hectares; Magalhães Jr & Diniz 1997), Minas Gerais, Brazil, is part of the upper Rio Paraná system (Magalhães Jr & Diniz 1997). To our knowledge, only two ichthyological surveys have been published for this watershed. Ingenito & Buckup (2007) provided a list of the fishes of three localities in the upper portion of the basin near the Serra da Mantiqueira. Subsequently, Belei & Sampaio (2012) published a work on the fishes from the Rio Lourenço Velho, a direct tributary of the Rio Sapucaí. However, streams of the lower region of the watershed remain understudied.
Countless rivers and streams of the Rio Sapucaí basin have been targets of anthropogenic actions (e.g., small dams; see Belei & Sampaio 2012), which can significantly impact the overall biodiversity (Pelicice et al. 2017). These actions coupled with the lack of biodiversity knowledge suggest that more surveys must be conducted within the watershed. In this study we provide the results of a fish survey conducted in three different streams of the lower portion of the Rio Sapucaí basin, in Minas Gerais, Brazil.
Material and Methods
Fishes were collected in April, July, and November 2017, and in May 2018 (totaling four collections, one per month), across three different streams of the Rio Sapucaí basin (Table 1; Figure 1-2). Sampling occurred during daytime roughly 100 to 200 meters upstream of each stream. Collections were carried out with a small cast net (1.4 cm of mesh in opposite nodes), a hand net (~1.5 mm mesh), gill nets (1 and 2 cm in opposite nodes), and fishhooks of different sizes. Collections were performed with permission issued by the Brazilian Institute of Environment and Renewable Natural Resources (IBAMA, in Portuguese) - license numbers 46904-1 and 63177-1.
Species reported in Table 2 were classified according to Fricke et al. (2018).
Results
A total of 349 individuals representing five orders, 12 families, and 28 fish species (Table 2) were collected from all reaches (i.e., R1, R2, R3). The order and family with the highest species richness, considering all reaches, were Siluriformes and Characidae, respectively (Figure 3-4). We found the highest species richness at R3, with a total of 23 species, followed by R1 with nine and R2 with seven (Table 3).
Three putatively undescribed species were also collected: Astyanax sp. and 'Heptapterus' sp., both from R1, and Imparfinis sp., from R3. Additionally, we found individuals of Oligosarcus argenteus and Pareiorhina hyptiorhachis, which represent the first record of these two species within the Rio Paraná system (Table 3). Individuals of Trichomycterus septemradiatus were collected at R1, which also expands the distribution of this species into the Rio Paraná system. Lastly, we recorded Knodus moenkhausii and Poecilia vivipara, two non-native fish species within the Rio Sapucaí basin.
Discussion
Overall, members of the orders Siluriformes and Characiformes comprise the majority of species found in the three sampled streams of the Rio Sapucaí basin (see Figure 3). Dozens of investigators who have conducted fish surveys of rivers, reservoirs, or streams of the upper Paraná basin (e.g., Casatti et al. 2003, Smith et al. 2007, Smith & Petrere Jr 2007, Fagundes et al. 2015, Frota et al. 2016, Santos et al. 2017, Cavalli et al. 2018) have also found species richness to be highest in these orders. Therefore, the relatively high species counts in these two orders, as found in this study, are an expected result for many regions of the upper Rio Paraná system.
The families with the highest species richness in the lower Rio Sapucaí region are Characidae and Heptapteridae. However, in the context of the Rio Paraná basin as a whole, Loricariidae has been reported to contribute higher species richness than Heptapteridae (Langeani et al. 2007). This suggests that loricariid species may have been undersampled in this survey. Specimens were collected only during the day (see Material and Methods section), which may have contributed to an undersampling of loricariids and possibly other groups (see below). Therefore, for future studies we recommend sampling at each stream during the night as well.
Three putatively undescribed species (i.e., Astyanax sp., 'Heptapterus' sp., and Imparfinis sp.) were collected during this survey (Figure 5a, b, c). In addition to this study, Ingenito & Buckup (2007) discovered six undescribed species within the upper Rio Sapucaí basin. With these results, we believe more ichthyological surveys in rivers and streams of the Rio Sapucaí basin are necessary, as additional undescribed species likely remain to be discovered. Langeani et al. (2007) did not report Oligosarcus argenteus (Figure 5e) within the upper Rio Paraná system. Additionally, in a recent revision of the genus Oligosarcus, Ribeiro & Menezes (2015) reported this species as endemic to the Rio São Francisco and Rio Doce basins. In turn, Pareiorhina hyptiorhachis (Figure 5f) was recently described from the Rio Paraíba do Sul basin (Silva et al. 2013). Our study reports individuals of O. argenteus at R1 and individuals of P. hyptiorhachis at R1 and R2. Therefore, these findings represent the first records of these two species in the Rio Sapucaí basin, as well as in the Rio Paraná system in general. Trichomycterus septemradiatus (Figure 5h) was previously known only from its type locality, a single stream in the Rio Sapucaí basin (Katz et al. 2013). Our study reports individuals of T. septemradiatus at R1; therefore, we extend the known distribution of this species within the basin.
Individuals of two non-native species, Knodus moenkhausii and Poecilia vivipara, were collected in this survey (see Table 3). Knodus moenkhausii (Figure 5d) has previously been assigned by different authors as non-native to the upper Rio Paraná system (e.g., Langeani et al. 2007, Souza et al. 2015, Azevedo-Santos et al. 2018b). Poecilia vivipara (Figure 5g) has also been reported by Langeani et al. (2007) as a non-native species introduced to the upper Rio Paraná system. Therefore, we consider K. moenkhausii and P. vivipara as non-native species within the Rio Sapucaí basin (sensu Langeani et al. 2007). However, the sources of these introductions remain unknown.
Here we contribute to the knowledge of the fish fauna of the Rio Sapucaí basin, upper Rio Paraná system. However, we recognize that this study likely represents a small fraction of what remains to be sampled within this basin. The presence of putatively undescribed species, coupled with increasing anthropogenic effects, highlights the need to conduct more surveys of the ichthyofauna of waterbodies of this region.
Figure 1 .
Figure 1. Partial view of Rio Sapucaí (under influence of the Furnas reservoir), in Minas Gerais, Brazil, with the location of each reach (R1, R2, and R3) sampled.
Figure 4 .
Figure 4. Species richness by families collected in reaches of three different streams of the Rio Sapucaí basin, Minas Gerais, Brazil.
Table 1 .
Localities sampled from the lower Rio Sapucaí basin, Rio Paraná system, Minas Gerais, Brazil. Substrates were mostly composed of sand, with one locality composed of rocks. Some localities were impacted by cattle breeding.
Table 2 .
Fish species captured in three reaches of streams of the Rio Sapucaí basin, upper Rio Paraná system, Minas Gerais, Brazil. | 2019-04-03T13:10:20.729Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "d3194a4c07f1983177f83ddee7d248b972286be0",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/bn/v19n1/1676-0611-bn-19-01-e20180617.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d3194a4c07f1983177f83ddee7d248b972286be0",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
267751088 | pes2o/s2orc | v3-fos-license | A Distinct Radial Acceleration Relation across Brightest Cluster Galaxies and Galaxy Clusters
Recent studies reveal a radial acceleration relation (RAR) in galaxies, which illustrates a tight empirical correlation connecting the observational acceleration and the baryonic acceleration with a characteristic acceleration scale. However, a distinct RAR has been revealed on BCG-cluster scales with a seventeen times larger acceleration scale by the gravitational lensing effect. In this work, we systematically explored the acceleration and mass correlations between dynamical and baryonic components in 50 Brightest Cluster Galaxies (BCGs). To investigate the dynamical RAR in BCGs, we derived their dynamical accelerations from the stellar kinematics using the Jeans equation through Abel inversion and adopted the baryonic mass from the SDSS photometry. We explored the spatially resolved kinematic profiles with the largest integral field spectroscopy (IFS) data mounted by the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey. Our results demonstrate that the dynamical RAR in BCGs is consistent with the lensing RAR on BCG-cluster scales as well as a larger acceleration scale. This finding may imply that BCGs and galaxy clusters have fundamental differences from field galaxies. We also find a mass correlation, but it is less tight than the acceleration correlation.
Introduction
Tight empirical scaling relations are pivotal in physics and astronomy, particularly in the exploration of new fundamental concepts.A prominent example is the dark matter (DM) problem, manifested through the discrepancy between observed gravitational effects (inferred mass) and the mass calculated from luminosity (baryonic matter).However, an intriguing perspective arises when examining the empirical acceleration relation in galaxies.This perspective focuses on the discrepancy between the baryonic acceleration g bar ≡ GM bar (< r)/r 2 and the observed acceleration g obs , ascertained through methods like gravitational lensing, rotation curve analysis, or the Jeans equation.This inquiry into galactic accelerations uncovers a significant empirical scaling relation, which includes the identification of a characteristic acceleration scale, a key parameter in understanding these dynamics.
Recent discoveries have unveiled a tight empirical radial acceleration relation (RAR; McGaugh et al. 2016; Lelli et al. 2017) in spiral galaxies, providing a fresh perspective on the dark matter problem. The correlation between g_obs and g_bar can be parameterized as
$$g_{\mathrm{obs}} = \frac{g_{\mathrm{bar}}}{1 - \exp\!\left(-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}\right)}, \qquad (1)$$
which exhibits a characteristic acceleration scale g_† = (1.20 ± 0.02) × 10^−10 m s^−2 (McGaugh et al. 2016; Lelli et al. 2017; Li et al. 2018). Additionally, the low-acceleration limit (g_bar ≪ g_†) recovers the baryonic Tully-Fisher relation (BTFR; McGaugh et al. 2000; Verheijen 2001; McGaugh 2011; Lelli et al. 2016, 2019), v_f^4 = G M_bar g_†, which relates the flat rotation velocity v_f and the baryonic mass M_bar. Subsequent studies of the RAR in elliptical galaxies have yielded consistent results, not only in dynamics (Lelli et al. 2017; Chae et al. 2019, 2020) but also in gravitational lensing effects (Tian & Ko 2019; Brouwer et al. 2021). In elliptical galaxies, replacing v_f with the velocity dispersion σ yields the baryonic Faber-Jackson relation (BFJR; Sanders 2010; Famaey & McGaugh 2012), stating that σ^4 ∝ G M_bar g_†. These findings further emphasize the importance of the RAR in understanding the fundamental concepts behind the observed acceleration discrepancies in galaxies. Coincidentally, the RAR was predicted by MOdified Newtonian Dynamics (MOND; Milgrom 1983) four decades ago.
In addition to galaxies, the "missing mass" problem has also been observed in galaxy clusters (Zwicky 1933;Famaey & Mc-Gaugh 2012;Overzier 2016;Rines et al. 2016;Umetsu 2020), which represent the largest gravitational bound systems in the universe.Within the strong gravitational potential of galaxy clusters, the brightest cluster galaxy (BCG) is typically located at the center.However, even when accounting for the baryonic mass in the form of X-ray gas, the calculated mass remains insufficient when determining gravity using dynamics or gravitational lensing effects.
Upon examining the acceleration discrepancy on BCG-cluster scales, recent studies have revealed a more significant offset from the RAR in galaxy clusters (Chan & Del Popolo 2020; Tian et al. 2020; Pradyumna et al. 2021; Eckert et al. 2022; Tam et al. 2023; Liu et al. 2023; Li et al. 2023). In particular, a tight RAR (Tian et al. 2020) has been investigated in 20 massive galaxy clusters from the Cluster Lensing And Supernova survey with Hubble (CLASH) samples, referred to as the CLASH RAR, expressed as
$$g_{\mathrm{obs}} = \sqrt{g_{\ddagger}\, g_{\mathrm{bar}}}. \qquad (2)$$
This relation features a larger acceleration scale, g_‡ = (2.0 ± 0.1) × 10^−9 m s^−2, and a small lognormal intrinsic scatter of 15^{+3}_{−3}%. The CLASH RAR implies a parallel baryonic Faber-Jackson relation (BFJR) in galaxy clusters, given by σ^4 ∝ G M_bar g_‡. More recent studies confirm this kinematic counterpart as the mass-velocity dispersion relation (MVDR; Tian et al. 2021a,b), found in 29 galaxy clusters and 54 BCGs. The MVDR exhibits an acceleration scale g_‡ = (1.7 ± 0.7) × 10^−9 m s^−2, consistent with the CLASH samples, and a small lognormal intrinsic scatter of 10^{+2}_{−1}%. In this work, we investigate the dynamical RAR in BCGs to address the second acceleration scale, g_‡, on BCG-cluster scales. The confirmation of a distinct RAR necessitates a thorough examination of both gravitational mass and baryonic mass. To accomplish this, we employ integral field spectroscopy (IFS) for the internal kinematics of BCGs and model photometry for the stellar mass. It is intriguing to investigate both acceleration and mass correlations within the same sample. Moreover, by comparing dynamical and lensing samples, a universal acceleration scale can be scrutinized in galaxies, particularly for BCGs.
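To make the two scaling relations above concrete, the short Python sketch below evaluates the galactic RAR of Eq. (1) and the CLASH-type relation of Eq. (2) over a grid of baryonic accelerations. The functional forms and the values of g_† and g_‡ are those quoted above; the function names and the acceleration grid are purely illustrative.

```python
import numpy as np

G_DAGGER = 1.20e-10   # m s^-2, galactic RAR scale (McGaugh et al. 2016)
G_DDAGGER = 2.0e-9    # m s^-2, BCG-cluster scale (Tian et al. 2020)

def rar_galactic(g_bar):
    """Galactic RAR: g_obs = g_bar / (1 - exp(-sqrt(g_bar / g_dagger)))."""
    g_bar = np.asarray(g_bar, dtype=float)
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

def rar_cluster(g_bar):
    """CLASH-type RAR on BCG-cluster scales: g_obs = sqrt(g_ddagger * g_bar)."""
    g_bar = np.asarray(g_bar, dtype=float)
    return np.sqrt(G_DDAGGER * g_bar)

if __name__ == "__main__":
    g_bar = np.logspace(-12, -8, 5)   # m s^-2, illustrative grid
    for gb, g1, g2 in zip(g_bar, rar_galactic(g_bar), rar_cluster(g_bar)):
        print(f"g_bar = {gb:.2e}  galactic g_obs = {g1:.2e}  cluster g_obs = {g2:.2e}")
```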
Data and Methods
Our primary objective is to investigate the correlation between the dynamical acceleration and the baryonic acceleration in BCGs, as well as their mass correlation.While the RAR in 20 CLASH samples has been examined in gravitational lensing (Tian et al. 2020), the RAR of BCGs in dynamics has not been systematically explored.Furthermore, we assess the consistency of the dynamical and lensing RAR and compare the results between BCGs and clusters.
Investigating the dynamical RAR requires spatially resolved velocity dispersion profiles and the estimation of baryonic mass for individual galaxies.In our BCG samples, we noted that most velocity dispersion profiles within our samples follow linear trends.This observation facilitates a simplified approach to analyzing galactic dynamics through the Jeans equation.Our primary objective is to derive the observed gravitational acceleration g obs with enhanced accuracy.To this end, we employed Abel's inversion (Binney & Mamon 1982;Binney & Tremaine 2008), an analytic formulation for linear velocity dispersion profiles.This technique, when combined with Bayesian statistics for fitting the velocity dispersion profiles, enables us to efficiently compute g obs using Abel's inversion of the Jeans equation.On the other hand, g bar is calculated using the accumulated baryonic mass at the same radius, employing an empirical Sérsic profile.
Data
Spatially resolved kinematic profiles in galaxies necessitate IFS to measure spectra for hundreds of points within each galaxy.At present, MaNGA provides an unprecedented sample of approximately 10,000 nearby galaxies to investigate the internal kinematic structure and composition of gas and stars (Bundy et al. 2015).MaNGA is a core program in the fourth-generation Sloan Digital Sky Survey (SDSS; Law et al. 2021).Also, MaNGA offers one of the most extensive samples of BCGs for IFS studies, presenting an unparalleled resource for in-depth analysis.
BCGs are distinguished not only by their luminosity and mass but also by their unique morphological and kinematic characteristics, which set them apart from typical galaxies.Typically located at the centers of galaxy clusters, BCGs often exhibit elliptical or cD galaxy morphologies, characterized by extended, diffuse envelopes indicative of their evolutionary history marked by mergers and accretion within dense cluster environments.Kinematically, BCGs generally exhibit significant velocity dispersion with less pronounced rotational features.
In this study, we investigate 50 MaNGA BCGs with complete IFS and photometry data for dynamical analysis, building upon the previous systematic exploration of their kinematics.The velocity dispersion of MaNGA BCGs has been systematically explored using IFS data (Tian et al. 2021a), so we further investigated these BCG samples for dynamic studies.Most of these BCGs were initially obtained from Yang's catalog (Yang et al. 2007), which was developed using a halo-based group finder in the SDSS Data Release 4 (DR4).Hsu et al. (2021) reidentified them visually in color-magnitude diagrams with the corresponding memberships.From their 54 galaxy sample, one BCG '8943-9102' is excluded due to its morphology being identified as a spiral galaxy.Because BCGs are generally classified as elliptical galaxies, identifying BCGs with spiral galaxy morphology may be considered a misclassification or misidentification.Additionally, three BCGs, '8613-12705', '9042-3702', and '9044-12703', are excluded due to uncertain stellar mass estimated from model photometry in SDSS.Ultimately, we utilized 50 BCGs with complete IFS and photometry data, listed with their plateifu IDs in Table 1.
To examine the baryonic mass contribution of BCGs, we carefully considered the total stellar and gas mass.In our study, the stellar mass within the MaNGA survey is estimated by utilizing photometric data from the SDSS, where galaxy luminosities are converted to stellar masses through a constant mass-tolight ratio (Law et al. 2021).Concurrently, the gas mass is ascertained primarily from the analysis of ionized gas emission lines, which are observed using MaNGA's integral field spectroscopy (Bundy et al. 2015).The gas fraction in our samples accounts for less than 3.2% of the baryonic mass.However, MaNGA primarily studies ionized gas in galaxies using optical spectroscopy through emission lines like Hα, [NII], [OII], [OIII], and [SII], but it's not designed for observing neutral atomic (HI) or molecular gas (H2) (Bundy et al. 2015).Its focus on optical wavelengths might miss regions with high dust or dominated by neutral or molecular gas, possibly underestimating total galaxy gas content.Because the RAR should be estimated at the same radius for both g obs and g bar , we compute the velocity dispersion measured up to approximately the effective radius and the accumulated baryonic mass based on the measured Sérsic profile at the same radius.Furthermore, we explored an alternative scenario regarding the possible gradient in the mass-to-light ratio in Appendix A. This analysis yields a stellar mass variation in the range of approximately 10% to 20%.
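As a rough illustration of how the accumulated baryonic mass and g_bar can be evaluated at a given radius, the Python sketch below integrates a Sérsic profile analytically via the regularized incomplete gamma function. The specific Sérsic parameters, the total baryonic mass, and the use of the projected (rather than deprojected) aperture mass are illustrative assumptions, not the exact procedure of this work.

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

G = 6.674e-11    # m^3 kg^-1 s^-2
M_SUN = 1.989e30
KPC = 3.086e19   # m

def sersic_bn(n):
    """Ciotti & Bertin (1999) approximation for the Sersic b_n coefficient."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)

def enclosed_fraction(R, Re, n):
    """Fraction of the total (projected) Sersic light inside projected radius R."""
    x = sersic_bn(n) * (R / Re) ** (1.0 / n)
    return gammainc(2.0 * n, x)

def g_bar(R_kpc, M_bar_total_msun, Re_kpc, n):
    """g_bar = G * M_bar(<R) / R^2, assuming mass follows the Sersic light profile."""
    M_enc = M_bar_total_msun * M_SUN * enclosed_fraction(R_kpc, Re_kpc, n)
    R = R_kpc * KPC
    return G * M_enc / R**2

if __name__ == "__main__":
    # Illustrative numbers only: a 3e11 M_sun BCG with Re = 15 kpc and n = 4.
    for R in (5.0, 15.0, 30.0):
        print(f"R = {R:5.1f} kpc  ->  g_bar ~ {g_bar(R, 3e11, 15.0, 4.0):.2e} m s^-2")
```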
Velocity Dispersion Profile with Bayesian statistics
We adopted the line-of-sight (los) one-dimensional velocity dispersion profiles of 50 MaNGA BCGs (Tian et al. 2021a) and fitted each profile with a linear relation using Bayesian statistics. Tian et al. (2021a) extracted these profiles using Python-based tools (Cherinka et al. 2019). Surprisingly, the velocity dispersion profiles remain remarkably flat even in the innermost region for most cases (Tian et al. 2021a). Instead of assuming a flat profile, we improved the fitting with a linear relation and estimated the error for each data point by implementing a maximum likelihood estimation with the orthogonal-distance-regression (ODR) method suggested in Lelli et al. (2019) and Tian et al. (2021a,b). The linear relation is fitted with a log-likelihood function in which the index i runs over all data points and σ_i combines the observational uncertainties (σ_{x_i}, σ_{y_i}) with the lognormal intrinsic scatter σ_int. We model the one-dimensional velocity dispersion with the two variables y ≡ σ/⟨σ⟩ and x ≡ R/R_e, normalized to the mean velocity dispersion ⟨σ⟩ and the effective radius R_e. In our samples, only the uncertainty of the velocity dispersion is available for the fitting. The slopes and intercepts from the ODR MLE are presented in Table 1 and illustrated for three examples in Fig. 1.
Dynamical Mass Inferred by Abel Inversion
In pressure-supported systems such as BCGs, the dynamics in equilibrium are governed by the Jeans equation in spherical coordinates (Binney & Mamon 1982; Binney & Tremaine 2008), where β = 1 − (σ_t²/σ_r²) represents the anisotropy parameter. For simplicity, we consider the isotropic case with β = 0 and express the projected velocity dispersion σ_I(R) in the form of an Abel integral equation with its inverse (Binney & Mamon 1982; Mamon & Łokas 2005; Binney & Tremaine 2008), where I(R) represents the surface density and Υ denotes the mass-to-light ratio, which depends on R in general. Then, we can compute g_obs through Abel inversion, combining Eq. (4) and Eq. (5) (Binney & Mamon 1982; Mamon & Łokas 2005; Binney & Tremaine 2008). In this study, we model σ_I(R) with a linear relation because the velocity dispersion profiles of our MaNGA BCG samples are mostly flat, see Fig. 1. Consequently, the total mass in Newtonian dynamics is defined as M_tot(< r) = r² g_obs(r)/G. Additionally, we estimate the deviation for anisotropic models in Appendix B, which amounts to at most 6% (or a scatter of 0.02 dex) in g_obs.
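For readers who want to reproduce the isotropic (β = 0) pipeline numerically, the sketch below Abel-inverts illustrative projected profiles I(R) and I(R)σ_I²(R) into a three-dimensional luminosity density and radial pressure, and obtains g_obs from the spherical Jeans equation as g_obs(r) = −[d(jσ_r²)/dr]/j(r) (a constant mass-to-light ratio cancels). The Sérsic surface-brightness profile, the linear σ_I(R), the units, and all numerical choices are illustrative assumptions rather than the fits used in the paper.

```python
import numpy as np
from scipy.integrate import quad

def abel_invert(Fprime, r):
    """Deproject: f(r) = -(1/pi) * int_r^inf F'(R) dR / sqrt(R^2 - r^2),
    using the substitution R = r*cosh(u) to remove the endpoint singularity."""
    integrand = lambda u: Fprime(r * np.cosh(u))
    val, _ = quad(integrand, 0.0, 20.0, limit=200)
    return -val / np.pi

def numderiv(f, x, h=1e-4):
    """Central finite difference with a relative step (x > 0 assumed)."""
    return (f(x * (1 + h)) - f(x * (1 - h))) / (2 * x * h)

# --- illustrative projected profiles (radii in units of Re) ---
def I_proj(R, n=4.0, bn=7.669):
    return np.exp(-bn * (R ** (1.0 / n) - 1.0))   # Sersic surface brightness

def sigma_los(R, s0=300.0, slope=-20.0):
    return s0 + slope * R                          # linear sigma_I(R), km/s

def P_proj(R):
    return I_proj(R) * sigma_los(R) ** 2           # projected "pressure" I*sigma_I^2

def g_obs(r):
    """g_obs(r) = -d(j*sigma_r^2)/dr / j(r); units of (km/s)^2 per Re here."""
    j = abel_invert(lambda R: numderiv(I_proj, R), r)
    p = lambda rr: abel_invert(lambda R: numderiv(P_proj, R), rr)
    return -numderiv(p, r) / j

if __name__ == "__main__":
    print(f"g_obs(Re) ~ {g_obs(1.0):.1f} (km/s)^2 / Re  (illustrative units)")
```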
Results
Our primary objective is to investigate the dynamical mass and the RAR in MaNGA BCGs and compare the results with the lensing RAR observed in CLASH BCGs and clusters.The observational acceleration is directly computed at the last data point using Abel's inversion applied to the velocity dispersion profiles.
On the other hand, the baryonic acceleration is estimated from the accumulated stellar mass, which is modeled with a Sérsic profile.Both accelerations are measured independently and presented on different axes.Moreover, we employ Bayesian statistics to assess the tightness of the correlation and examine their residuals with four galactic and cluster properties.Additionally, we illustrate the relationship between dynamical and baryonic mass for comparative purposes.
Radial Acceleration Relation & Mass Correlation
We implemented an ODR Markov Chain Monte Carlo (MCMC) analysis, to explore the linear correlation evident in the RAR.
Utilizing the python package (emcee; Foreman-Mackey 2016; Foreman-Mackey et al. 2019), we conducted an ODR MCMC analysis to estimate the slope m, intercept b, and intrinsic scatter σ int with two variables of y ≡ log(g obs /ms −2 ) and x ≡ log(g bar /ms −2 ).We employed non-informative flat priors for the slope and intercept within the range of [−100, 100], and for the intrinsic scatter with ln(σ int ) ∈ [−10, 10].The findings from our ODR MCMC analysis are illustrated in Fig. 2 for various samples.
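A minimal version of such a fit is sketched below with emcee, assuming one common convention for the orthogonal-distance likelihood (orthogonal residuals with an effective variance combining the projected measurement errors and an orthogonal intrinsic scatter). The exact likelihood used in the paper may differ in detail, and the mock data are purely illustrative.

```python
import numpy as np
import emcee

def log_prob(theta, x, y, sx, sy):
    m, b, ln_sint = theta
    if not (-100 < m < 100 and -100 < b < 100 and -10 < ln_sint < 10):
        return -np.inf                                   # flat priors as in the text
    sint = np.exp(ln_sint)
    delta = (y - m * x - b) / np.sqrt(1.0 + m**2)        # orthogonal residuals
    var = (sy**2 + m**2 * sx**2) / (1.0 + m**2) + sint**2
    return -0.5 * np.sum(delta**2 / var + np.log(2.0 * np.pi * var))

rng = np.random.default_rng(42)
x = rng.uniform(-11.0, -9.0, 60)                         # mock log(g_bar)
sx, sy = 0.05 * np.ones_like(x), 0.05 * np.ones_like(x)
y = 0.52 * x - 4.18 + rng.normal(0.0, 0.05, x.size)      # mock log(g_obs)

ndim, nwalkers = 3, 32
p0 = np.array([0.5, -4.0, np.log(0.05)]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(x, y, sx, sy))
sampler.run_mcmc(p0, 3000, progress=False)
m_med, b_med, _ = np.median(sampler.get_chain(discard=1000, flat=True), axis=0)
print(f"slope ~ {m_med:.2f}, intercept ~ {b_med:.2f}")
```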
Our sample, consisting of 50 MaNGA BCGs, showed a linear correlation of y = 0.58^{+0.05}_{-0.04} x − 3.62^{+0.43}_{-0.42}, which dominates the higher acceleration region. When we combined the MaNGA BCGs with the lensing result of 20 CLASH BCGs, the linear correlation presents a shallower slope of y = (0.54 ± 0.03)x − (4.01 ± 0.29), which extends to a lower acceleration region. Finally, we performed a complete fit including all 50 MaNGA BCGs, 20 CLASH BCGs, and 64 cluster data points, which resulted in the following equation:
$$\log(g_{\mathrm{obs}}/\mathrm{m\,s^{-2}}) = 0.52^{+0.01}_{-0.01}\,\log(g_{\mathrm{bar}}/\mathrm{m\,s^{-2}}) - 4.18^{+0.15}_{-0.15}, \qquad (7)$$
with a tiny lognormal intrinsic scatter of (4.9 ± 0.9)%, corresponding to 0.02 dex. The related triangle diagrams of the regression parameters are presented in Appendix C.
In further analysis, we employed the same vertical MCMC method used in the CLASH RAR study (Tian et al. 2020).This approach yielded a shallow slope represented by the relation y = (0.50 ± 0.01)x − (4.30 ± 0.15), along with a similar uncertainty in the lognormal intrinsic scatter of (5.6 ± 1.0)%, corresponding to 0.02 dex.While the fitting result may vary slightly depending on the methods employed, the differences remain minor and are consistent with their respective uncertainties.
We subsequently analyzed the mass correlation in a logarithmic diagram between the accumulated total mass M tot and baryonic mass M bar , as shown in the right panel of Fig. 2. Using the ODR MCMC method for the entire sample to establish a linear correlation, the result yielded log(M tot /M ⊙ ) = (1.20 ± 0.02) log(M bar /M ⊙ ) − (1.74 ± 0.24) with a larger uncertainty in the lognormal intrinsic scatter of (19 ± 2)%.Unlike the consistency observed in acceleration correlation, CLASH BCGs and MaNGA BCGs dominate the same mass range but exhibit a larger scatter in mass correlation when combined.
Residuals
To investigate correlations of the residuals, we compute the orthogonal distance between Eq. ( 7) and each individual data against four global quantities of BCGs and clusters: the observational acceleration, radius, baryonic mass surface density, and redshift, see Fig. 3.The residuals of all samples are distributed within a tiny range from −0.22 to 0.23 dex.No significant correlations were observed in the residuals diagram, except for a slight correlation of cluster data concerning the outer radius.
Mimicking Baryonic Tully-Fisher Relation
Although the approach may seem conceptual, it is enlightening to analyze the BTFR as the kinematic analog of the tight dynamical scaling relation in this context. Defining the circular velocity as V ≡ √(g_obs r), we can compute V from the total acceleration of both the BCG and cluster samples. The BTFR can then be investigated with both V and the baryonic mass M_bar. One of the main benefits of examining the BTFR is the expansion of the dynamic range of the distinct RAR in terms of the baryonic mass. This range spans approximately 3.5 orders of magnitude in our samples, as illustrated in Fig. 4. Because our samples demonstrate a linear correlation, we adopted the ODR MCMC again for the analysis. The resulting BTFR presents a tight relation, with a lognormal intrinsic scatter of (11 ± 2)% (equivalent to 0.05 dex) and a slope of 3.97^{+0.07}_{-0.07} in log(M_bar/M_⊙) versus log(V/km s^{-1}). The slope of this relation closely aligns with that observed in spiral galaxies (Lelli et al. 2019), being nearly parallel with a value close to four. However, a deviation is noted in the intercept, suggesting a larger acceleration scale. By assuming a fixed slope of four for g_‡, the relation simplifies to log(M_bar/M_⊙) = 4 log(V/km s^{-1}) + (0.57 ± 0.02), maintaining the intrinsic scatter of 0.05 dex. Because Equation (2) implies a parallel BTFR, represented as V^4 = G M_bar g_‡, we are able to determine g_‡ = (2.0 ± 0.1) × 10^{-9} m s^{-2}. This value is consistent with the g_‡ obtained from the distinct RAR.
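As a quick consistency check, the acceleration scale implied by the fixed-slope BTFR can be recovered from the quoted intercept of 0.57 alone; the test velocity in the sketch below is arbitrary and cancels out, so only the intercept and physical constants matter.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg

# Fixed-slope BTFR quoted in the text: log(M_bar/M_sun) = 4*log(V/km s^-1) + 0.57
intercept = 0.57

# V^4 = G * M_bar * g_ddagger  =>  g_ddagger = V^4 / (G * M_bar), independent of V
V = 300.0e3                                    # any velocity in m/s; it cancels out
M_bar = 10.0**intercept * (V / 1.0e3) ** 4 * M_SUN
g_ddagger = V**4 / (G * M_bar)
print(f"g_ddagger ~ {g_ddagger:.1e} m s^-2")   # ~ 2.0e-9, as quoted in the text
```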
Discussions and Summary
In this study, we report four key discoveries that significantly enhance our understanding of the RAR and its implications for BCGs and clusters.Our main findings are as follows: (1) We identified a linear correlation between MaNGA BCGs, CLASH BCGs, and clusters in RAR; (2) The acceleration scale g ‡ was found to be valid across a range spanning over two orders of magnitude in baryonic acceleration, suggesting the RAR to hold on even larger scales than previously considered; (3) The distribution of residuals within a narrow range highlights a tight correlation between the dynamical and baryonic components in BCGs and clusters; and (4) There is no significant accumulated mass correlation on BCG-cluster scales.
The distinct RAR on BCG-cluster scales provides a fresh perspective on the residual missing mass of MOND in galaxy clusters (Sanders 1999, 2003; Angus et al. 2008; Milgrom 2008; Angus et al. 2010; Famaey & McGaugh 2012), recasting it as an acceleration-dependent residual mass issue. Although MOND successfully explained the missing mass problem in galactic systems (Banik & Zhao 2022), it has been reported that additional mass, such as missing baryons (Sanders 1999, 2003; Milgrom 2008) or sterile neutrinos (Angus et al. 2010; Famaey & McGaugh 2012), is required in galaxy clusters, a phenomenon known as the residual missing mass. When examining the RAR within the context of the MOND framework, we can compute the residual missing mass by comparing the two RARs, as described below. Here, g_bar represents the baryonic acceleration estimated from the baryonic mass in our samples, while g_M denotes the MONDian baryonic acceleration necessary to reproduce the distinct RAR. One can define a missing mass ratio Q = g_M/g_bar = M_M/M_bar in MOND when considering the same observational acceleration. To explore the possibility of compensating for the distinct RAR with missing mass, we connect the RAR as fitted in McGaugh et al. (2016) to the same observed acceleration g_obs by
$$g_{\mathrm{obs}} = \frac{g_{\mathrm{M}}}{1 - \exp\!\left(-\sqrt{g_{\mathrm{M}}/g_{\dagger}}\right)}. \qquad (9)$$
Using Eq. (9), we can compute the factor Q for a given g_bar. For example, with a median logarithm of baryonic acceleration in the 50 BCGs at log(g_bar) = −9.5, we find Q = 2.2. However, the value of Q exhibits significant variability with g_bar: for the highest acceleration in our MaNGA BCG samples, log(g_bar) = −8.9, Q decreases to 1.2, while for the lowest acceleration, log(g_bar) = −10.4, Q increases to 4.9. This variability indicates that a systematic constant offset in the mass-to-light ratio is insufficient to address the discrepancy. Consequently, it appears that the residual missing mass is correlated with the baryonic acceleration g_bar.
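The sketch below reproduces this calculation numerically, assuming that g_obs is given by the global fit of Eq. (7) and that g_M is obtained by root-finding on Eq. (9). With these assumptions one recovers Q ≈ 2.2 at log(g_bar) = −9.5, and values at the extremes (≈1.2 and ≈4.6 here) close to those quoted above; small differences presumably reflect exactly which fitted relation is used for g_obs.

```python
import numpy as np
from scipy.optimize import brentq

G_DAGGER = 1.20e-10                       # m s^-2

def rar_galactic(g):
    """McGaugh et al. (2016) RAR evaluated at a baryonic-type acceleration g."""
    return g / (1.0 - np.exp(-np.sqrt(g / G_DAGGER)))

def g_obs_fit(g_bar):
    """Observed acceleration from the global fit of Eq. (7): slope 0.52, intercept -4.18."""
    return 10.0 ** (0.52 * np.log10(g_bar) - 4.18)

def Q_factor(g_bar):
    g_obs = g_obs_fit(g_bar)
    # invert the galactic RAR: find g_M with rar_galactic(g_M) = g_obs
    g_M = brentq(lambda g: rar_galactic(g) - g_obs, 1e-14, 1e-6)
    return g_M / g_bar

for log_gbar in (-8.9, -9.5, -10.4):
    print(f"log g_bar = {log_gbar:5.1f}  ->  Q ~ {Q_factor(10**log_gbar):.1f}")
```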
In this study, we also evaluated the mass-to-light ratio Υ to validate the choice of using total stellar mass calculated from SDSS model photometry.For 50 MaNGA BCGs, we calculated Υ across the five SDSS bands: u, g, r, i, and z.The average values for these bands are (Υ u , Υ g , Υ r , Υ i , Υ z ) = (9.6,5.5, 3.1, 2.3, 1.9).
Our results indicate that a potential underestimation of the stellar mass is unlikely to be a serious concern. The residual missing mass would therefore take forms other than the stellar mass, such as an underestimated gas mass or sterile neutrinos.
Besides the residual mass on BCG-cluster scales, other possibilities can be further investigated within the framework of relativistic MOND theories. Interestingly, given the consistency between our MaNGA BCGs and the CLASH samples measured by gravitational lensing, relativistic MOND offers a variety of results that extend beyond the standard MOND formulation. One particularly intriguing interpretation involves the acceleration scale being contingent on the depth of the potential well (eMOND; Zhao & Famaey 2012; Hodson & Zhao 2017). In BCGs and galaxy clusters, eMOND implies a stronger observational acceleration that deviates from the RAR suggested by the initial MOND. Moreover, recent advancements in relativistic MOND theories (Skordis & Złośnik 2021; Verwayen et al. 2023) also suggest the feasibility of such an enhancement in galaxy clusters.
Investigating the mass consistency in MaNGA BCGs reveals implications for the dark matter distribution in galaxy clusters.Some MaNGA BCGs exhibit a consistent mass between dynamical and baryonic mass, suggesting insufficient dark matter within one effective radius.This finding poses a challenge to the merger model in the CDM paradigm, where one would expect a significant amount of dark matter at the center of galaxy clusters due to dynamical friction.To fully comprehend this issue, further analysis of these specific BCG samples is necessary, especially compared with computer simulations such as TNG (Springel et al. 2018;Nelson et al. 2019b,a), EAGLE (Schaye et al. 2015;Crain et al. 2015), andBAHAMAS (McCarthy et al. 2017), etc. Investigating the RAR in the context of BCGs and galaxy clusters is key to understanding the dark matter problem, particularly in relation to the residual missing mass.MOND, which accurately predicts the RAR's slope of 0.5, indicates a correlation between the residual missing mass and baryonic acceleration, warranting further exploration.Additionally, the RAR's application to BCG-cluster scales is crucial for assessing different dark matter theories, where it notably challenges the self-interacting dark matter model, especially in light of the BAHAMAS simulation results (Tam et al. 2023).Although our current dynamical RAR studies in BCGs would benefit from improved stellar mass estimation methods that are model-independent and enhanced dynamical mass evaluations using numerical models (Li et al. 2023), these initial findings broaden the scope of baryonic ac-
Fig. 1
Fig.1Three examples of MaNGA BCGs with the plateifu of'8554-6102', '8331-12701', and '9888-12703'.Upper panel: The two-dimensional map plot of Spaxel data for the stellar velocity dispersion.Lower panel: velocity dispersion profiles in terms of effective radius.The red circles represent the los velocity dispersion of concentric circles at different radii with the corresponding error bar.The solid blue lines represent the linear fit with the ODR MLE method.
Fig. 2 The RAR and mass correlation of both BCGs and clusters are presented.Left panel: Black and green diamonds represent 50 MaNGA and 20 CLASH BCGs, while blue circles indicate 64 CLASH galaxy cluster data.The black solid line illustrates the resulting RAR of all samples: log(g obs /ms −2 ) = 0.52 +0.01 −0.01 log(g bar /ms −2 ) − 4.18 +0.15 −0.15 .The orange shaded area illustrates the 1σ error of the best fit with the ODR MCMC method.The inset panel demonstrates the histograms of the orthogonal residuals of a whole sample (blue).For comparison, the galactic RAR is depicted by the dot-dashed line, while the dotted line represents the line of unity.Right panel: The black solid line represents the mass correlation of all samples: log(M tot /M ⊙ ) = 1.20 +0.02 −0.02 log(M bar /M ⊙ ) − 1.74 +0.24 −0.24 .All symbols are the same as those in the left panel.
Fig. 3 Fig. 4
Fig. 3 The orthogonal residuals against four global quantities of galaxies and galaxy clusters: the observational acceleration g obs (upper-left), radius R (upper-right), baryonic mass surface density Σ Bar (lower-left), and redshift (lower-right).The plot displays the residuals for 50 MaNGA BCGs (black diamonds), 20 CLASH BCGs (green diamonds), and 64 data points from CLASH clusters (blue circles), with the dashed line representing zero difference.
47 ± 0.08 -9.05 ± 0.11 Note.-(a) Redshift from MaNGA Pipe3D; (b) The last data point in terms of effective radius; (c) The slope m and intercept b fitted in the normalized velocity dispersion profile; (d) The baryonic mass including total stellar mass estimated by model photometry in SDSS DR15 and the measured gas mass in MaNGA marked with * on the plateifu ID; (e) The average los velocity dispersion from MaNGA IFS.
Table 1 .
Properties and Results of 50 MaNGA BCGs | 2024-02-18T16:02:07.999Z | 2024-02-19T00:00:00.000 | {
"year": 2024,
"sha1": "d962aad058e2bf95b695b9b837d097dc76d92fb6",
"oa_license": "CCBY",
"oa_url": "https://www.aanda.org/articles/aa/pdf/2024/04/aa47868-23.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "d962aad058e2bf95b695b9b837d097dc76d92fb6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235360983 | pes2o/s2orc | v3-fos-license | IGHV-associated methylation signatures more accurately predict clinical outcomes of chronic lymphocytic leukemia patients than IGHV mutation load
Currently, no molecular biomarker indices are used in standard care to make treatment decisions at diagnosis of chronic lymphocytic leukemia (CLL). We used Infinium MethylationEPIC array data from diagnostic blood samples of 114 CLL patients and developed a procedure to stratify patients based on methylation signatures associated with mutation load of the IGHV gene. This procedure allowed us to predict the time to treatment with a hazard ratio (HR) of 8.34 (95% confidence interval [CI]: 4.54-15.30), as opposed to a HR of 4.35 (95% CI: 2.60-7.28) using IGHV mutation status. Detailed evaluation of 17 cases for which the two classification procedures gave discrepant results showed that these cases were incorrectly classified using IGHV status. Moreover, methylation-based classification stratified patients with different overall survival (HR=1.82; 95% CI: 1.07-3.09), which was not possible using IGHV status. Furthermore, we assessed the performance of the developed classification procedure using published HumanMethylation450 array data for 159 patients for whom information on time to treatment, overall survival and relapse was available. Despite 450K array methylation data not containing all the biomarkers used in our classification procedure, methylation signatures again stratified patients with significantly better accuracy than did IGHV mutation load regarding all available clinical outcomes. Thus, stratification using IGHV-associated methylation signatures may provide better prognostic power than IGHV mutation status.
Patient cohort
The patient cohort in our study was previously described in Kristensen (1,2). The clinicobiological characteristics of the patients included in our study are summarized in Table 1.
1st section: Selection procedure of the CpG sites with qualitative methylation changes
The sensitivity and performance of quantitative methylation assessment in a clinical sample can be significantly affected by inherent contamination of the cancer specimen with DNA from normal tissue. Consequently, the methods used for quantification of methylation levels in clinical material in principle assess methylation against a background of healthy cells, and only changes with large methylation differences are likely to overcome this measurement limitation in in vitro diagnostic settings. To at least partly overcome this limitation, we focused our analyses on CpG sites displaying a bimodal distribution of beta-values, defined as an interquartile range (IQR, the range between the 25th and 75th percentiles) of at least 0.80; see Figure 1B.
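As an illustration, selecting such bimodal CpG sites from a beta-value matrix (rows = CpG sites, columns = patients) amounts to a one-line interquartile-range filter; the sketch below is illustrative, and the toy matrix and names are not from the study.

```python
import numpy as np
import pandas as pd

def select_bimodal_cpgs(beta: pd.DataFrame, min_iqr: float = 0.80) -> pd.DataFrame:
    """Keep CpG sites (rows) whose beta-value IQR across patients is >= min_iqr."""
    q75 = beta.quantile(0.75, axis=1)
    q25 = beta.quantile(0.25, axis=1)
    return beta.loc[(q75 - q25) >= min_iqr]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy matrix: one bimodal site and one intermediate site, 114 patients.
    beta = pd.DataFrame(
        {f"pt{i}": [rng.choice([0.05, 0.95]), rng.uniform(0.4, 0.6)] for i in range(114)},
        index=["cg_bimodal", "cg_intermediate"],
    )
    print(select_bimodal_cpgs(beta).index.tolist())   # -> ['cg_bimodal']
```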
2nd section: Validation of EPIC microarray results with MS-HRM
To validate the methylation levels detected using the EPIC array, we used Methylation-Sensitive High-Resolution Melting (MS-HRM) (3).
The MS-HRM technique is based on qPCR amplification of bisulfite-treated DNA followed by high-resolution melting of the resulting amplicon. Differences in methylation level can be identified as differences in the melting profiles of the amplicons. Due to limited amounts of clinical material, we optimized assays targeting 4 of the final CpG sites included in the methylation-based classification procedure. We validated the methylation levels in 18 patients from our cohort (9 patients with a high beta-value and 9 patients with a low beta-value for each of the CpG sites, using a beta-value cutoff of 0.65). In that procedure, 500 ng of patient DNA was subjected to sodium bisulfite modification.

To assess whether the combined information from all 9 CpGs selected in our analysis (see Results section: "Development of a methylation-based classification procedure") more accurately stratified patients into two groups with different TTT than information from the single CpG sites, we first needed to assess the prognostic power of methylation changes at each of the CpG sites and then develop a classification procedure that allowed us to combine information from all 9 CpG sites.
To determine the prognostic power of methylation changes at single CpG sites, we first determined a beta-value cutoff that most accurately selected patients with the shortest TTT (aggressive disease) for each of the CpG sites. To find the optimal beta-value cutoff, we plotted beta-values against TTT for each CpG site. This analysis also showed that, across the CpG sites, the direction of the association between TTT and the beta-value was not always the same and was thus not uniformly informative for the outcome (e.g., the beta-value at cg07250315 (a) was hypomethylated in most of the patients experiencing the outcome, but was hypermethylated in a small subset of the patients). Then, to measure the accuracy of the classification at each beta-value cutoff, we stratified the patients using all the beta-value cutoffs listed above, calculated HRs for "need of treatment" using Cox regression for all stratifications, and compared the HRs between the different cutoffs (see Supplementary Fig. S2 for an overview, and detailed analyses in the rightmost column of Supplementary Fig. S3a-i). Thus, this analysis did not identify an optimal beta-value cutoff for the individual CpG sites; moreover, it showed that methylation changes at all CpGs identified patients with short TTT with similar accuracy.
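For the hazard-ratio comparisons described above, a Cox proportional hazards model can be fitted for each candidate stratification; the sketch below uses the lifelines package on toy data, so the package choice, variable names, and numbers are illustrative assumptions rather than the study's actual code or results.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 114
group = rng.integers(0, 2, n)                                  # 1 = beta-value past the cutoff
ttt = rng.exponential(scale=np.where(group == 1, 12.0, 60.0))  # toy time to treatment (months)

df = pd.DataFrame({"short_ttt_group": group, "ttt": ttt, "event": 1})
cph = CoxPHFitter()
cph.fit(df, duration_col="ttt", event_col="event")
print(cph.hazard_ratios_)   # HR for 'short_ttt_group'; ~5 for this toy setup
```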
[Residue of a supplementary table: the column headers "Beta-value interval" and "Median TTT", repeated for each of the CpG sites analyzed.]
Stratification groups using beta-value cutoff
As the CpG sites in our study were selected to independently predict TTT, we next explored whether the combination of information from all 9 CpGs allowed us to more accurately identify patients with short TTT.
To combine information from all CpG sites, we counted the number of CpGs for each patient with a beta-value predicting short TTT. Then, the patients were stratified into poor versus favorable prognosis, according to the number of CpGs predicting short TTT. We used Cox proportional hazard regression to test whether there was a specific number of CpGs that most accurately selected patients with poor prognosis.
As we were not able to identify an optimal beta-value cutoff for the individual CpGs in the analysis above (Supplementary Fig. S3a-i), we performed the regression analyses for all combinations of beta-value cutoffs (indicated at the top of the forest plots in Supplementary Fig. S4). Interestingly, for the patient stratifications with the most accurate HR (patients stratified between 1 and 2 CpGs predicting short TTT; red box in Supplementary Fig. S4), HRs were identical for all beta-value cutoffs between 0.15 and 0.35 for cg07250315, and between 0.65 and 0.85 for the remaining CpGs (highlighted in red in Supplementary Fig. S4 and Supplementary Fig. S5b-f). Therefore, to simplify the patient classification procedure, we chose the widest range of beta-values for the cutoff. Specifically, a CpG was scored as predicting short TTT (and assigned a value of 1) if the beta-value was ≤0.35 for cg07250315 and if the beta-value was ≥0.65 for cg00185137, cg00395579, cg02198280, cg03282117, cg07395110, cg12032915, cg21394039, and cg21740960. The final patient classification procedure is described in Fig. 2, and beta-values are provided in Supplementary File 1.
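Reading the procedure as stated, a patient scores 1 for each CpG whose beta-value falls on the short-TTT side of its cutoff (≤0.35 for cg07250315, ≥0.65 for the other eight sites) and is assigned to the poor-prognosis group when at least two CpGs score 1 (the split between 1 and 2 noted above). The sketch below implements that reading; the function and variable names are illustrative and not taken from the study's code.

```python
import pandas as pd

HYPO_CPGS = ["cg07250315"]                                   # short TTT when beta <= 0.35
HYPER_CPGS = ["cg00185137", "cg00395579", "cg02198280", "cg03282117",
              "cg07395110", "cg12032915", "cg21394039", "cg21740960"]  # short TTT when beta >= 0.65

def classify_patient(beta: pd.Series) -> str:
    """Return 'poor' if at least 2 of the 9 CpGs predict short time to treatment."""
    score = sum(beta[cpg] <= 0.35 for cpg in HYPO_CPGS)
    score += sum(beta[cpg] >= 0.65 for cpg in HYPER_CPGS)
    return "poor" if score >= 2 else "favorable"

if __name__ == "__main__":
    example = pd.Series({**{cpg: 0.10 for cpg in HYPO_CPGS},
                         **{cpg: 0.90 for cpg in HYPER_CPGS}})
    print(classify_patient(example))                         # -> 'poor'
```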
Supplementary Tables
Supplementary Table S1. Association analyses between methylation-based classification and IGHV status with clinicobiological biomarkers.
OR indicates the odds ratio for having a poor prognosis or being a U-CLL patient, given the biomarker status in the specific row.
* OR by Woolf. | 2021-06-08T06:16:40.078Z | 2021-06-03T00:00:00.000 | {
"year": 2021,
"sha1": "72ec2125073618b1f8f36606cca3fc7355b7a8c5",
"oa_license": "CCBYNC",
"oa_url": "https://haematologica.org/article/download/haematol.2021.278477/73433",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cb2382914377186f6133236cea916cdc4931bcb4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21508620 | pes2o/s2orc | v3-fos-license | Immobilization of a bubble in water by nanoelectrolysis
A surprising phenomenon is presented: a bubble, produced from water electrolysis, is immobilized in the liquid (as if the Archimedes' buoyant force were annihilated). This is achieved using a nanoelectrode (1 nm to 1 $\mu$m of curvature radius at the apex) and an alternating electric potential with adapted values of amplitude and frequency. A simple model based on"nanoelectrolysis"(i.e., nanolocalization of the production of H2 and O2 molecules at the apex of the nanoelectrode) and an"open bubble"(i.e., exchanging H2 and O2 molecules with the solution) explains most of the observations.
A surprising phenomenon is presented: a bubble, produced from water electrolysis, is immobilized in the liquid (as if the Archimedes' buoyant force were annihilated). This is achieved using a nanoelectrode (1 nm to 1 µm of curvature radius at the apex) and an alternating electric potential with adapted values of amplitude and frequency. A simple model based on "nanoelectrolysis" (i.e., nanolocalization of the production of H 2 and O 2 molecules at the apex of the nanoelectrode) and an "open bubble" (i.e., exchanging H 2 and O 2 molecules with the solution) explains most of the observations.
Microbubbles have many applications: medicine (contrast agents, 1 gas embolotherapy, 2 blood clot lysis 3 ) or nuclear industry. 4 Questions about their stability remain topical. 5 H 2 /O 2 microbubbles are easily generated by water electrolysis and their production is controlled by the electric potential. Recently, H 2 microbubbles have been used to rotate a microobject. 6 Droplets may also be controlled by electrolysis. 7 Many applications will result from a more effective control on the microbubbles and, especially, on each individual microbubble. Our preceding paper 8 showed a first example of such control: the nanolocalization of the production of microbubbles at a unique point of a tip-shaped electrode, under precise values of the amplitude and the frequency of the alternating potential. This phenomenon will be called nanoelectrolysis (for bubble production). The relation between microbubble production and current intensity will be given in a forthcoming paper. The present paper describes a surprising phenomenon, which represents an example of control of a single microbubble: the immobilization of a bubble in the liquid (as if the Archimedes' buoyant force were annihilated).
The experimental procedure is described in Ref. 8. Water electrolysis is performed using an aqueous solution containing 10 −4 mol/L of H 2 SO 4 and a periodic alternating applied potential, here of rectangular shape: 9 V (t) = V m for 0 < t < T /2, V (t) = −V m for T /2 < t < T (T is the period and ν = 1/T the frequency). The amplitudes V m typically range from 2 V to 30 V. Two Pt electrodes are used and one of the two electrodes-called nanoelectrode-is tip-shaped, with a curvature radius, at the apex of the electrode, ranging from 1 nm to 1 µm. In the previous paper, 8 we showed that, for definite values of the amplitude and the frequency of the potential, the microbubbles are produced at a single point, the apex of the nanoelectrode. We start in such conditions, so that microbubbles are successively generated at the apex of the electrode and then naturally go up towards the airsolution surface, owing to the Archimedes' buoyant force. Let us now describe the observed phenomenon. Immediately after the production of a microbubble, if we rapidly a) Electronic mail: olives@cinam.univ-mrs.fr increase the frequency (generally, up to at least 500 or 1000 Hz; and then, maintaining constant the frequency; V m remaining always constant), then the microbubble remains immobile in the solution, 10 at some distance from the electrode and nearly at the vertical of the apex of the electrode (Fig. 1). In addition, this configuration is stable, since the bubble remains at the vertical of the apex of the electrode and at the same distance from the electrode, when this electrode is moved in any direction with respect to the solution (see the video 11 ). This immobilization is observed during various minutes or hours (at constant amplitude V m and frequency). The diameters D ′ of the immobilized bubbles typically range from some µm to 400 µm. This diameter may remain constant, in some cases, but very frequently decreases with time, as shown in Fig. 2, till the (optical) disappearance of the microbubble. As the diameter D ′ decreases with time ( Fig. 3), we observe that the distance r C between the center of the bubble and the apex of the electrode also decreases (Fig. 4). The evolution of the diameter decreasing rate dD ′ /dt as a function of the diameter is shown in Fig. 5, for three different experiments.
In order to understand such a strange phenomenon, we first note that the bubble cannot be considered as a closed system (i.e., with no exchange of matter with the solution) and its immobilization as a balance between the Archimedes' buoyant force and a new force. Indeed, such a (downward) force should increase with the distance between the bubble and the apex of the electrode (in order to explain the stability of this equilibrium, shown in the video 11 ), which seems unrealistic. Our explanation is then based on a bubble considered as an open system, exchanging matter with the solution through its (immobile) surface. In this case, the new forces which might counterbalance the Archimedes' one are the flux of momentum entering the bubble and the hydrodynamic forces. However, these forces are intimately associated with the fields of the pressures, velocities, and densities and the determination of these fields is a very complex coupled problem (involving hydrodynamics, diffusion, and mass fluxes kinetics at the bubble-solution interface). This problem is not treated here and will be investigated in the future by using computer simulations. As an approximation, hydrodynamics equations (concerning the pressures and the velocities) may be separated and, in the following, we show that a simplified model based on the mass balance, diffusion, and interface fluxes equations may explain-at least qualitatively-most of the above observations. In our previous paper, 8 we showed that the bubble production is nanolocalized (at the apex of the nanoelectrode) when the potential v (applied to the dielectric layer, at the electrode-electrolyte interface) reaches a threshold value v 0 . Similarly, we expect that the chemical reactions of electrolysis will be also nanolocalized for a (lower) threshold value v ′ 0 . According to our model, 8 this implies that, in the potential amplitude-frequency plane, the domain for the nanolocalization of the electrolysis reactions is shifted toward higher frequencies, with respect to that for the nanolocalization of bubble production. Thus, after the increase in frequency needed for the immobilization of the bubble, the new present amplitude-frequency values correspond to conditions of no bubble production, 8 but we here assume that they correspond to nanolocalization conditions for the electrolysis reactions: i.e., these reactions still occur and are nanolocalized at the apex of the electrode, although the rate of production of H 2 and O 2 molecules is not enough to generate bubbles. These molecules, produced at the apex of the electrode, thus remain and diffuse in the solution, without generating any bubble. In our model, the immobilized bubble is considered as an open system, which may exchange H 2 and O 2 molecules with the solution through its surface. The whole system (solution and bubble) is considered in a steady state.
The bubble contains H 2 and O 2 molecules, and the general model for a bubble containing these two components is given in the supplementary material. 11 This model shows that, in the steady state (and for not too small diameters), the bubble contains nearly the same number of moles of H 2 and O 2 (although the production of O 2 molecules is lower than that of H 2 , and the solubility of O 2 is higher than that of H 2 ; this is due to the low diffusion coefficient of O 2 ). 11 In the following, for the sake of simplicity, we present the model for a bubble containing only one component, e.g., H 2 . Let us use the subscript i = 1 to denote the H 2 component in the solution and the subscripts i = 2, 3, ... for the other components of the solution (such as O 2 , H 2 O, H + , ...). The diffusion flux of H 2 is given by Fick's law: (ρ i and v i are, respectively, the volume mass density and the velocity of the component i; v is the barycentric or convection velocity defined by ρ v = ρ 1 v 1 + ρ 2 v 2 + ..., ρ being the mass density of the solution; D 1 is the diffusion coefficient = 4.6 × 10 −9 m 2 /s for H 2 in water at 20 • C).
In the solution, the H 2 mass balance equation (ṁ 1 is the mass production per unit time at the apex of the electrode, which is considered as the origin point; δ is the Dirac measure-in space-at this origin point; m 1 , averaged on a period, is considered constant in this simple model) leads to (with the help of Eq. (1), ∂ρ1 ∂t = 0-steady state-and div v = 0-incompressibility).
At each point of the surface S of the bubble, the flux of H 2 entering the bubble is (assuming only exchanges of H 2 between the bubble and the solution; n is the unit vector normal to S, oriented towards the interior of the bubble; v S is the normal velocity of the surface, parallel to n; obviously, v S = 0 for an immobile bubble), which gives, according to Eq. (1) We assume that the exchange of H 2 between the solution and the bubble is controlled by the simple kinetic law where p ′ = p a + 2γ R ′ is the pressure in the bubble (p a the atmospheric pressure, γ the surface tension, R ′ = D ′ /2 the bubble radius), H the Henry's constant, and K the mass transfer coefficient (γ = 7.28 × 10 −2 N/m, the water-air value at 20 • C; H = 1.62 × 10 −3 kg/(m 3 atm) for H 2 in water at 20 • C (Ref. 12)). As a simple model, we use the ρ 1 field produced only by diffusion (i.e., satisfying to Eq. (3) with v = 0), without taking into account the presence of the bubble (i.e., the condition Eq. (5)) where r is the distance to the origin point O (which is the apex of the electrode). If r 0 denotes the distance at which ρ 1 (r 0 ) = H p a (corresponding to the equilibrium of H 2 between the solution and H 2 gas at the atmospheric pressure), Eq. (6) takes the form whereγ = γ pa . A simple calculation then gives the variation of the mass m ′ of the bubble where da is the area measure, a S = 4πR ′2 the area of S, and j S (r C ) the value of j S (from Eq. (8)) at r = r C (and for the radius R ′ ; C is the center of the bubble, r C the distance OC). Eq. (9) expresses that the mass or the diameter of a bubble (the center of which is) situated at r C = r 0 /(1 + 2γ R ′ ) remains constant, whereas that of a bubble situated at r C > r 0 /(1 + 2γ R ′ ) (respectively, at r C < r 0 /(1 + 2γ R ′ )) decreases (respectively, increases). Note that, although Eq. (5) is not strictly satisfied with the approximate expression Eq. (7), it is however "qualitatively" satisfied if r T (i.e., the distance OT) is equal to r 0 /(1 + 2γ R ′ ), T being a point of S such that OT is tangent to S (see Fig. 6). Indeed, at T, we have ∂ n ρ 1 = n · gradρ 1 = 0 (since gradρ 1 is directed from T to O, according to Eq. (7)) and j S = 0 (since r T = r 0 /(1 + 2γ R ′ )), so that Eq. (5) is satisfied at T, i.e., on the horizontal circle Γ of S generated by these points T. Let us denote S − the part of S situated below the circle Γ, and S + that situated above Γ. At any point M of S − , we have ∂ n ρ 1 = n · gradρ 1 < 0 (gradρ 1 being directed from M to O) and j S > 0 (since r M < r 0 /(1 + 2γ R ′ )), so that the two members of Eq. (5) have the same positive sign. In a similar way, the two members of this equation have the same negative sign, if M belongs to S + . With respect to the sign of the two members of Eq. (5), we may then consider that this equation is "qualitatively" FIG. 6. Geometrical configuration. O is the apex of the nanoelectrode, C the center of the bubble, S its surface, T a point of S such that OT is tangent to S, Γ the horizontal circle generated by these points T, S− the part of S situated below Γ, and S+ that situated above Γ. satisfied. Thus, H 2 molecules enter the bubble through S − (j S > 0) and leave the bubble through S + (j S < 0). Note that the main assumption is here the approximate expression Eq. (7) (which does not take into account either the velocity field or the presence of the bubble).
A first consequence of this situation is that r C > r 0 /(1 + 2γ R ′ ) (since r C > r T ) which, as noted above, indicates that the diameter of the bubble decreases with time. This is consistent with the experiments, since bubbles decreasing in diameter with time are the most frequently observed. Another consequence is the following relation between r C and R ′ From the relation m ′ = M1 RT p a (1 + 2γ R ′ ) 4π 3 R ′3 between R ′ and m ′ , and Eq. (9), one easily obtains R being the gas constant, T the temperature, andH = H/M 1 . If R ′ is not too small, this expression (with r C given by Eq. (10)) leads to a relatively constant value of dR ′ /dt as R ′ decreases. This may explain the relatively constant value of dD ′ /dt during a large first part of each experiment (see Figs. 3 and 5). In addition, for large values of R ′ , i.e., also large values of r C (according to Eq. (10)), Eq. (11) leads to a "universal" limit value of dR ′ /dt which seems to be in agreement with the observations. Indeed, for large values of D ′ , we observe that dD ′ /dt has nearly the same value, of about −0.15 µm/s, in very different experiments (see Fig. 5). By applying Eq. (12), this value leads to a mass transfer coefficient K ≈ 4 µm/s (at 20 • C) which may be considered as a qualitatively acceptable value: indeed, in the literature, the value of this coefficient is not well known and ranges from some µm/s to some hundreds of µm/s. 5,13,14 Note that the decreasing behavior of dD ′ /dt for the small diameter values (Fig. 5) is not explained with this model. We currently try to improve the present model and to automate the experimental procedure in order to completely determine the conditions in which this surprising immobilization phenomenon occurs. The presented experimental setup, i.e., the immobilized bubble and the nanoelectrode microdisplacement actuator, constitutes a point probe of acoustical pressure, 15 as well as a promising tool for sizing microbubbles. 16 We thank Serge Mensah and Younes Achaoui for helpful discussions and acknowledge the financial support from "Smart US" ANR agreement. | 2017-02-06T16:27:25.000Z | 2015-11-05T00:00:00.000 | {
"year": 2015,
"sha1": "3f546cd42bcaa1e049c3c3b82babef993ee5b47d",
"oa_license": null,
"oa_url": "https://hal.archives-ouvertes.fr/hal-01222331v4/file/2016_APL.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3f546cd42bcaa1e049c3c3b82babef993ee5b47d",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Chemistry"
]
} |
5870183 | pes2o/s2orc | v3-fos-license | COMPATIBLE MAPPINGS AND COMMON FIXED POINTS " REVISITED "
A fixed point theorem involving a Meir-Keeler type contraction principle is refined by diminishing continuity requirements.
INTRODUCTION.
In [1], the concept of compatible maps was introduced as a generalization of commuting (fg = gf) maps and weakly commuting maps (see [2]). Self maps f and g of a metric space (X,d) are compatible iff lim_n d(fgx_n, gfx_n) = 0 whenever {x_n} is a sequence in X such that fx_n, gx_n → t for some t ∈ X. To demonstrate the utility of this concept, a Meir-Keeler type theorem of Park and Bae [3] was generalized by replacing the commutativity requirement by compatibility and extending the concept of (ε,δ)-f-contractions for two functions as given in [3] to four functions as follows. DEFINITION 1.1. [1] Let A, B, S, T be self maps of a metric space (X,d). A and B are (ε,δ)-S,T-contractions iff A(X) ⊆ T(X), B(X) ⊆ S(X), and there exists a function δ: (0,∞) → (0,∞) such that δ(ε) > ε for all ε > 0 and, for x, y ∈ X: (i) ε ≤ d(Sx, Ty) < δ(ε) implies d(Ax, By) < ε, and (ii) Ax = By whenever Sx = Ty.
As the preceding suggests, if A: X → X, we shall use Ax to denote A(x) when convenient and no confusion is likely. We also let N denote the set of natural numbers.
Of interest to us is the following result which combines the two main theorems proved in [1].
THEOREM 1.1. Let S and T be continuous self maps of a complete metric space (X,d), and let A and B be (ε,δ)-S,T-contractions such that the pairs A,S and B,T are compatible. Then A, B, S, T have a unique common fixed point if one of the following conditions (a) or (b) is satisfied: (a) A and B are continuous. (b) δ is lower semi-continuous.
Our purpose in "revisiting" [1] is to show that the preceding theorem can be appreciably generalized by using property (ii) in Definition 1.1 more extensively.In fact, we shall show that condition (b) can be dropped and that only one of the functions A, B,S, or T need be continuous.By so doing, we answer question 4.1 in [1], and highlight the role played by "compatibility" in producing common fixed points.
RESULTS.
We need the following from Proposition 2.2 in [1]. Moreover, if Ax_n, Bx_n → t for some t ∈ X and if A is continuous, then BAx_n → At.
The next result contributes to economy of effort. PROPOSITION 2.2. Let A, B, S and T be self maps of a complete metric space (X,d) such that the pairs A,S and B,T are compatible. Suppose that for x, y ∈ X, Sx ≠ Ty implies d(Ax, By) < d(Sx, Ty). (2.1) If there exist p, u, v ∈ X such that (∗) p = Au = Su = Bv = Tv, then p = Ap = Sp = Bp = Tp.
PROOF. Since p = Au = Su and A and S are compatible, Sp = SAu = ASu = Ap. But then, if p ≠ Ap, Tv ≠ Sp by (∗), so that (2.1) implies d(p, Ap) = d(Bv, Ap) < d(Tv, Sp) = d(p, Ap), a contradiction. Therefore, p = Ap = Sp. By symmetry, p = Bp = Tp. □ We now state and prove our main result. THEOREM 2.1. Let S and T be self maps of a complete metric space (X,d) and let A and B be (ε,δ)-S,T-contractions such that the pairs A,S and B,T are compatible. If one of A, B, S, or T is continuous, then A, B, S, T have a unique common fixed point.
PROOF. Since A and B are (ε,δ)-S,T-contractions, A(X) ⊆ T(X), B(X) ⊆ S(X), and as a consequence of (i) and (ii) in the definition we know that Sx = Ty implies Ax = By, and d(Ax, By) < d(Sx, Ty) if Sx ≠ Ty. (2.2) In particular, d(Ax, By) ≤ d(Sx, Ty) for x, y ∈ X.
(2.5)Moreover, since B(X) c_ S(X), there exists v E X such that S...v Bu T_.u, so that Av Bu by (2.2).From the preceding we infer, Az Bu Tu Sv Av, and we conclude by Proposition (2.2) that Az is the desired common fixed point of A,B,S, and T. Of course, a common fixed point is assured by symmetry if B is continuous.Now suppose that one of S or T, say S, is continuous.As above, (2.3) and Proposition 2.1 imply that ASz2n, SAz2n, SSz2n-Sz.(2.6)In this instance, we use the fact that A(X)c_ T(X) to produce v n .X for each n N such that Tvn=ASz2n.Then (2.6) implies d(SSz2n, Tvn)--,d(Sz, Sz)=O, so that (2.2) implies that d(Sz, Bvn) <_ d(Sz, ASz2n)+d(ASz2n, Bvn)O; i.e., Bvn, Tvn--,Sz.Consequently, by (2.2) we can write d(Bvn, Az <_ d(Tvn, Sz)-.O, so that Bvn-.Az and BVn-Sz; we conclude that Az Sz by "uniqueness of limits".Again, since A(X)c_ T(X), there exists u E X such that Tu Az Sz.Thus, Bu Az by (2.2).
We have Az = Sz = Bu = Tu, so that Proposition 2.2 implies that A, B, S, T have a common fixed point. By symmetry, the conclusion also holds if T is continuous.
We conclude by noting that the uniqueness of the common fixed point p follows easily from (2.2). □ COROLLARY 2.1. Let A, B, S, T be self maps of a complete metric space (X,d) such that the pairs A,S and B,T are compatible, and A(X) ⊆ T(X), B(X) ⊆ S(X). If there exists r ∈ (0,1) such that d(Ax, By) ≤ r d(Sx, Ty) for x, y ∈ X, then A, B, S, and T have a unique common fixed point provided one of these four functions is continuous.
COROLLARY 2.2. Let f be a bijective self map of a complete metric space (X,d). Suppose that for any ε > 0 there exists δ > 0 such that, for all x, y ∈ X, ε < d(fx, fy) < ε + δ implies d(x,y) < ε; then f has a unique fixed point.
PROOF. The conclusion follows from Theorem 2.1 with δ(ε) = ε + δ, f = S = T, and A = B = I, the identity map, which is continuous and commutes with, and is therefore compatible with, any self map of X. □ 3. AN EXAMPLE AND CONCLUSION.
It is natural to ask if we could drop all continuity requirements in Theorem 2.1 and still obtain the conclusion. The following example shows that this would be impossible. EXAMPLE 3.1. Let X = [0,1] and let d be the absolute value metric. Let Ax = Bx = x/2 for x ∈ (0,1] and Ax = Bx = 1/2 if x = 0, and let Sx = Tx = x for x ∈ (0,1] and Sx = Tx = 1 if x = 0. Then A(X) = B(X) = (0, 1/2] ⊆ S(X) = T(X) = (0,1]. A and S are compatible, since A and S commute. Moreover, d(Ax, By) = (1/2)|Sx − Ty| for all x, y ∈ X. Consequently, Corollary 2.1, and hence Theorem 2.1, is false without a continuity requirement on at least one function.
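To make Example 3.1 concrete, the following short Python check (a sketch written for illustration; the function names and the grid are not part of [1]) verifies numerically that d(Ax, By) = (1/2)|Sx − Ty| and that no point of [0,1] is fixed by all four maps.

```python
# Numerical illustration of Example 3.1 (sketch; not part of the original paper).
# A = B and S = T are the self maps of X = [0,1] defined in the example.

def A(x):
    return x / 2 if x > 0 else 0.5   # A0 = B0 = 1/2

def S(x):
    return x if x > 0 else 1.0       # S0 = T0 = 1

grid = [i / 200 for i in range(201)]

# Check the contractive identity d(Ax, By) = (1/2)|Sx - Ty| on the grid.
assert all(abs(abs(A(x) - A(y)) - 0.5 * abs(S(x) - S(y))) < 1e-12
           for x in grid for y in grid)

# Check that no grid point is a common fixed point of A, B, S, T.
assert not any(abs(A(x) - x) < 1e-12 and abs(S(x) - x) < 1e-12 for x in grid)

print("Example 3.1 checked on the grid: contraction holds, no common fixed point.")
```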
The literature abounds with attempts to generalize theorems which use inequalities of the form (i) in Definition 1.1 by substituting more elaborate expressions M(x,y) for d(Sx, Ty). In the instance in which only one function is continuous, as in Theorem 2.1, care should be exercised. For example, if for d(Sx, Ty) in (i) we substitute M(x,y) = max{d(Sx, Ty), d(Sx, Ax), d(By, Ty)}, Theorem 2.1 is false. To see this, modify the above example by letting T(0) = S(0) = 0 and A(0) = B(0) = 1. (See also the paper [4] by Rao.) In fact, this modified example is a counterexample to the main theorem, Theorem 1, in [5]. It is shown in a paper [6], which is yet to appear, that if the contractive definition in [5] is modified by introducing the function δ of our Definition 1.1 and requiring that δ be lower semi-continuous, then the Theorem of [5] is valid. All of which suggests that the hypothesis of our Theorem 2.1 is quite tight. | 2017-07-28T04:56:22.527Z | 1994-01-01T00:00:00.000 | {
"year": 1994,
"sha1": "711651b2c2affe45c4cb6b62684047894ccc2bb4",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijmms/1994/989106.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "741c52f816b28fa2bd1f31a0296685aa72c430f0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
239466933 | pes2o/s2orc | v3-fos-license | Facile Vacuum Annealing-Induced Modification of TiO2 with an Enhanced Photocatalytic Performance
In this work, the photocatalytic performance enhancement of hydrothermally prepared TiO2 was achieved by facile vacuum annealing treatment. Calcination of TiO2 powder in air (CA-TiO2) maintained its white color, while gray powder was obtained when the annealing was performed under vacuum (CV-TiO2). Fourier transform infrared, total organic carbon, X-ray photoelectron spectroscopy, and electron paramagnetic resonance analyses proved that vacuum annealing transformed ethanol adsorbed on the surface of TiO2 into carbon-related species accompanied by the formation of surface oxygen vacancies (Vo). The residual carbon-related species on the surface of CV-TiO2 favored its adsorption of organic dyes. Compared with TiO2 and CA-TiO2, CV-TiO2 exhibited an improved charge carrier separation with surface Vo as trapping sites for electrons. Vacuum annealing-induced improvement of crystallinity, enhancement of adsorption capacity, and formation of surface Vo contributed to the excellent photocatalytic activity of CV-TiO2, which was superior to that of commercial TiO2 (P25, Degussa). Obviously, vacuum annealing-triggered decomposition of ethanol played an important role in the modification of TiO2. In the presence of ethanol, vacuum annealing was also suitable for the introduction of Vo into P25. Therefore, the current work offers an easy approach for the modification of TiO2 to enhance its photocatalytic performance by facile vacuum annealing in the presence of ethanol.
■ INTRODUCTION
Since the discovery of the Fujishima−Honda effect in 1972, 1 tremendous efforts have been devoted in investigating the potential applications of photocatalytic oxidation and reduction reactions. 2,3 Due to its excellent stability, nontoxicity, and cost effectiveness, titanium dioxide (TiO 2 ) has become one of the most studied photocatalysts. 4−6 However, the photocatalytic performance of TiO 2 is limited by its intrinsic wide band gap and fast recombination of photogenerated charge carriers. 7,8 Thus, the enhancement of photocatalytic performance of TiO 2 by modification is of great interest. 5,6,9 The photocatalytic activity of TiO 2 can be improved either by reducing its band gap to expand its photoresponse range from the UV to the visible range, which takes up the main proportion of solar light, or by preventing the recombination of photogenerated charge carriers by trapping or transfer. 6,10,11 To promote its charge carrier separation ability, TiO 2 is commonly coupled to other semiconductors with a smaller band gap or to metallic nanoparticles. 12−15 The commercially available TiO 2 (P25, Degussa) possesses high photocatalytic activity due to the mixed anatase and rutile phase, allowing efficient charge carrier transfer. 16,17 In order to reduce its band gap, TiO 2 is usually doped with metallic or nonmetallic elements to form an intermediate band state into its band gap, which extends the absorption properties of TiO 2 to visible light. 9,18,19 Asahi et al. 20 reported the incorporation of nitrogen (N) into TiO 2 for photocatalytic degradation of organic pollutants under visible light. Carbon (C)-doped TiO 2 for efficient water splitting under visible light has been reported. 21,22 C-doped TiO 2 showed high crystallinity and a unique microstructure, contributing to its remarkable photocatalytic degradation ability. 23 Self-doped TiO 2 with titanium (Ti 3+ ) was proposed and enabled an increase in the activity under visible light for efficient photocatalytic degradation of methylene blue (MB) and rhodamine B (RhB). 24 Co-doped TiO 2 with Ti 3+ and N exhibited synergistic effects for photocatalytic water oxidation. 25 Besides, the introduction of surface oxygen vacancy (Vo) into TiO 2 was proposed as a credible alternative to improve the photocatalytic activity of TiO 2 . 8,26−31 This straightforward method has been widely achieved by annealing treatment under a reducing atmosphere, and it can realize not only doping TiO 2 but also creating defects, which improves the surface adsorption of the molecule and prolong the charge carrier lifetime. 32 The resultant TiO 2 photocatalyst, so-called "black titania", was obtained upon a thermal treatment under high pressure of H 2 and exhibited obvious visible light absorption. 33 Due to the H doping of TiO 2 , the optical band gap energy (E g ) value of black titania was reduced to 1.54 eV by introducing electronic states forming between the valence band and conduction band. Other emerging methods, such as the annealing of TiO 2 in anaerobic medium (without oxygen) or under vacuum, led to the formation of Ti 3+ and Vo, the amounts of which determined the color of TiO 2 (from gray to black). 32,34,35 Since vacuum annealing avoids the use of highly flammable H 2 gas, it offers safer working conditions and needs more investigations.
In this work, we performed vacuum annealing by sealing hydrothermally prepared TiO 2 in a vacuumed glass tube. Meanwhile, calcination of TiO 2 in air was also conducted for comparison. The variations of TiO 2 before and after heat treatment were characterized in detail. Photocatalytic activities of TiO 2 photocatalysts were checked by decomposing organic pollutants, and the differences of their photocatalytic activities were discussed.
■ RESULTS AND DISCUSSION
Characterizations of TiO 2 Photocatalysts. X-ray diffraction (XRD) and diffuse reflectance spectroscopy (DRS) measurements were performed to investigate the influences of heat treatment on the crystalline structure and optical band gap of TiO 2 , CA-TiO 2 , and CV-TiO 2 . All TiO 2 photocatalysts show similar diffraction patterns (Figure 1a), attributed to the anatase phase of TiO 2 (JCPDS-21-1272). 36 Therefore, hydrothermally prepared TiO 2 maintains the anatase phase after calcination in air and annealing under vacuum. The broad peak (indicated by the arrow, Figure 1a) in the diffraction pattern of CV-TiO 2 is due to the formation of amorphous carbon. 37 The sharper and narrower (101) peak of CA-TiO 2 and CV-TiO 2 indicates the enhancement of crystallization of TiO 2 after heat treatment. 38 Based on the full width at half maximum of the (101) plane and the Debye−Scherrer formula (eq S1), 34 the average crystallite sizes of TiO 2 , CA-TiO 2 , and CV-TiO 2 are estimated to be 9.9, 16.0, and 21.7 nm, respectively. DRS spectra (Figure 1b) show that all TiO 2 photocatalysts absorb in the UV range (200−400 nm), and no visible light absorption can be observed. Compared with TiO 2 and CA-TiO 2 , CV-TiO 2 shows decreased absorbance, which is also observed in Chen and co-workers' work. 39 It is clear that both TiO 2 and CA-TiO 2 appear to be white (the common color of TiO 2 , Figure 1b, inset), while CV-TiO 2 turns out to be gray, similar to carbon-doped TiO 2 . 35 This obvious color difference indicates that annealing of TiO 2 under vacuum produces gray calcinate. 22,35 Vacuum annealing-induced formation of gray calcinate darkens the color of CV-TiO 2 . The resultant gray calcinate is composed of carbon-related species resulting from ethanol decomposition on the surface of TiO 2 , which will be proved by Fourier transform infrared (FT-IR), total organic carbon (TOC), and X-ray photoelectron spectroscopy (XPS) analyses later. The gray calcinate cannot be washed away. It is reported that C-doped TiO 2 showed obvious absorption of visible light due to the C doping into the crystal structure of TiO 2 . 23,35 Since CV-TiO 2 shows no absorption of visible light, there is no doubt that vacuum annealing of TiO 2 results in the formation of gray calcinate without C doping of CV-TiO 2 . When existing on the surface of CV-TiO 2 , the gray calcinate may impair the absorption of UV light by CV-TiO 2 . The E g values are estimated from the intercepts of the corresponding K−M plots (Figure 1c) to be 3.17, 3.18, and 3.20 eV for TiO 2 , CA-TiO 2 , and CV-TiO 2 , respectively. 40 All E g values are around 3.2 eV, which is in agreement with the intrinsic E g of anatase. 19 All TiO 2 photocatalysts only absorb UV light due to the wide band gap, as they are composed of TiO 2 with a crystal phase of anatase, which agrees with the XRD analysis (Figure 1a). Therefore, XRD and DRS analyses proved that calcination has no influence on the crystal phase and E g values of all TiO 2 photocatalysts but increases the crystallinity and sizes of CA-TiO 2 and CV-TiO 2 .
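As a concrete illustration of the crystallite-size estimates quoted above, the sketch below applies the Debye−Scherrer formula (eq S1) to an anatase (101) reflection; the shape factor K = 0.9 and the peak width used here are illustrative assumptions, not the measured values from this work.

```python
import math

# Debye-Scherrer estimate D = K * lambda / (beta * cos(theta))  (sketch)
# K: shape factor (assumed 0.9), lam_nm: Cu K-alpha wavelength in nm,
# fwhm_deg: FWHM of the (101) peak in degrees, two_theta_deg: peak position.
def scherrer_size(two_theta_deg, fwhm_deg, lam_nm=0.15406, K=0.9):
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * lam_nm / (beta * math.cos(theta))

# Illustrative anatase (101) peak near 2*theta = 25.3 deg with an assumed FWHM.
print(f"D = {scherrer_size(25.3, 0.85):.1f} nm")  # roughly 9-10 nm for a broad peak
```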
The microscopic structures were observed by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and the images are shown in Figure 2. Heat treatment enables TiO 2 nanoparticles to merge into bigger ones. Compared with CA-TiO 2 , CV-TiO 2 shows larger particles in size, which indicates that vacuum annealing favors further fusion of TiO 2 nanoparticles. Thus, it is obvious that heat treatment enlarges the average crystallite sizes of TiO 2 , especially vacuum annealing.
The N 2 adsorption−desorption isotherms and Barrett− Joyner−Halenda (BJH) pore size distribution curves are employed to investigate the surface area and pore diameter of TiO 2 , CA-TiO 2 , and CV-TiO 2 . Figure 3 shows that the isotherms of all samples are of type II according to the IUPAC classification. 32 Absorption of N 2 by TiO 2 , CA-TiO 2 , and CV-TiO 2 can be ascribed to the interaggregated pores, which are structured by the neighboring nanoparticles, as shown in the SEM images (Figure 2a−c). 32,41 The surface areas and average pore diameters of TiO 2 , CA-TiO 2 , and CV-TiO 2 are determined to be 128. 30 TiO 2 photocatalysts, CV-TiO 2 shows the smallest surface area and biggest pore diameter, corresponding to its largest average crystallite size, implying that vacuum annealing favors the fusion of TiO 2 nanoparticles.
Energy dispersive X-ray spectroscopy (EDS) was conducted to determine the elemental composition of all TiO 2 photocatalysts. As the EDS analysis was performed by depositing all TiO 2 photocatalysts on aluminum foil, the influence of conductive paste, which contained C, was excluded. EDS spectra ( Figure S2 To characterize the chemical structure and binding energy of all samples, XPS analysis was performed. XPS spectra ( Figure 4b) of all TiO 2 photocatalysts show characteristic peaks of C (1s), Ti (2s, 2p), and O (1s). The C 1s XPS spectra (Figures 4c and S3) show that the peaks of C−C (284.6 eV) and C−O (286.1 eV) decrease after heat treatment. As mentioned above, there is no carbon in CA-TiO 2 , and thus, its C (1s) peak must come from the adventitious carbon species. 23 The decrease in the C−C peak (284.6 eV) proves that heat treatment induces evaporation and decomposition of ethanol adsorbed on the surface of TiO 2 in air but leads to the carbonation of ethanol under vacuum, producing carbon-related species, as the C 1s peak height of CV-TiO 2 is higher than that of CA-TiO 2 . 23,46 There is no C−Ti peak (280.4 eV) in C 1s spectra ( Figure S3), which suggests that no C is doped into the lattice of TiO 2 , which is in accordance with DRS analysis. 22,23 Ti 2p XPS spectra (Figure 4d) exhibit almost the same Ti 2p 3/2 (458.9 eV) and Ti 2p 1/2 (464.6 eV), implying the unchanged binding state of Ti 4+ after heat treatment. 43 According to the O 1s XPS spectra (Figure 4e), the Ti−O−Ti peak (530.1 eV) is almost the same for TiO 2 , CA-TiO 2 , and CV-TiO 2 , while CV-TiO 2 presents a new peak (532.1 eV) ( Figure S3), indicating the formation of oxygen vacancy (Vo). 23,47,48 Electron paramagnetic resonance (EPR) spectra (Figure 4f) show that CV-TiO 2 exhibits a clear signal characterized by the magnetic field strength centering at 3366 G and the electron's so-called g-factor of 2.004 originating from the presence of unpaired electrons trapped on Vo. 14,49−51 There is no Ti 3+ signal since its detection at room temperature is not possible. 52 As TiO 2 and CA-TiO 2 have no signal, EPR analysis further proves that vacuum annealing induces the formation of Vo in CV-TiO 2 . This easy approach for the creation of Vo is also suitable for P25. In the presence of ethanol, vacuum annealing induced the formation of Vo in P25, while no Vo is identified for vacuum annealing of P25 without ethanol ( Figure S4). Consequently, vacuum annealing results in the evaporation and decomposition of ethanol, forming carbon-related species on the surface of TiO 2 and the formation of Vo on its surface by the extraction of O from the Ti−O−Ti bond. 44,45 Photocatalytic Activity. The photocatalytic activities of all TiO 2 photocatalysts were evaluated by the photocatalytic degradation of organic dyes under UV light. In the absence of TiO 2 photocatalysts, the photolysis of RhB, methyl orange (MO), and MB is negligible (Figure 5a,c). The photocatalytic activities of TiO 2 , CA-TiO 2 , and CV-TiO 2 are compared to the photocatalytic performance of P25 (as a reference photocatalyst). As shown in Figure 5a, compared with TiO 2 and CA-TiO 2 photocatalysts (∼10%), increased adsorption of RhB by CV-TiO 2 (∼30%) was observed, although it possesses the smallest surface area (Figure 3). This high adsorption capacity of RhB may result from the presence of carbon-related species on the surface of CV-TiO 2 , which favors the adsorption of organic dyes. 
Under UV light irradiation, CA-TiO 2 and CV-TiO 2 show enhanced photocatalytic activity relative to TiO 2 . The photocatalytic performance of CV-TiO 2 is drastically improved, and it is even higher than that of P25. Cycling tests are performed to check the stabilities of TiO 2 , CA-TiO 2 , and CV-TiO 2 . Figure 5b shows the repeated four cycling tests of photocatalytic degradation of RhB by recovering all TiO 2 photocatalysts after each run. It is obvious that CV-TiO 2 shows the best photocatalytic activity, and the photocatalytic degradation rate becomes slow as the adsorption of RhB gradually decreases. The photocatalytic activity of CV-TiO 2 is also investigated by decomposing MO and MB. Figure 5c shows that CV-TiO 2 can efficiently decompose MO and MB, which is similar to the degradation of RhB. The excellent photocatalytic performance of CV-TiO 2 may originate from its high adsorption capacity and the presence of Vo, which contributes to the separation of photogenerated electrons and holes.
Proposed Photocatalytic Mechanism. Figure 6a shows the transient photocurrent response of TiO 2 , CA-TiO 2 , and CV-TiO 2 . Compared with TiO 2 , CA-TiO 2 and CV-TiO 2 present clear enhancement of transient photocurrent, indicating the improved charge carrier separation properties, which originate from their improved crystallinity. 17,34 The relative lower photocurrent of CV-TiO 2 than that of CA-TiO 2 may be caused by the trapping of electrons by Vo. 53,54 Trapping electrons by Vo on the surface of CV-TiO 2 can prevent the recombination of electrons and holes, further elongating the lifetime of holes. Thus, more holes contribute to the formation of more hydroxyl radicals (OH • ), which play a key role in the photocatalytic degradation of RhB by oxidation. 55 Photoluminescence (PL) spectra further support this assumption. Figure 6b shows that the fluorescence intensity of CV-TiO 2 is lower than that of TiO 2 and CA-TiO 2 . The emission signals in the PL spectrum are from the recombination of photogenerated electrons and holes. 35 The lower fluorescence intensity implies less recombination of electrons and holes as a result of the trapping of electrons by Vo. The reduced recombination of electrons and holes favors the formation of more OH • for photocatalytic oxidation. Therefore, the enhanced photocatalytic activity of CV-TiO 2 is due to the improved crystallinity, increased adsorption capacity, and formation of Vo.
■ CONCLUSIONS
In summary, the effects of heat treatment on the crystal phase and structure, band gap, morphology, composition, and photocatalytic properties of TiO 2 photocatalysts were investigated in detail. It is proved that all TiO 2 photocatalysts are composed of the anatase phase, and calcination improves the crystallinity and increases the average crystallite size and pore diameter but decreases the surface area of CA-TiO 2 and CV-TiO 2 with quite different appearances. CA-TiO 2 maintains the white color of TiO 2 because calcination in air can completely evaporate and decompose the ethanol adsorbed by assynthesized TiO 2 . However, vacuum annealing leads to the carbonization of ethanol by dehydration and the extraction of lattice O from TiO 2 , leaving gray calcinate and Vo on the surface of CV-TiO 2 . The gray calcinate is composed of carbonrelated species, which darken the color of CV-TiO 2 and favor the adsorption of organic dyes. Meanwhile, the formation of Vo on the surface of CV-TiO 2 traps photogenerated electrons, which promotes the charge carrier separation and enables more holes to participate in the formation of oxidizing species involved in the photocatalytic degradation. The improved crystallinity, enhanced adsorption capacity, and formation of Vo contribute to the superior photocatalytic performance of CV-TiO 2 relative to P25 in the degradation of organic pollutants. Thus, the decomposition of ethanol plays a vital role in vacuum annealing-induced modification of TiO 2 . This work offers an easy approach for the modification of TiO 2 to enhance its photocatalytic performance by facile vacuum annealing in the presence of ethanol. Vacuum annealing with ethanol-induced formation of Vo is also suitable for commercial TiO 2 (P25). Therefore, this easy method may be universal for the modification of semiconductor photocatalysts to improve the photocatalytic activity. The quantitative creation of Vo is of great importance, as too much Vo is detrimental to the photocatalytic performance. Further work concerning the controllable formation of Vo via vacuum annealing by controlling the amount of adsorbed ethanol is underway.
■ EXPERIMENTAL SECTION Materials. Titanium tetraisopropoxide (TTIP, 95%, Macklin) was used as the precursor for the hydrothermal synthesis of TiO 2 . Deionized water (DI, 18.2 MΩ·cm, Millipore system), ethanol, isopropanol, and acetone (AR grade, Rionlon) were used as solvents. A custom-built highborosilicate glass tube was applied for vacuum annealing. Nafion D-521 dispersion (5% w/w in water and 1-propanol, Alfa Aesar) and fluorine-doped tin oxide substrates (FTO, Youxuan Tech) were used to prepare the working electrode for transient photocurrent measurements. RhB (AR grade, Shanghai Zhongqin), MO, and MB (AR grade, Beijing Chemical Reagent) were selected as model organic pollutants for the evaluation of the photocatalytic activity.
Synthesis of TiO 2 Photocatalysts. The synthesis of TiO 2 was conducted according to a reported hydrothermal method. 56 In a typical synthesis, TTIP (10 mL) was placed in a Teflon liner, and DI water (3 mL) was added dropwise while stirring at room temperature. Then, the mixture was stirred for 30 min, transferred to a Teflon-lined autoclave, and kept at 180°C for 24 h in a high-temperature furnace. After cooling down to room temperature, white precipitates appeared at the bottom of the Teflon liner and were separated by centrifugation. The as-synthesized TiO 2 powders were washed three times with ethanol and dried by lyophilization overnight. The white TiO 2 powders were placed in a porcelain crucible or sealed in a high-borosilicate glass tube for calcination in air or annealing under vacuum at 500°C for 2 h at a heating rate of 2°C/min. The treated TiO 2 powders in air and under vacuum were labeled as CA-TiO 2 and CV-TiO 2 , respectively. All the as-prepared and sintered TiO 2 powders were collected for further characterizations and tests.
Characterizations. The diffraction patterns of TiO 2 , CA-TiO 2 , and CV-TiO 2 were recorded using XRD (X'Pert PRO) with a Cu Kα radiation source (λ = 1.5406 Å), and their corresponding crystallite sizes were estimated using the Debye−Scherrer formula (eq S1). 34 The absorption properties of all TiO 2 photocatalysts were studied by UV−vis DRS (UV-2600), and their E g values were estimated from the intercepts of their corresponding K−M plots (eqs S2−S4). 40,57 The morphologies of all samples were observed by SEM (Apreo S) and TEM (JEM-2100F). Brunauer−Emmett−Teller surface areas and the nitrogen adsorption and desorption isotherms of all TiO 2 photocatalysts were measured using an accelerated surface area and porosity analyzer (ASAP 2460). The pore size distribution was collected by the BJH method. The elemental composition was determined by EDS (X1 Analyzer, AMETEK). To do so, the powders of each TiO 2 photocatalyst were dispersed in DI water, deposited on aluminum foil, and dried naturally in air. Further chemical structure analyses were performed by FT-IR spectra (Bruker Alpha) and XPS (Escalab Xi + ) with Al Kα radiation. The binding energy of the C 1s peak (adventitious carbon species) was fixed at 284.6 eV to set the binding energy scale. 23 The contents of organic carbon in all TiO 2 photocatalysts were analyzed on a TOC analyzer (TOC-L) at 900°C. EPR (Bruker ER200DSRC) spectra of all samples were taken by applying an X-band (9.44 GHz, 2.47 mW) microwave and sweeping magnetic center field (3362 G) at room temperature. The transient photocurrent curve of each sample was measured using an electrochemical workstation (CHI 660D) in a typical three-electrode potentiostat system with a xenon lamp (300 W) as the light source. Alcohol suspensions of TiO 2 photocatalysts (20 μL, 10 mg/mL) were added to FTO substrates precoated with the Nafion D-521 dispersion. The surface area of the FTO substrate is 1 × 1 cm 2 . Steady-state PL spectra were recorded on a spectrofluorometer (FLS920) under 280 nm laser excitation wavelength.
Photocatalytic Degradation Tests. Photocatalytic degradation tests of organic pollutants were carried out under UV light irradiation (mercury lamp, 500 W) in a photochemical reaction apparatus (Beijing Princess Technology) equipped with short-wavelength pass filters (<400 nm) to filter the visible light. TiO 2 , CA-TiO 2 , and CV-TiO 2 (1 mg/mL) were added into aqueous solutions of RhB (10 ppm, 25 mL) and stirred in the dark for 120 min before photocatalytic tests. For comparison, the photocatalytic performance of commercial TiO 2 (P25, Degussa) was also evaluated. The photocatalytic activity of CV-TiO 2 was further assessed through the degradation of MO (25 ppm) and MB (10 ppm) organic dyes under the same conditions. The photolysis of organic dyes under the same conditions without TiO 2 photocatalysts was performed as the control. The concentrations of organic dyes were monitored by following the maximum absorbance of RhB at 554 nm, MO at 464 nm, and MB at 664 nm. The ratios of residual organic dyes versus irradiation time were calculated by the expression C/C 0 , where C and C 0 were the recorded concentration at different time intervals of irradiation and the initial concentration, respectively. The stability and reusability of all TiO 2 photocatalysts were checked by performing cycling tests of photocatalytic degradation of RhB four times.
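For readers reproducing the C/C 0 analysis described above, the following sketch converts monitored absorbances into C/C 0 and extracts an apparent pseudo-first-order rate constant; the kinetic fit and all numbers are illustrative assumptions rather than data from this study.

```python
import numpy as np

# Illustrative C/C0 analysis for dye degradation (assumed absorbance values).
# By Beer-Lambert, concentration is proportional to absorbance at the dye's
# maximum (554 nm for RhB), so C/C0 reduces to A(t)/A(0).
t_min = np.array([0, 10, 20, 30, 40, 60])                     # irradiation time, min
absorbance = np.array([1.00, 0.72, 0.51, 0.37, 0.26, 0.13])   # assumed values

c_over_c0 = absorbance / absorbance[0]

# Apparent pseudo-first-order kinetics: ln(C0/C) = k_app * t
k_app = np.polyfit(t_min, np.log(1.0 / c_over_c0), 1)[0]
print("C/C0:", np.round(c_over_c0, 2))
print(f"k_app = {k_app:.3f} min^-1")
```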
Digital photograph of vacuum annealing of TiO 2 ; vacuum annealing of P25; EDS and XPS of TiO 2 photocatalysts; EPR spectra of P25; UV−vis spectra of organic dyes; and corresponding digital photographs during the photocatalytic degradation (PDF) | 2021-10-18T18:34:19.294Z | 2021-09-28T00:00:00.000 | {
"year": 2021,
"sha1": "60596cbebcda7455599ee198dd73b424952f7e78",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.1c03762",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62ec2d2f80899d47874d8b790643f8359896f1ff",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221655281 | pes2o/s2orc | v3-fos-license | Fluctuation guided search in quantum annealing
Quantum annealing has great promise in leveraging quantum mechanics to solve combinatorial optimisation problems. However, to realize this promise to its fullest extent we must appropriately leverage the underlying physics. In this spirit, I examine how the well-known tendency of quantum annealers to seek solutions where higher levels of quantum fluctuations are present can be used to trade off optimality of the solution to a synthetic problem for the ability to have a more flexible solution, where some variables can be changed at little or no cost. I demonstrate this tradeoff experimentally using the reverse annealing feature of a D-Wave Systems QPU for both problems composed of all binary variables, and those containing some higher-than-binary discrete variables. I further demonstrate how local controls on the qubits can be used to control the levels of fluctuations and guide the search. I discuss places where leveraging this tradeoff could be practically important, namely in hybrid algorithms where some penalties cannot be directly implemented on the annealer, and provide some proof-of-concept evidence of how these algorithms could work.
Introduction and background
Quantum annealing, in which combinatorial optimisation problems are mapped directly to Hamiltonians and solved using sweeps of quantum parameters, has been a subject of much interest recently. This is in part due to the wide variety of potential applications in a diverse range of subjects, including for instance air traffic control [1], hydrology [2], protein folding [3], flight gate assignment [4], finance [5][6][7], and even quantum field theory [8]. This subject has further attracted interest because of the experimental maturity of the flux qubit devices produced by D-Wave Systems Inc., which allow for large scale experimentation.
* Electronic address: nicholas.chancellor@gmail.com
One crucial direction in the growth of flux qubit quantum annealing is an increase in the variety of controls which users can apply to the experimental quantum annealing process on flux qubit annealers. Traditionally formulated quantum annealing starts from an easy-to-prepare ground state of a so-called driver Hamiltonian and monotonically interpolates the Hamiltonian to a problem Hamiltonian with an unknown ground state. However, major advantages can be gained by using a different control pattern known as reverse annealing, which starts in a state which is a guess for the solution of the optimisation problem, turns on fluctuations, and searches nearby states in Hamming distance by taking advantage of thermal dissipation [9]. Likewise, controls have been added which allow different qubits to be annealed differently [10].
These new features have proven useful in a variety of ways, reverse annealing for instance is motivated by the ability to implement more complex algorithms than traditional forward annealing [11], and these algorithms have shown promising initial experimental results. For example it was shown in [6] that starting from the output of a simple classical algorithm can lead to a large improvement over forward annealing. References [12,13] showed that iterative methods can help over non-negative matrix factorization. The work in [14] showed experimentally that adding mutation performed using reverse annealing can aid the performance of genetic algorithms. Furthermore the simulation of the celebrated Kosterlitz-Thouless phase transition in [15] would not have been possible without reverse annealing techniques. Similarly, anneal offsets have shown promise in synchronizing the freezing of qubits [10].
The tendency of fluctuations to lead to uneven sampling of ground state manifolds has traditionally been viewed as a drawback for quantum annealing [16][17][18]. However, it has been observed that when coupled with classical techniques, this uneven sampling could be a positive feature because the states which quantum annealers find tend to be very different from those found by classical solvers, and therefore could give a more complete picture of the manifold [19] if both were used together. In this paper, I explore a different advantage of this preferential search, the fact that it tends to find states which are flexible in the sense that some variables can be changed at little to no energy cost.
In this paper I experimentally investigate the role which quantum fluctuations can play in the local search which reverse annealing implements. This is done by using specialized Hamiltonians which represent hard problems for the annealer (although not necessarily hard in the computational sense) and have sets of local minima in their energy landscape where fluctuations are enhanced. I also show that the anneal offsets can be used to guide the search by locally enhancing fluctuations on some parts of the system. The technique of locally enhancing fluctuations is reminiscent of the methods proposed in [20], and provides some experimental validation of these concepts.
I use these fluctuations to trade off optimality in solutions for flexibility, in other words to find solutions which are a bit less optimal, but for which certain variables can be changed at little or no cost. I argue that this is a property which is likely to be relevant in some real world situations and give a motivational example of how it can be used in a hybrid quantum-classical algorithm to find a more optimal solution in the presence of a global penalty function which is not encoded into the annealer.
On the devices studied here (D-Wave 2000Q quantum processing units, QPU), dissipation plays an important (often positive [21]) role in the annealing process, and the reverse annealing techniques used here fundamentally rely on dissipation. The intuition developed here however is likely to carry over into the more coherent protocols proposed in [22][23][24]. This is relevant because coherence rates can be improved through a variety of routes, both in superconducting flux qubit architectures [25,26], and trapped ion quantum annealers [27]. Furthermore, there is significant evidence that in the fully coherent regime fast but coherent quenches, known as 'diabatic' quantum computing, may be a promising path to a quantum advantage [28]. This is due both to adiabatic mechanisms involving multiple energy levels [28,29], and mechanisms related to energy transfer [30][31][32].
This paper is structured as follows: first, in section I, I describe the details of the experiment and discuss how the Hamiltonians used for the experiment are constructed. In section II I give the core experimental results, demonstrating how fluctuations can enable a tradeoff between optimality and flexibility of solutions, as well as how anneal offsets can be used to guide the search by emulating these non-engineered fluctuations. Next, in section III, I give a motivational example of how trading off optimality and flexibility can be useful. I then discuss some of the more detailed aspects of the experimental methods and conclude the paper with some discussion.
I. EXPERIMENTAL SETUP
These experiments involve both specially engineered Hamiltonians to construct a search space with the necessary properties, and the use of advanced control features of the QPU, both anneal offsets and reverse annealing, which are used in combination. I first describe how the Hamiltonians are constructed, and then how they are used in the actual experimental protocols. Before I do this, it is useful to provide some background on the operation of the quantum annealer. This QPU realizes a transverse field Ising Hamiltonian (Eq. 1) in which A(s) and B(s) are non-linear functions of a control parameter 0 ≤ s ≤ 1, X_i is a Pauli X acting on qubit i, and H_prob is a programmable Ising problem Hamiltonian in which Z_i is a Pauli Z acting on qubit i; the details of how the J_ij and h_i are chosen are discussed later. The QPU is designed so that A(0) ≫ B(0) and the ratio A(s)/B(s) decreases monotonically with s. While the details of how these quantities depend on the annealing parameter s are known, they are not important for this study beyond these basic facts. I do employ a more advanced feature known as anneal offsets, which slightly changes the form of Eq. 1 and will be discussed in due course.
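For reference, a transverse field Ising Hamiltonian of the kind realized by this QPU is conventionally written as below; this is a sketch, and overall normalisation factors and sign conventions may differ from the exact form of Eq. 1 in the original.

```latex
H(s) = -A(s)\sum_i X_i + B(s)\,H_{\mathrm{prob}},
\qquad
H_{\mathrm{prob}} = \sum_i h_i Z_i + \sum_{i<j} J_{ij}\, Z_i Z_j .
```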
A. Hamiltonian construction
The goal of the experiments in this paper is to study the ability of a quantum annealer to use fluctuations to find high quality solutions which are flexible in the sense that changing some elements of the solution will not affect the energy of the solution, or will only affect it very little. Since I am not developing this study as a benchmark against classical methods, I have focused on designing Hamiltonians which are difficult for the annealer to solve, and have a known solution, but which are not necessarily computationally hard problems. To this end, the problems used here build on the planted solution construction from [33], which yields limited computational hardness [34,35] (for state-of-the-art solution planting techniques, see [36]). Furthermore, I use many more clauses than would be desirable to construct the hardest problems in the interest of ensuring that the problem graph is connected and to reduce the degeneracy of the ground state manifold.
The method which I use, proposed in [33], constructs problems with planted solutions by generating overlapped frustrated loops on the edges of the underlying graph via random walks which terminate when they intersect their own path. I use planted solution problems with loop size greater than eight and 8,000 loops on a QPU with approximately 2,000 qubits (some of which are reserved for specialized features as discussed later in this section), with a coupling arrangement known as a chimera graph. Figs. 2 and 3 depict chimera graphs with a 3x3 grid of eight qubit unit cells; the 2000Q has the same eight qubit unit cells arranged in a 16x16 grid.
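A minimal sketch of the frustrated-loop planting step described above is given below. It assumes an adjacency-list description of the hardware graph and a planted state of all +1 spins, and is written for illustration rather than as the exact routine used in [33] or in this work.

```python
import random

def add_frustrated_loop(graph, couplings, min_len=8):
    """Random-walk on `graph` (dict: node -> list of neighbours) until the walk
    intersects its own path, then couple the resulting loop so that the planted
    all +1 state violates exactly one (antiferromagnetic) bond.  Sketch only;
    overlapping loops simply add their couplings."""
    while True:
        start = random.choice(list(graph))
        path = [start]
        while True:
            nxt = random.choice(graph[path[-1]])
            if nxt in path:                      # walk hit its own path: close a loop
                loop = path[path.index(nxt):]
                break
            path.append(nxt)
        if len(loop) >= min_len:
            break
    edges = list(zip(loop, loop[1:] + loop[:1]))
    frustrated = random.randrange(len(edges))
    for k, (i, j) in enumerate(edges):
        key = (min(i, j), max(i, j))
        # ferromagnetic (-1) bonds agree with the planted all +1 state;
        # the single +1 bond is the one the planted state must violate
        couplings[key] = couplings.get(key, 0) + (+1 if k == frustrated else -1)
```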
In addition to having a known planted solution, the experimental Hamiltonians also need features which can explore the ability of the annealer to use fluctuations to find more flexible solutions. Since I intend to study the ability to find more flexible solutions in both a binary and a discrete setting, two different kinds of structures are embedded: gadgets in which binary variables are allowed to become "free", henceforth referred to as "gadgets", as well as chains of qubits which encode discrete variables using the domain wall encoding described in [37], henceforth referred to as "chains".
Fortunately, the planted solution construction does not require a full chimera graph to be effective. This means that the construction can be performed with some qubits reserved for either gadgets or chains, and these features can be added in later. The gadgets are constructed with the following properties:

To embed discrete variables, I use the domain wall encoding from [37] to encode a variable with 16 possible values within a 15 qubit chain with unit ferromagnetic coupling strength. For completeness, I review the domain wall encoding in the appendix. I use the field controls of the annealer to control the potential on this chain such that the 0 value of the variable (all qubits in the |0⟩ configuration) has the same energy as the minimum energy in a 'soft' region which corresponds to seven consecutive values of the discrete variable, randomly chosen to start anywhere from two to six (recall that I use a convention where the allowed values run from 0 to 15). The chain is coupled to the rest of the problem Hamiltonian on the first and last qubit of the soft region, such that the planted solution must be frustrated if the domain wall is in the soft region. All other values of this variable have an energy which is two energy units higher than either the minimum of the soft range or the 0 state of the variable. Henceforth I refer to a chain where the domain wall is in the soft region as a 'soft chain'. The qubit chain used to encode the domain wall variable is randomly placed within the planted solution problem by performing 15 steps of a non-self-intersecting random walk on the hardware graph; an example of a chain within the larger Hamiltonian is depicted in Fig. 3.
The potential within the soft range is always equal to the 0 value of the variable at the midpoint m of the range. Away from the midpoint the potential increases such that E(m+j) = E(m)+s|j/2| where the parameter 0 ≤ s ≤ 1 is the "softness parameter" of the chain. Lower values of s allow for more fluctuations since it costs less energy for the domain wall to move away from the centre of the chain.
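As a concrete illustration of the domain-wall chains used here, the sketch below decodes a 15 qubit chain into the corresponding 16-valued discrete variable. It assumes the convention that the variable's value is the number of qubits reading 1 before the single domain wall (so all zeros is value 0 and all ones is value 15); it is illustrative and not the exact code used for these experiments.

```python
def decode_domain_wall(bits):
    """Decode a domain-wall chain (list of 0/1, length N-1) into a discrete
    variable with N values.  Convention assumed here: value k <-> the first k
    qubits are 1 and the rest are 0.  Returns None for states that break the
    encoding (i.e. contain more than one domain wall)."""
    # a valid chain is monotonically non-increasing: 1...1 0...0
    if any(a < b for a, b in zip(bits, bits[1:])):
        return None
    return sum(bits)

# Example: a 15-qubit chain whose domain wall sits four sites along the chain.
chain = [1] * 4 + [0] * 11
assert decode_domain_wall(chain) == 4
assert decode_domain_wall([0, 1, 0] + [0] * 12) is None  # broken encoding
```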
For both the gadget and chain versions of the problem, ten Hamiltonians were created at random. Other than the reduced strength of the gadget couplings, the free and locked gadget runs use the same ten Hamiltonians. Each Hamiltonian includes either 15 gadgets or 15 chains.
B. Annealing protocol
The key feature of the Hamiltonians constructed for these experiments is that they have known planted solutions. This is crucial for the purpose of this study: to explore the ability of the device to trade off between optimality and flexibility, we need to start off in a state which is known to be optimal. Fortunately the reverse annealing feature [9] allows for a search around the planted (or any other classical) state. The reverse annealing feature uses a protocol which starts the QPU in a state determined by the user at s = 1, anneals to a value s′ held for a time τ, and then anneals back to s = 1, as depicted in Fig. 4. Thermal dissipation allows the device to seek out lower energy states during the reverse annealing protocol.
In addition to the reverse annealing feature, I also make use of another feature called anneal offsets [10]. The function of this feature is to offset the annealing parameter on different qubits. In particular, I offset the parameter values of either the chains or the gadgets (a subset of qubits I call g), which modifies the Hamiltonian so that qubits with i ∈ g, meaning that qubit i belongs to a gadget or chain, are effectively annealed at s + δs, while qubits with i ∉ g are annealed at s. The effect of these offsets is to either locally enhance (negative δs) or suppress (positive δs) fluctuations within the gadgets or chains. The effect of combined anneal offsets and reverse annealing is depicted in Fig. 4.
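In a simplified form in which only the transverse-field term carries the offset (an assumption made for readability; on the hardware the problem term is rescaled per qubit as well), the offset Hamiltonian can be sketched as:

```latex
H(s,\delta s) = -A(s+\delta s)\sum_{i\in g} X_i \;-\; A(s)\sum_{i\notin g} X_i
\;+\; B(s)\, H_{\mathrm{prob}} .
```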
For all experiments reported here, the anneals in the reverse annealing protocol were performed at the maximum allowed rate, which traverses from s = 0 to s = 1 in 5 µs, and a hold time τ of 20 µs was used. The same parameters were used for chains and gadgets. For all values of s′, I used a grid of 11 values of δs evenly spaced between −0.2 and 0.2, inclusive of the end points. Since not all qubits are capable of the full range of offset values, the maximum allowed magnitude (positive or negative) was used when the desired value fell outside of the range. Because I wanted to study extreme values of s′ as well as more values within a region of interest where the data were observed to change rapidly with s′, I chose the non-uniform grid of 19 values of s′ depicted in Fig. 5.
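For concreteness, a reverse-annealing call with these settings could be assembled roughly as follows. The schedule arithmetic follows the 5 µs maximum-rate ramps and 20 µs hold quoted above, while the commented sampler call uses parameter names from D-Wave's Ocean interface and should be treated as an illustrative assumption rather than the exact script used for this work.

```python
# Sketch of a reverse-anneal schedule with per-qubit offsets (illustrative).
def reverse_anneal_schedule(s_target, hold_us=20.0, ramp_us=5.0):
    """Piecewise-linear (time in microseconds, s) schedule: start at s = 1,
    ramp down to s_target at the maximum rate, hold, then ramp back up."""
    down = ramp_us * (1.0 - s_target)   # the max rate traverses s = 0..1 in ramp_us
    return [[0.0, 1.0],
            [down, s_target],
            [down + hold_us, s_target],
            [2 * down + hold_us, 1.0]]

schedule = reverse_anneal_schedule(s_target=0.42)
delta_s = -0.1                      # offset applied to gadget/chain qubits
gadget_qubits = {5, 12, 13}         # illustrative qubit indices

# offsets = [delta_s if q in gadget_qubits else 0.0 for q in range(num_qubits)]
# sampleset = sampler.sample(bqm, anneal_schedule=schedule,
#                            initial_state=planted_state, reinitialize_state=True,
#                            anneal_offsets=offsets, num_reads=1000)
print(schedule)
```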
II. RESULTS
In this section I discuss the results of the experiments, which demonstrate how both existing and introduced fluctuations can be used to guide the search which a quantum annealer performs. First I will introduce how the number of free gadgets or soft chains can be controlled by different parameters, such as the value of s and the anneal offsets applied to the chains or gadgets. Measures of the performance of these different control settings will be introduced in Sec. II A and further discussed in Sec. II B. A proof-of-principle example for how guided search can be useful will be discussed in Sec. III.
The first result which we find is that the number of free gadgets and soft chains can both be increased by decreasing the value of s′, in other words by increasing the range of the search. Fig. 6 shows this effect for gadgets: not only are more gadgets free at lower values of s′, the effect is also much stronger when the gadgets are not locked, indicating that the free variables have a significant effect on the dynamics. For s′ ≳ 0.45 the dynamics are highly localized and very few if any gadgets are free. A similar effect is seen for the chains, where a higher hardness coefficient leads to fewer soft chains. As with the gadget example, nontrivial reverse annealing dynamics are seen for 0.38 ≲ s′ ≲ 0.45. We further observe that if we apply anneal offsets to the locked gadgets, we can mimic the effect of the free variables; as Fig. 8 shows, the proper choice of anneal offsets renders the distributions indistinguishable for the locked and unlocked gadgets. I show in Sec. II B that introduced fluctuations from anneal offsets can be as effective, if not more so, than fluctuations due to truly free variables.
The question now becomes whether anneal offsets can similarly mimic the effect of a lower hardness coefficient for chains within the planted solution Hamiltonian. Fig. 9 indicates that it cannot: while a negative anneal offset parameter δs < 0 increases the number of soft chains at intermediate values of s′, it decreases the number at low s′. Therefore no single value can be used to mimic the behaviour of a lower hardness coefficient simultaneously in both regimes. This is likely due to the more complicated structure of the chain encoded discrete variables. I demonstrate in Sec. II B that, in contrast to the locked versus unlocked gadget example, anneal offsets cannot make up the difference between the minimum and maximum hardness of the chains.
A. Conditional performance
Simply analysing solution optimality is a losing proposition, since I have designed the experiments such that, by construction, there is no way of improving beyond the starting condition. However, there is still hope to find high quality solutions which meet conditions which the global solution does not. I define this as conditional performance, the best performance attainable which also meets certain conditions. Because of how the gadgets and domain wall variables have been constructed, the condition I have chosen to analyse is how many gadgets can be in the free configuration, or chains can be in a soft configuration. This is an interesting criteria since free gadgets and soft chains both make the solution more flexible, allowing for modifications which can be made with little or no energy cost. This flexibility could be important in real world scenarios, for instance if small changes to the solution may need to be made after the time of solving to account for unpredictable events, or if the annealer is being used as part of a hybrid solving technique where difficult to encode global constraints are not included (for an example of the latter see [6]). In Sec. III, I give an example where flexible solutions can be used to gain an advantage when an additional non-linear constraint is added.
For a fair comparison, we should compare the results from the annealer with a trivial classical strategy of simply frustrating the couplings between the gadgets or chain and the rest of the problem; this 'trivial' strategy leads to a cost per gadget or chain of 2 energy units compared to the most optimal solution. Solutions with a lower cost per gadget/chain are in principle interesting solutions, whereas those which have a higher energy than the trivial approach are not, since there is a known method which will always attain a better solution using the same starting information. Since the focus of this work is proof-of-concept rather than benchmarking, I will not explore whether or not there are other, less trivial, classical algorithms which can have better conditional performance than the annealer.

Figure caption: Energy cost per free gadget for ten different Hamiltonians using the best performing value of s′; blue plusses are without anneal offsets, red crosses are with the best anneal offset (including the possibility of no offset). Red boxes and blue circles represent the mean without and with anneal offsets respectively, with error bars representing the standard error. The black dashed line is a guide to the eye at a cost of 2.

To start off, let us examine the conditional performance for the Hamiltonian with gadgets inserted without using anneal offsets. As Fig. 10 shows, even without anneal offsets the annealer is able to outperform a trivial algorithm in all but one case, in which the energy cost is more only if every gadget is made free. When different anneal offsets on the gadgets are allowed, the energy cost per free gadget never exceeds 1.5.
For discrete variables represented as domain walls, reverse annealing is also usually able to find a solution which beats the trivial approach; in fact Fig. 11 shows that, even without using anneal offsets, the annealer was always able to find a solution which was better than the trivial approach when the soft region of the chain is flat (softness parameter s of 0). Even when the region of the chain which is being searched is not flat, but a sloping minimum (softness parameter s of 1), the annealer is able to beat the trivial approach in most cases, and always does so both on average and for all cases examined with fewer than 14 soft chains. The results for the higher softness parameter are depicted in Fig. 12.
I have now shown that reverse annealing in combination with anneal offsets can be effective at modifying solutions to meet certain conditions, but have not elucidated why or how this might happen; in the next subsection I examine potential underlying mechanisms and discuss what the data can teach us about anneal offset strategies.
B. Performance with anneal offsets and locked gadgets
It is now worth examining more closely the role which quantum fluctuations play in conditional performance, by comparing Figs. 11 and 12 (averages directly compared in Fig. 13 (left)). We are able to see that better solutions are possible with a lower softness parameter s; the question we have not explicitly answered yet is whether the same is true for the fluctuations the free spins cause in the gadgets. To do this we need to compare the 'free' and 'locked' versions of the gadgets as introduced in Sec. I. As Fig. 13 (right) shows, in the absence of anneal offsets having locked gadgets is very detrimental to performance, at least if more than about 6 gadgets are desired to be free. On the other hand there is barely any difference once anneal offsets are employed, suggesting that the offsets can enhance the fluctuations and guide the search. Conversely, the effect of anneal offsets seems to be rather minimal for discrete variables encoded in chains.
The first question to ask is what the optimal value of s′ is for a given desired number of free gadgets or soft chains, and how this is affected by factors like whether or not gadgets are locked and the softness parameter used for chains, as well as whether or not anneal offsets are used. Fig. 14 shows the optimal value of s′ for both gadgets and chains under different circumstances. The first thing to notice from this figure is that, perhaps unsurprisingly, s′ decreases monotonically (within statistical uncertainty) with the desired number of free gadgets or soft chains; this behaviour makes intuitive sense, because changing more variables requires a broader search. Furthermore, consistent with Fig. 13, the values of s′ based on whether or not anneal offsets are used differ much more for gadgets than for chains, indicating that allowing anneal offsets greatly changes the optimal strategy for gadgets, and does not change it as much for chains. Furthermore, except for when about 14 or more free gadgets are desired, the optimal value of s′ when anneal offsets are used is almost the same for locked and unlocked gadgets, supporting the hypothesis that increased fluctuations from anneal offsets can act as an effective proxy for truly free variables.

Figure 14 (caption): Average value of s′ (averaged over ten Hamiltonians) used to obtain optimal conditional performance with a desired number of gadgets free (top) or chains soft (bottom). Red and magenta squares represent cases where no anneal offsets are used and are unlocked (softness parameter 0) and locked (softness parameter 1) respectively. Blue and black circles represent the cases where anneal offsets are used, and are unlocked (softness parameter 0) and locked (softness parameter 1) respectively. Error bars represent the standard error. In all cases the largest value of s′ was taken in the event of a tie. Insets are the same plots but zoomed out.
To better understand the role anneal offsets are playing, it is worth examining how the best choice of anneal offset depends on the number of free gadgets or soft chains desired. As Fig. 15 shows, the best strategy is indeed to use stronger offsets in the locked gadget case, and to use them to enhance rather than suppress fluctuations on the gadgets, suggesting that there is indeed a mechanism whereby offsets artificially guide the search by making the locked gadgets behave as if they have free qubits. Fig. 15 further shows that domain wall encoded discrete variables show very different behaviour to the gadgets; in particular, up to statistical uncertainty, the offsets used in the discrete variable case monotonically approach zero as more soft chains are desired, while for gadgets with free binary variables there is non-monotonic behaviour, and a trend toward locally enhancing fluctuations if more free gadgets are desired. This difference is likely due to the more complex structure of the domain wall encoded variables, leading to less tolerance to fluctuations before they no longer faithfully encode the intended variable.
III. MOTIVATIONAL EXAMPLE FOR FLEXIBLE SOLUTIONS
Now that we have shown that the underlying dynamics of quantum annealers can be used to find solutions which are more flexible, it is worth demonstrating an example where such solutions could be useful. To do this, we consider a problem which natively fits onto the chimera graph, but is also subject to a global non-linear penalty. Such global penalties are likely to be encountered in realistic problems, and for example may arise when a shared resource is being used for different purposes and there is a penalty which depends on the total amount required. A simple example of how such a constraint could arise in the real world is minimising the total cost of a project if a company owns X units of a piece of equipment, so there is no penalty for a solution which uses any number up to X; however, there is a cost associated with renting every additional piece of equipment beyond the original X.
While techniques are known to implement global non-linear penalties on quantum annealers, for example those proposed in [38,39], these techniques require a fully connected graph and a number of auxiliary qubits equal to the number of original qubits; such an encoding is not practical for large problems on existing quantum annealers. We consider an alternative strategy for solving such problems: we first encode the entire problem except for the global penalty onto the annealer, and use reverse annealing techniques to find solutions with various levels of trade-off between flexibility (for example measured by the number of free gadgets) and optimality. I then perform greedy optimisation as described in the methods section starting from the best solution found at each level of flexibility. This greedy optimisation is performed against the entire problem including the non-linear penalty.
Before considering the results for the QPU-sized problems used in earlier demonstrations, it is worth demonstrating this approach with a simpler 16 qubit example. To do this, we consider the Hamiltonian used in [21], similar to the Hamiltonian considered in [40]. This Hamiltonian has both a local minimum where eight of the 16 qubits are "free", able to exist in either the zero or one state without incurring an energy penalty, and a global minimum where none of the qubits are free; the (unique) ground state and first excited state manifold of this Hamiltonian are depicted in Fig. 16. At least for short runtimes, quantum annealers will typically find the false minimum with more free qubits due to a close avoided crossing relatively late in the annealing schedule [21].
We now consider the ability of the solution to adjust to non-linear penalties of different strength. The global non-linear penalty I elect to use is a non-linear function of the Hamming distance D from a random state, where n is the number of qubits involved in the Hamiltonian. The states which the annealer returns will be a Hamming distance D = n/2 away from most random states; therefore this penalty offsets the Gaussian from the point where a typical solution will sit by its standard deviation, √(n+1). This will guarantee that the non-linear penalty will have a substantial gradient for typical starting states.

Figure 16 (caption): The 16 qubit gadget used in [21]. Edges represent ferromagnetic coupling of unit strength, and circles represent qubits. Red colouring indicates that a qubit is subject to a field of +1, while magenta colouring indicates −1 and grey indicates no field. In the top diagram the arrows indicate the unique ground state, which satisfies the +1 fields on the outer qubits but frustrates the −1 field. The bottom diagram is the first excited manifold, where a superimposed 0 and 1 indicates that a qubit is "free" and can take either value without affecting the energy.
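The exact functional form of the penalty is not reproduced here; purely as an illustration of the qualitative description above (a Gaussian in Hamming distance displaced by roughly √(n+1) from the typical distance n/2), one possible stand-in is sketched below. The specific constants are assumptions.

```python
import math, random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nonlinear_penalty(state, reference, strength):
    """Illustrative Gaussian penalty in Hamming distance (assumed form, not the
    exact penalty of the original study): centred sqrt(n+1) away from the typical
    distance n/2 so that typical solutions feel a substantial gradient."""
    n = len(state)
    d = hamming(state, reference)
    centre = n / 2 + math.sqrt(n + 1)
    return strength * math.exp(-((d - centre) ** 2) / (2 * (n + 1)))

reference = [random.randint(0, 1) for _ in range(16)]
print(nonlinear_penalty([0] * 16, reference, strength=5.0))
```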
Equipped with this definition, we consider the results of adding a non-linear penalty followed by a greedy search for the 16 qubit problem mentioned earlier. As Fig. 17 shows, it is much easier for the greedy search heuristic to compensate for the global non-linear penalty starting from the higher energy but more flexible solution which the annealer finds than from the true minimum; the result is that, for moderate penalty strength, the more flexible state is a superior choice for a starting configuration.
A. Synthetic use case: optimizing with global non-linear penalties

We now consider what happens when we apply a non-linear penalty followed by greedy search to states with different numbers of free gadgets found for QPU-scale problems. While neither the original problem nor the non-linear penalty is based on anything one might encounter in the real world, recall that situations where a problem containing a non-linear penalty must be solved are realistic. This can therefore be considered a "synthetic" use case for a quantum annealer: not directly based on an application, but with a structure which is likely to be encountered in the real world. We start by considering the best solutions the annealer could find with different free gadget numbers for a single Hamiltonian, in this case Hamiltonian number 7. As Fig. 18 shows, as the penalty strength is increased to a moderate value, the best solution is no longer obtained by starting a greedy search at the true energy minimum, but by starting with a more flexible state with more gadgets free. For these experiments I only consider the best solution found with each number of gadgets free, choosing at random in the event of a tie. From Fig. 19 we can see that the behaviour seen in Fig. 18 is indeed typical of results found both with and without anneal offsets, although, unsurprisingly, the cases where anneal offsets are used perform better on average, since lower energy solutions can be found by using anneal offsets.

Figure 18: This plot is for Hamiltonian number 7 and for the best solution found including the use of anneal offsets, although it is typical of the behaviour seen in both cases. The green line is included as a visual aid and follows the state with zero gadgets free. These data were averaged over 300 choices of random states, and in cases where multiple states were tied for the lowest energy for a given number of free gadgets, a new state was chosen at random for each sample.

Figure 19: Energy difference between greedy search performed with a non-linear penalty starting in the planted solution (with no gadgets free) and the best performing state found via reverse annealing. The top left panel only considers solutions found without using anneal offsets, while the top right one is the same but including offsets. The coloured lines represent the average over all ten Hamiltonians, while the grey squares represent individual Hamiltonians. The bottom plot shows only the averages, with the blue circles representing the method including offsets and the red squares the method without. These data were averaged over 300 choices of random states, and in cases where multiple states were tied for the lowest energy for a given number of free gadgets, a new state was chosen at random for each sample.
Finally, we consider the optimal number of free gadgets in the starting state for different Hamiltonians and penalty strengths. Fig. 20 shows that, for both the strategy using anneal offsets and the one which does not, the typical number of gadgets free in the best performing state increases for a while with penalty strength and then settles to an average across all Hamiltonians of around seven gadgets free. While it is possible that the average number of gadgets free is slightly higher for the strategy using offsets, the difference is relatively small. It is however clear that for the solutions which used anneal offsets there is a much wider variety of solutions, and in particular a tendency to use some solutions with many more gadgets free.
IV. METHODS
All reverse annealing experiments were performed using the maximum allowed annealing rate on both the forward and reverse anneal; at this rate the entire (forward) anneal would be completed in 5 µs. All experiments used a hold time τ of 20 µs. All annealer calls were set to perform 1,000 individual runs. The reverse annealing experiments presented here were performed using the D-Wave quantum annealer [41]. Greedy optimisation was performed by checking all single bit flips and performing the one which reduces the energy the most, choosing at random in the event of a tie. The greedy procedure is repeated until no single bit flip will reduce the energy.
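For concreteness, a minimal sketch of the greedy optimisation step described above is given below. The energy callable is assumed to return the problem energy of a 0/1 state plus any non-linear penalty; its exact form is not prescribed here.

import numpy as np

def greedy_descent(state, energy, rng=None):
    # Single bit-flip greedy search: evaluate every single bit flip, apply the
    # one giving the largest energy decrease (ties broken at random), and repeat
    # until no flip lowers the energy. `energy` is any callable mapping a 0/1
    # array to a float.
    rng = np.random.default_rng() if rng is None else rng
    state = np.array(state, dtype=int)
    current = energy(state)
    while True:
        trials = np.empty(len(state))
        for i in range(len(state)):
            flipped = state.copy()
            flipped[i] ^= 1
            trials[i] = energy(flipped)
        best = trials.min()
        if best >= current:  # no single flip improves the energy
            return state, current
        i = rng.choice(np.flatnonzero(trials == best))  # random tie-breaking
        state[i] ^= 1
        current = best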
All plots were produced using the Python language [42] and the matplotlib plotting package [43]; the code used to produce the plots and perform the experiments is available from the same public repository as the experimental data. Heat-map plots with non-linear grids were plotted such that the centre of each cell aligns with the value of each axis. The NumPy [44] and SciPy [45] packages were also used, as well as Jupyter notebooks [46] and the IPython interpreter [47].
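As an illustration of the heat-map convention mentioned above, the sketch below converts non-linearly spaced cell-centre values into cell edges for matplotlib's pcolormesh, so that each cell is centred on its axis value. The example grid values are placeholders and not the ones used in the experiments.

import numpy as np
import matplotlib.pyplot as plt

def centres_to_edges(centres):
    # Midpoints between neighbouring centres, with the two end edges
    # extrapolated, so each heat-map cell is centred on its value.
    c = np.asarray(centres, dtype=float)
    mid = 0.5 * (c[1:] + c[:-1])
    return np.concatenate(([2 * c[0] - mid[0]], mid, [2 * c[-1] - mid[-1]]))

# Placeholder example: non-linear penalty strengths vs. number of free gadgets.
x = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
y = np.arange(0, 11)
z = np.random.default_rng(0).random((len(y), len(x)))
plt.pcolormesh(centres_to_edges(x), centres_to_edges(y), z, shading="flat")
plt.colorbar()
plt.xlabel("penalty strength")
plt.ylabel("number of free gadgets")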
V. DISCUSSION AND CONCLUSIONS
In this paper, I have demonstrated how fluctuations can guide quantum annealers to trade off optimality for more flexible solutions, and have motivated cases where such a tradeoff could be useful. The particular use case I focus on is when a problem involves global penalties which cannot practically be implemented on the annealer. While in the past the tendency of quantum annealers to find solutions where fluctuations are stronger has been seen as a weakness, for instance in inhibiting the ability to uniformly sample ground states, I demonstrate ways in which it could be useful.
In addition to demonstrating that the existing fluctuations on the annealer can help guide searches toward more flexible states, I show that locally offsetting the annealing schedule of the qubits can be used to guide the search. This provides experimental motivation for methods like those proposed in [20], which incorporate bitwise uncertainty into algorithms.
While not explored here, it is likely that analogous effects could be seen in quantum-inspired algorithms based on spin-like systems, for example quantum Monte Carlo techniques [48], which should show effects analogous to the fluctuations observed here. In fact, the proof-of-concept numerics in [20] exhibited that fluctuations can attract quantum Monte Carlo dynamics preferentially to some minima over others. This work has introduced new ways in which quantum annealers and related algorithms can be used beyond directly finding the optimal solution, an important direction in hybrid quantum-classical computing. By laying the groundwork for how modifying fluctuations locally can be used algorithmically to guide a search, the work here opens a new path to using these modified fluctuation strengths algorithmically, in a similar vein to currently used reverse annealing techniques, but guiding the direction of the search rather than the starting point.

constrained to take either the 1 or 0 value. As was discussed in detail in [37], any interaction between two discrete variables can be realized using two-body couplings between the qubits in the domain wall encoding, and arbitrary penalties can be realized by putting fields (single-body terms) on the chain. Moreover, the domain wall encoding of a binary variable simply reduces to a normal qubit representation, as depicted in Fig. 21 (bottom).
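To make this concrete, the sketch below builds the single-body field coefficients that penalise one chosen value of a domain-wall-encoded discrete variable, in the spirit of the δ_i = (1/2)(Z_i − Z_{i−1}) terms described in the next paragraph. The qubit indexing and the sign convention used here are assumptions made for illustration rather than the exact convention of [37]; terms on virtual (fixed) end qubits are simply omitted, which is how the extreme values are handled.

def domain_wall_value_penalty(value, num_values, strength=1.0):
    # A discrete variable with `num_values` values is encoded on a chain of
    # num_values - 1 physical qubits (indices 0 .. num_values - 2). Penalising
    # `value` adds a field of +strength/2 on qubit `value` and -strength/2 on
    # qubit `value` - 1; indices -1 and num_values - 1 correspond to virtual
    # qubits and their terms are dropped (this handles the extreme values).
    fields = {}
    if 0 <= value <= num_values - 2:
        fields[value] = fields.get(value, 0.0) + strength / 2
    if 0 <= value - 1 <= num_values - 2:
        fields[value - 1] = fields.get(value - 1, 0.0) - strength / 2
    return fields

# Example: penalising value 2 of a four-value variable (three physical qubits)
# returns {2: +strength/2, 1: -strength/2}; penalising the extreme value 0
# returns only {0: +strength/2}.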
I am interested in simple couplings which force frustration in the planted solution problem if the domain wall variable takes one of its soft values, while simultaneously avoiding the need for minor embedding. To do this, I place a single ferromagnetic coupler between the qubits encoding the discrete variables and the other qubits at each end of the soft region. For the additional energy penalties on the chain, I make use of the fact that a single (non-extreme) value of the discrete variable can be penalized using a term of the form δ_i = (1/2)(Z_i − Z_{i−1}). The extreme values can be penalized in the same way, but omitting the terms which correspond to virtual qubits. This method is described in more detail in [37], and software for realizing these encodings can be found at [50]. | 2020-09-15T01:01:20.258Z | 2020-09-14T00:00:00.000 | {
"year": 2020,
"sha1": "7a19793229b1cb79f52657a962d140264f0c28f5",
"oa_license": null,
"oa_url": "http://dro.dur.ac.uk/32771/1/32771.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d8202d9896bdb9535d2168e91a9fa95df2e86d60",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
252563947 | pes2o/s2orc | v3-fos-license | “It Was Such a Different Experience”: a Qualitative Study of Parental Perinatal Experiences When Having a Subsequent Child After Having a Child Diagnosed with Autism
Children who have an older sibling diagnosed with autism have an increased likelihood of being diagnosed with autism or developing broader developmental difficulties. This study explored perinatal experiences of parents of a child diagnosed with autism, spanning pre-conception until the subsequent child’s early developmental period. Qualitative interviews were conducted with ten parents of a child diagnosed with autism, and ten parents of a child with no neurodevelopmental diagnosis, each of whom had gone on to have a subsequent child. Thematic analysis occurred concurrently with data collection and involved comparisons between the two samples. Four themes were identified in relation to the perinatal period of a subsequent child following the autism diagnosis of an older child. These were parental experiences of “apprehension”, “adjustment”, and “adaptation”, underpinned by the “importance of support”. Many experiences of parenting were similar between the two groups, with comparison between the groups identifying the role of autism in an increased focus, concern, and hypervigilance to their child’s development. Having a child diagnosed with autism intensifies some of the common experiences of parenting and infancy. The challenges identified by parents throughout the experience of parenting an infant after having a child diagnosed with autism indicate that the development of supports could help empower families in this situation going forwards.
The challenges that behaviours associated with autism can present, and their influence on general child development and daily functioning, can result in varied impacts on caregivers and families (Gau et al., 2012), as can the time-intensive demands on parents to support therapy attendance and implementation (Hastings & Johnson, 2001;Sawyer et al., 2010). As such, having a child diagnosed with autism has been found to impact parents in both the short and long term, including: parental stress and well-being; family functioning; financial stress; and planning for subsequent children (Davis & Carter, 2008;Hastings & Johnson, 2001;Navot et al., 2016).
Qualitative studies have examined the lived experience of parents caring for an autistic child, revealing themes of this lived experience from early development through to post-diagnosis, including: the centrality of autism in the family's life and routines (DeGrace, 2004;Myers et al., 2009;Woodgate et al., 2008); parental isolation (Myers et al., 2009;Woodgate et al., 2008); insufficient support and the challenges of navigating the systems (Nicholas et al., 2016;Woodgate et al., 2008); and the re-configuring of their understanding of parenting (Nicholas et al., 2016). A meta-synthesis of qualitative research in this area identified six themes across studies of parents' experience of parenting a child with autism: pre-diagnosis, diagnosis, family life adjustment, navigating the system, parental empowerment, and moving forward (Depape & Lindsay, 2015).
However, as has previously been highlighted (Brian et al., 2018), this field of work is developing, and many aspects of the experience have yet to be explored in the literature. With children being diagnosed from as early as 30 months of age (van't Hof et al., 2021), for many families the diagnostic process-or a preceding period of parental queries or concerns-often coincides with decisions around having further children (Tessema et al., 2021). Previous examinations in this field have found that the parental experience of having a child diagnosed with autism affects family planning in multiple ways. For parents who have had a child diagnosed with autism, the most prevalent perceived cause for autism was genetic factors, and for many parents this played a significant role in their family planning decisions (Selkirk et al., 2009). Further qualitative research in this field has found that whilst individual cognitive factors-such as parental cognitive flexibility-impact family planning decisions, ongoing challenges managing life after the diagnosis were also important in making decisions to not have, or to delay having further children (Navot et al., 2016).
Around 20% of infants who have an older sibling diagnosed with autism are later diagnosed with autism themselves (Ozonoff et al., 2011), and a further 20-30% develop broader developmental difficulties (Messinger et al., 2013). However, unlike for single-gene inheritance disorders such as fragile-X syndrome or cystic fibrosis, whilst the elevated genetic likelihood of having a subsequent child with autism is known, no prenatal test or parental genetic testing can further inform this. Additionally, there is no test in early infancy to diagnose autism and provide parents in these early developmental stages with any certainty around whether or not their child is likely to be autistic, or direction on what supports may be needed. However, early diagnosis and early intervention are hypothesised to improve long-term outcomes (Johnson et al., 2007;Rutter, 2006). Parents of children diagnosed with autism have indicated a high awareness of the message around the importance of early intervention but have also indicated that this results in both promise and pressure (Edwards et al., 2017).
Whilst research has explored how the diagnosis of a child with autism has short- and long-term impacts on multiple aspects of families' lives, for those parents who do go on to have further children, research has yet to explore how their experience of having had a child diagnosed with autism impacts their experience of parenting a subsequent child. In particular, it is unknown how this situation impacts parents' perinatal experiences when having subsequent children. Given the expected uncertainty of parents, who know of the elevated likelihood of subsequent children having autism and of the importance of early intervention, we might expect this period of parenting to be impacted.
We have an additional responsibility as researchers to better understand the parental lived experience of this time period, as this sibling cohort (the younger siblings of children diagnosed with autism) has been, and continues to be, the focus of a great deal of prospective research. This includes research on early interventions, often implemented by their parents. It is thus of key importance for clinicians, and in particular researchers in this space, to better understand the lived experience of parents: how having had a child diagnosed with autism before having another child impacts their experience of parenting, and thus how this may impact the implementation of interventions and the wellbeing of parents and families. Therefore, this qualitative study aimed to explore the lived experience of pregnancy and parenting of a subsequent child after having had a child diagnosed with autism.
Participants
Two groups of participants were sought for this study. Inclusion criteria for the first (focal) group were parents who had a child diagnosed with autism before the pregnancy of a subsequent child, with the younger child under 6 years old. This age restriction was employed to increase the chance that parents would be able to accurately recall aspects of pregnancy and the first 6 months of their child's life. Inclusion criteria for the second (comparison) group were parents who had two or more children, with the youngest child under 6 years old, where none of their children had a diagnosis of a neurodevelopmental condition. Exclusion criteria for both groups were parents without conversational fluency in English, parents who lived further than a 1-h commute from Perth, Western Australia (to enable in-person interviews), and participants under 18 years old. The aim was to recruit mothers and fathers; however, only mothers volunteered to participate. A convenience sample of ten participants in each group (n = 20) was recruited through social media, directly contacting parenting groups, and through a database of previous research participants. Participant recruitment was reviewed throughout the data collection and initial data analysis phases. Recruitment ceased when no new preliminary themes were identified, with the ensuing sample size then sufficient to meet the study aim.
All participants were from two-parent households, with two or three children in each family. For the focal group, diagnostic information was collected for older siblings with an autism diagnosis, as well as for any other siblings where relevant. Varied diagnostic presentations (with different severity levels across domains of communication and restricted and repetitive behaviours, and with or without language impairment) and ages of diagnosis (23 months to 8 years) were reported and are summarised in Table 1, along with information on family structure and sibling details. For the comparison group, family structure is reported. Families had two or three children, with 40% of focal families having two children and 80% of the comparison families having two children. For the focal group, 70% of the focal children (discussed in the interview) were male and their mean age was 3.5 years (SD = 2.6); for the comparison group, 40% of the focal children were male and their mean age was 3.3 years (SD = 1.8).
Procedures
A phenomenological approach (Braun & Clarke, 2006;Creswell et al., 2007) with in-depth interviews was used to obtain a comprehensive understanding of how having had a child diagnosed with autism impacted parents' experiences of pregnancy and parenting a subsequent child. The first author (DC) led this study as part of her doctoral research. At the time she was a clinical psychologist trainee, having previously worked closely with parents and families of children diagnosed with autism in the Perth community in research (primarily conducting clinical assessments) and service roles. The first author (DC) engaged in the data analysis process under mentorship from the senior author (KE), who has experience with numerous qualitative studies, including doctoral research looking at parental experience broadly and postdoctoral research exploring the perspectives of autistic individuals and their caregivers. The other authors (AW and MM) provided feedback throughout to contextualise the research study within the clinical and research landscape where they have extensive experience. An initial intake screening phone call was utilised to confirm study eligibility and provide study information. The first author (DC) provided interested participants with an information sheet and obtained written consent prior to conducting each interview. Before the commencement of interviews, the intake survey was conducted to collect family demographic information. The primary form of data collection involved in-depth interviews using an interview guide to ensure questioning was consistent within and between the two groups. The interview guide was created in collaboration with two senior autism researchers and a child health nurse, and included a minimal number of broad, data-generating questions as recommended for phenomenological studies (Brod et al., 2009;van Manen, 2016). The initial question asked of parents was as follows: "Could you tell me about your experience of the pregnancy, birth, and early development of X"? Temporal probes were then used to follow up and guide the interview, prompting for different time periods, and content probes were used to prompt for different elements of the experience, such as concerns, difficulties, or supports (see Fig. 1).

Table 1 Demographic data, family structure, and diagnostic information for participants (focal group / comparison group)
Characteristics of sibling(s):
Sex: Male 10 (62.5%) / 6 (50%); Female 6 (37.5%) / 6 (50%)
Age (years): 7.5 (2.4) / 6.7 (2.4)
Diagnosis: ASD (level 2) (DSM-5) 5 (31.1%); ASD (level 3) (DSM-5) 1 (6.2%); ASD (level not specified) 3 (18.8%); Autistic disorder (DSM-IV) 4 (18.8%); Language impairment (DSM-5) 2 (25.0%); GDD 1 (6.2%); ASD queried 2 (12.5%); Nil 2 (12.5%) / 12 (100%)
Age of ASD diagnosis (years): 3.9 (1.9) / N/A
DSM Diagnostic and Statistical Manual (American Psychiatric Association, 2000); GDD Global Developmental Delay. a Under DSM-5 a diagnosis of Autism Spectrum Disorder (ASD) is given, with level 1 indicating "Requiring support", level 2 indicating "Requiring substantial support", and level 3 "Requiring very substantial support". Similarly, a diagnosis of language impairment is or is not given for children diagnosed under DSM-5 (these levels and specifiers were not present in DSM-IV diagnoses). b All siblings are the same age (i.e., the twin) or older than the focal child in both groups. c All families had one child diagnosed with autism before the pregnancy of the focal child. Diagnostic information for all siblings from families is presented here, including additional siblings.
Open-ended probes were used to facilitate parents' telling of their stories. The interview guide was piloted with two parents, and their feedback was used to amend prompts to ensure they encompassed a wide breadth of experiences.
Interviews took place face-to-face in a quiet, private location of the participants' choosing, either at their own home or in clinical rooms. The mean interview time for the focal group was 66 min (ranging from 47 to 102 min) and 46 min for the comparison group (ranging from 35 to 81 min). Field notes describing the context and interviewer reflections were made by the first author (DC) within the day following the interview. Interviews were audio recorded with the participant's consent and were transcribed verbatim by the first author using Microsoft Word. Transcriptions were imported into NVivo software (QSR International, 2018) to assist in storing and managing the analysis.
Data Analyses
Interview transcripts were thematically analysed by the first and senior authors to explore the experiences of parents through the following four steps (Braun & Clarke, 2006;Creswell et al., 2007). First, the focal and comparison group transcripts were reviewed in their entirety by the first and senior authors to ensure they were familiar with the entire breadth of parental experiences, allowing them to attain a depth of understanding for the analysis (Braun & Clarke, 2006;Englander, 2012). Second, the first author inductively coded the focal and comparison group transcripts through repeated reading of the text, identifying excerpts that were reflective of the experience, and assigning codes that reflected these broader themes or meaning units (Braun & Clarke, 2006;Creswell et al., 2007). Third, the first and senior authors met on several occasions to review the focal group codes and develop a thematic structure through arranging and re-arranging similar codes together for continuous comparison to form preliminary themes and sub-themes (Braun & Clarke, 2006). Preliminary themes and sub-themes for the focal group were re-evaluated by the first author as new data became available, and the transcripts were reviewed repeatedly for significant statements in an attempt to find meaning and understanding through themes (Braun & Clarke, 2006). During this phase, codes from the comparison group were compared to the thematic structure established for the focal group to determine if these experiences were unique to the focal group or common across both groups. No additional themes were identified for the comparison group during the inductive coding process undertaken by the first author. Discussion between all authors continued throughout this final phase until a final thematic structure, consisting of four themes and nine sub-themes, was confirmed. Reflexivity was addressed throughout the data analysis process to ensure the findings were not the result of preconceived ideas by the authors. The first author (DC) kept a field journal as a strategy to reflect on the lens through which she was thinking about and interpreting the data collected and on the analysis of this data (Krefting, 1991). This reflection, alongside collaborative discussions throughout the study between the authors, assisted in identifying the expectations and viewpoints of the authors at all stages of the study, such as being intentional throughout data collection not to provide leading questions or posit assumptions in interviewing participants, or in interpreting the data (Olmos-Vega et al., 2022). Dense descriptions and numerous participant quotations were utilised throughout to ensure the findings reflected the experiences portrayed by the parents (Krefting, 1991). As multiple quotes from different parents reflected each theme, priority was given to selecting quotes from the breadth of parents to ensure all participants' voices were represented.

Figure 1 (interview guide). Initial question: "Could you tell me about your experience of the pregnancy, birth, and early development of X?"
Temporal probes: Tell me about when you found out you were pregnant. Tell me about during the pregnancy. Tell me about around the time of the birth. Tell me about the first few weeks after the birth. Tell me about the first 6 months.
Content probes: Were there any particular concerns that you had around this time? Was there anything around this period of time that you felt made your situation more difficult? Was there anything around this period of time that you felt made your situation easier? Were there any supports that you felt you did benefit from at this time? Was there any information or supports that you felt you would have benefited from at this time? With the care of your child in this period, what do you think worked well and is there anything you would do differently? What would be your advice to other parents in the same situation?
Results
The analysis of the experiences of parents who have a child diagnosed with autism and have gone on to have a subsequent child resulted in four main themes (Fig. 2). The first three themes, "apprehension", "adjustment", and "adaptation", refer to separate phases that the parents moved between. The final theme, "importance of support", underpinned the other three themes, in particular parents' description of transitioning between the three phases. In reporting the findings, we provide a description of participants, the thematic framework and the themes. Each theme comprises several sub-themes. For each theme, we provide: (a) a description of each subtheme; (b) illustrative quotes that describe the sub-theme in the words of participants; and (c) a summary of the similarities and/or differences between the focal and comparison groups. A summary table of the themes and sub-themes with example quotes is provided in Table 2.
Uncertainty -"No One's Got a Crystal Ball"
A common expression of apprehension amongst parents was experiencing uncertainty around what to expect. For parents, some of the uncertainty around parenting of subsequent children was experienced during the family planning stages. A common experience amongst parents was investigating the likelihood of having another child on the autism spectrum: "definitely when I found out that I was having her [second child], that thought entered my mind straight away". Parents sought information from medical professionals, other parents, and online sources, resulting in different outcomes and responses. Some parents recalled being told of the increased likelihood of having another autistic child, for example: "the likelihood of having another child with autism was actually really quite high…[this] was quite confronting". Others felt that the increased likelihood was not elevated such that it was concerning to them: "the odds were pretty good… statistically speaking… so that put my mind at ease". The remaining parents found information "conflicting", leading to ambiguity in the decision-making process. For some, finding out the sex of their baby during pregnancy prompted reflection on the possible challenges: "obviously knowing that it was more common in boys…I would say that's probably the first thing that entered my mind". Uncertainty often led to "a gap" between children due to pausing their family planning. Multiple parents reported that the process of identifying developmental concerns and the diagnostic process meant that they "[weren't] even thinking about [having another child]". Parents felt they "wouldn't have been able to deal with it" yet due to "fear of having another child with autism". This additional time helped parents to feel "ready to have another one". Two parents did not experience uncertainty, as they were unaware of the increased likelihood of having another autistic child; hence, their experiences were not impacted. As one parent articulated: "if I'd been told…more information about it…there probably would have been more age difference between them". The theme of uncertainty was endorsed by some of the comparison group parents; however, the nature of the uncertainty was different. Parents of typically developing children primarily reflected on the uncertainty of how they would balance another child around the commitments of work and their older child(ren) as a factor in deciding the timing of having subsequent children.
Hypervigilance-"You're Constantly Looking"
Another expression of the apprehension described by parents was through hypervigilance. Many parents described hypervigilant responses throughout pregnancy and the first 6 months of life as they attempted to manage this uncertainty. The first response was increased attention and ascribed meaning to infant development to ensure the "presence of concerning developmental signs" was identified early, because "I don't want to be in the position again where my poor first child got no therapy for four years". Parents described a constant internal dialogue during daily activities, where they questioned if their infant "should be" behaving in specific ways. For some parents this was deliberate, for example "I'm paying more attention to …", "I'm really looking at …" and "I was quite paranoid about all of that stuff… not blasé about anything in terms of developmental". This resulted in worry for some parents, for example: "If I could have gotten help for the newborn I would have. That's how worried I was". For others this attention was less deliberate, for example: "I don't know if you're looking for it, but I think you're obviously more aware of it". This vigilance to monitoring development sometimes involved consulting clinicians (for example, "everything that can be asked got asked") and extensive research that "consumes you". This higher level of vigilance compared to their older child was explained with "Even though you always paid attention to [developmental behaviours], they have more meaning now" and "I'm aware of …how important those things are". For some parents, their second response was to augment their child's opportunities or environment to support development. This was described as putting in a "more concerted effort" during pregnancy and the early developmental period, such as "reading…medical reports", "food and diet" and doing "my own therapy at home" using knowledge from older children to boost language learning. This theme of hypervigilance was not described by the comparison group.
Comparison to Older Child-"You Don't Want to Compare, but You Do"
Apprehension also presented through the comparisons that parents made, intentionally and incidentally, with their elder child as "it's just your reference point". Parents expressed that the experience of parenting between their children was "definitely different", with parenting their younger child "like an experience I'd never experienced… a completely different feeling". Differences and similarities between children were noted by parents, with parents reporting they "often" or "always" felt like they are comparing. Comparisons of differences in attention, facial expressions, and eye contact were noted by parents, for example, noting that "Yeah oh my god it was like the attention, the attention, it's like she's actually looking in my eyes. Whereas (autistic child) never ever did that…all those differences there were mind blowing really". Noting differences in development were described as "a relief", with differences interpreted as reassurance that development was "on track": "I just turned to my husband and said, 'We don't have problems with this one'". The differences for parents made the experience of parenting between their children very different, where "it really was [like] having a baby for the first time". The experience of comparison was difficult for some parents when there were similarities to their autistic child, for example: "it was still panic, because, you know you're looking at every move … you're looking at all these things that would remind you of (autistic child)" and "we had definitely flagged by the time he was two months old that something wasn't quite right again. There was no eye contact ever…he was very much the picture of my first child all over again". This theme of comparison to older children was described differently by the comparison group, with parents generally not reporting concern or ascribing meaning to differences between children. For example, describing differences in developmental milestones as "kids do things differently and that's okay", and "you can't compare them, because they're just different personalities".
Finding Balance-"It Was Hard to Try and Manage It All at Once"
One of the core themes of adjustment reported by parents was around the challenges of balancing the demands of their parenting role, both between children as well as amongst other roles and responsibilities. For many this involved the challenges in balancing time and energy to ensure that "the logistics of having multiple children" were achieved-sometimes impacting on parental stress and wellbeing. Many parents discussed their attempts to meet all their children's needs equitably, taking time to ensure that "everything is given and done very equally, very fairly. I'm very, very conscious of that" and "they'd both have their needs and I really had to… just figure out how I could meet them both". A challenge was that support was not always sufficient: "if I need something done, it needs to be me" and "being time poor was…the biggest thing…there was no time". Coping strategies for time scarcity included humour and considering alternative perspectives, as one mother said, "I've always joked about my second and third child-"I'm not neglectful, I'm building resilience". Another challenge to achieving balance was influenced by the focus on managing the needs of an autistic child alongside a newborn. The challenges of balancing the needs of a developing infant were described to be complicated, as "the focus of caring for the newborn is secondary to maintaining support of autism", with some parents articulating the challenge of balancing time and attention between their children, for example: "because (autistic child) had this disability he required more attention than what (subsequent child) does…and so (subsequent child) missed out a fair bit". Parents described that their autistic children "were my main aim, not my newborn baby" and that "the siblings get forgotten about". One parent summarised this dynamic with: "the bottom line is that our whole house now is about autism and that's how we parent". Parents who felt this had not been a primary challenge still noted the impact: "we were probably lucky that our first child was the one with autism because everything was about her…and we had the time and we didn't have to feel guilty about [older] kids not getting attention". The theme of finding balance was also experienced by all parents in the comparison group, who discussed the challenges of balancing their time between commitments and between children, for example: "when [our second child] came into the fold, you felt like, am I giving enough attention to the baby because I'm giving a lot of attention to the three-year-old… I guess we were consciously trying to work out how do you negotiate time". This theme was experienced differently, however, with the challenge of achieving balance mainly focussed on how to split time between children, without the complicating factor of a focus on autism.
Managing the Emotional Response-"It Was Just Really Hard"
Parents' experience of adjusting involved the process of managing the emotional response to their situation. For many this involved a back and forth between proactively adjusting, focussing on the present and experiencing anxiety or guilt. This experience varied considerably between parents, as did the extent of the emotional experience that was recounted. Common emotional responses reported included feeling anxious, "stressed", "nervous", "panic", "guilt", "shame", and/or "fear." For some the uncertainty of the situation and "trying to juggle all the demands" made them feel "very, very overwhelmed" and as if they were "a wreck". For some parents the situation also evoked a sense of "mummy guilt", either because they felt "I'm failing…to do everything" for their children or because of the potential role their genetics had played: "the parental guilt is kind of huge…I've done this to you…I wouldn't swap him for the world and I love him to pieces but I wish I could make life easier for him and I can't". For many parents, the process of adjusting involved learning how to cope with these challenging emotional experiences, and just "crossing the bridge, just getting over all the problems one at a time and not worry[ing] about everything". For the comparison group, the adjustment process likewise evoked emotional responses, with feelings described of being stressed and overwhelmed in attempting to balance time, with parents "not feeling like you ever win at 'mumming'" and feeling that the challenges of this meant that "there's an enormous amount of guilt associated with parenthood". These emotional responses, whilst present, overall appeared to be of a lesser intensity and were reported less frequently for the comparison group.
Acceptance-"That's How We Contribute to the World"
One of the core themes of the parental experience was a sense of acceptance of everything that came with parenting their children, including the uncertainty. For some, this occurred during the family planning stages, feeling that "at the end of the day they're all kids, they're all special and unique in their own way … we didn't want to [not] have more kids because we were scared about having another child on the spectrum". For some parents this acceptance of uncertainty occurred during pregnancy or in early development, with parents learning to "try not to worry and stress about…what might happen, because whatever will happen will happen". Another strategy was focussing on the positives of having an autistic child, such as "I'm not saying it was easy, but I've accepted it…and I just don't need people feeling sorry for me… We have a beautiful child who's doing really well" and so "it wouldn't have worried me if I had another child with autism" because "I wouldn't fear it… it's what's made our lives really amazing and really special and why we're so close". This theme was not described by the comparison group.
Empowered Parenting-"We Can Do This"
A core theme related to acceptance and moving forward was parents' increased confidence in their parenting abilities, leading to a sense of empowered parenting. For some parents this confidence came from having already experienced pregnancy and parenting, feeling that "you have an instinct with your second child, you're not a new mum", and in particular their awareness of their own growth and abilities as parents, including in how to parent an autistic child. Parents felt that they "know more now…so if we do have to go through it again" they would feel prepared. Parents felt that they had benefited from their experience in parenting their elder child, both in their knowledge of parenting as well as their awareness of the systems and supports that were available should they be required. This sense of empowerment meant parents were more confident in how they would handle any possibilities. For example, when considering early development, they felt "that's the difference this time round…I'm not stressed if he's not doing something…I have information on how to help him get to that stage… Everything we've learnt with (autistic child)…has equipped us with knowledge", and "how we feel about autism and the skillset that we have is good. We're in a good place now". This theme was described by the comparison group, with parents reflecting that the experience of having had a first child had prepared them and made them more confident about having a second, although without the focus on autism. As one mother said, "I kind of felt like I had gained enough experience through having a first to know what to expect with the second", with parents "not questioning" how they parent as much with a second (or third) child.
Joy of Parenting-"He's the Sunshine of Our Lives"
The last element of acceptance was descriptions of the joy of parenting their subsequent child. Mothers explained that it was joyful to spend one-to-one time with their subsequent child with "it was nice to just do stuff with [my infant] and it was just him and I" and "I had … time exclusive with [my infant], so that was awesome". This feeling of joy became stronger as their child's personality started to emerge, for example: "he's completed our family…and [his older sister] adores him and he adores her". This theme was similarly described by the comparison group.
Importance of Supports
A pervading theme discussed by all parents was how key having a "supportive network in place" was in enabling them to cope with a subsequent child. The importance of support was highlighted throughout the different stages of the experience, with parents relying on support throughout as an integral element in supporting adjustment and adaptation. Parents explained that the positive role of support impacted decisions from family planning: "I probably wouldn't have had another child if I didn't have family around", to helping them cope when they "needed a break". As one parent summarised "just get as much as you can around you to help you…that's what got me through". The type of supports varied, with family, in particular grandparents, being relied upon for practical support, such as babysitting their younger child when taking their autistic child to therapy as "it was sometimes hard enough just taking him, let alone taking another baby as well". For many parents the support of other parents, in particular other parents of autistic children, was integral to wellbeing, with parents finding that "like-minded mothers…became my outlet, my mental health improved because I was with people who understood". Parents also discussed the difficulties of not "get[ting] the help when I really needed it" when the load of managing was difficult and "it's always been me carrying all of that". Some parents reported finding that practical support was required but not available: "All I needed was another pair of hands. I have no practical support, and I've looked for practical support and I couldn't get any". Parents also identified other supports that would have been beneficial but were not available, including: "I think psychological support could have been really, really helpful during the pregnancy [and] afterwards" and "mental support at that time was so difficult for me". This theme was described by the comparison group, similarly highlighting the reliance on family and other parents, but without the discussion of the additional difficulties of managing therapy.
Discussion
The findings from this study describe that having another child, when you have had a child diagnosed with autism, is a unique and multi-layered experience. We compared the themes identified from the focal group to a comparison group of parents, to better understand which elements of the experience were unique. Across the two groups, we were also able to identify similarities in the experience of parenting. Both groups reported the uncertainty that surrounds the unknowns throughout pregnancy and how the addition of a child creates change. They also both described the challenges in adjusting throughout this change-learning how to manage both physically and emotionally the new demands of another child, and balance these with existing demands. This experience of the demands of caring for later born children altering parents' interactions and capacity with firstborns mirrors previous research on parenting (e.g., Belsky et al., 1984;Dunn & Kendrick, 1980;Kreppner, 1988). Both groups also described the empowerment and confidence that can come from knowing how to support and advocate for their children, through the experience gained from parenting their first child, as well as the joy of children and what they add to their lives.
Whilst many experiences of parenting were similar between the two groups, the comparison between the groups identified the role of autism in an increased focus, concern, and hypervigilance to their child's development. Having a child diagnosed with autism appears to intensify some of the common experiences of parenting throughout the process of pregnancy and the accompanying anticipation and uncertainty; the process of adjusting and finding balance with the addition of a new child; and the process of moving forwards and managing uncertainty and challenges as a parent. The findings of the current study lend support to broader research in the field of parenting experiences of parents of a child diagnosed with autism (Depape & Lindsay, 2015;Hoogsteen & Woodgate, 2013;Navot et al., 2016;Nicholas et al., 2016;Woodgate et al., 2008), as well as work in the field of parenting experiences of parents of children with chronic health conditions (Ray, 2002;Rempel & Harrison, 2007), and genetic disorders such as Down Syndrome (Steffensen et al., 2022).
For parents of children on the autism spectrum, prior qualitative studies have identified similar themes that articulate the difficulties experienced by parents. A meta-synthesis of parents' experience of caring for an autistic child indicated overall themes related to diagnosis, but further identified core themes of family life adjustment, parental empowerment, and moving forward (Depape & Lindsay, 2015), similar to the theme of adjustment and the sub-theme of "finding balance" identified in this study, as well as the overall theme of adaptation and the sub-themes of "acceptance" and "empowered parenting". Likewise, the themes of the challenges of balancing demands, as well as the parental approach of supporting their autistic children through vigilant styles of parenting in order to be doing all they can, have previously been identified (Woodgate et al., 2008). The report of supports and services being required, but often insufficient to meet the needs, has also previously been documented for parents of autistic children (Nicholas et al., 2016;Woodgate et al., 2008). These similarities highlight what was also identified by the difference between the focal group and comparison group in this study: that the experience of having a child diagnosed with autism is influential in parents' experience of parenting, including with subsequent children.
When examining broader research, studies exploring parents of children with chronic health conditions have identified some similar themes to those found in the current study. Themes of "parenting plus" and "extraordinary parenting" have previously been identified, described as the additional demands on parents and the additional expectations parents place upon themselves when providing additional support and monitoring for a child in the context of additional needs and uncertainty (Ray, 2002;Rempel & Harrison, 2007), with parallels to the "uncertainty" and "vigilant parenting" identified in this study. Similarly, themes around parents' sense of a need to be present and focussed, similar to the "finding balance" theme identified here, have been identified in qualitative research examining parents of children with Down Syndrome (Steffensen et al., 2022). Recognising the parallels between the experiences of parents of children with chronic conditions or genetic disorders and parents of autistic children lends reinforcement for applying similar standards of care to support parents regardless of the unique nature of the child's condition. The findings of the current study extend prior research by identifying the ongoing impact of this experience for parents when parenting subsequent children.
Limitations and Future Research
A limitation of this research is that the parental experiences reported all came from mothers, despite active recruitment of fathers as well. Whilst the preference for this study was to explore the parental experience more broadly, despite follow-up with those fathers who did express an interest, participation did not eventuate, restricting our study to the maternal experience. Research has previously highlighted the difficulty in recruiting fathers to participate in paediatric research more broadly (Macfadyen et al., 2011). Further research in this space is needed to better understand the reasons why fathers are less likely to participate in research, in order to better address these issues such that research into parental experiences can more wholly reflect the lived experience of families.
Further, parents interviewed here were only a subgroup of parents with children on the autism spectrum. This research specifically recruited for parents who had had a child diagnosed with autism and went on to have another child. This precluded parents who have not had subsequent children after having a child receive a diagnosis of autism, as well as parents who have had subsequent children before receiving a diagnosis for an older child. However, given the similarity in the essence of the themes to those of parents with an autistic child more broadly, we might expect these similar experiences for parents with different family situations. Future research could employ broader inclusion criteria to further establish this.
In recruitment we attempted to sample broadly amongst parents; however, with a small sample (n = 20) the demographic breadth is likely not representative of the broader Australian population. More targeted explorations of this experience for families with more diverse ethnic backgrounds, as well as families from lower socio-economic family backgrounds, would be important to expand the scope of understanding from the current sample. Further targeted or incentivised recruitment to families to participate in research may assist in this. The current study focussed on families with an autistic child, in comparison to families with a child with no diagnosis or concern of a neurodevelopmental disorder. We do not know from the current sample the impact of having other physical or developmental conditions and how this would impact the parenting experience of having a subsequent child. Given the previously discussed similarities between parenting, for example, a child with a chronic health condition to parenting an autistic child, future research should aim to explore the potential similarities and differences between different parent groups in the experience of having subsequent children when a previous child has received a physical health or developmental diagnosis.
The findings of this study carry broad implications for researchers working with the parents of autistic children. Building a better understanding of the experience of parents in this situation is imperative for clinicians and researchers, in particular given the focus of an ongoing field of research conducted with this population. The findings from this study indicate that the development of supports could help empower families in this situation going forwards. In considering the support needs discussed by families, we can identify both the importance of and the requirement for different types of social support, with emotional, instrumental, informational, and appraisal supports (Heaney & Israel, 2008) all identified by families as needed during this time period. Many families reported that supports of this nature were obtained through their social networks, with family (importantly grandparents) and other parents (importantly other parents of autistic children) being relied upon to meet support needs rather than having access to supports through formal structures, as has been previously indicated (Prendeville & Kinsella, 2019). For those families whose social networks were less able to meet these needs, further supports are required, including: improved information and appraisal supports to provide clarity for parents around what is best practice for the monitoring and assessment of infant siblings of children diagnosed with autism, in order to provide more information about whether their child's development is typical and when they might need to intervene; and better systems of access to support through the emotional and instrumental challenges of this time period.
Further, with parent-mediated pre-emptive interventions gathering more evidence of efficacy within early infant development (from 9 months; e.g., Whitehouse et al., 2019, 2021), the critical role of parents' support in shaping their children's development is increasingly highlighted. These interventions provide an opportunity to support the challenges identified by parents throughout the experience of parenting an infant after having a child diagnosed with autism, in particular considering the integration of informational and appraisal supports through these interventions. Next steps in this space would ideally be to evaluate the efficacy of such a pre-emptive intervention within an earlier perinatal phase.
Author Contribution DC: collaborated with the design and executed the study, completed the analyses, and wrote the paper. AW and MM collaborated with the design of the study and writing of the final manuscript. KE: collaborated with the design of the study, analyses, and the writing of the final manuscript. All authors approved the final version of the manuscript for submission.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions. AW is supported by the Imogen Miranda Suleski award from the Telethon Kids Institute and by an Investigator Grant from the National Health and Medical Research Council (#1173896).
Declarations
Ethics approval This study was performed in line with the principles of the Declaration of Helsinki. Approval was granted from The University of Western Australia ethics committee reference number RA/4/20/4767. Written informed consent for the participants' involvement in the study was obtained from all participants.
Conflict of Interest
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-09-28T15:06:10.477Z | 2022-09-26T00:00:00.000 | {
"year": 2022,
"sha1": "5aac7b4adaedf1cf435d0ac94ea12ac31c1b8dd2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41252-022-00282-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "9d8f1823084187338b72f4873d76129a8e663b1e",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
232168658 | pes2o/s2orc | v3-fos-license | Identification by deuterium diffusion of a nitrogen-related deep donor preventing the p-type doping of ZnO
Deuterium diffusion is investigated in nitrogen-doped homoepitaxial ZnO layers. The samples were grown under slightly Zn-rich growth conditions by plasma-assisted molecular beam epitaxy on m-plane ZnO substrates and have a nitrogen content [N] varied up to 5x10^18 at.cm^-3 as measured by secondary ion mass spectrometry (SIMS). All were exposed to a radio-frequency deuterium plasma for 1 h at room temperature. Deuterium diffusion is observed in all epilayers, while its penetration depth decreases as the nitrogen concentration increases. This is strong evidence of a diffusion mechanism limited by the trapping of deuterium on a nitrogen-related trap. The SIMS profiles are analyzed using a two-trap model including a shallow trap, associated with a fast diffusion, and a deep trap, related to nitrogen. The capture radius of the nitrogen-related trap is determined to be 20 times smaller than the value expected for nitrogen-deuterium pairs formed by Coulombic attraction between D+ and nitrogen-related acceptors. The (N2)_O deep donor is proposed as the deep trapping site for deuterium and accounts well for the small capture radius and the observed photoluminescence quenching and recovery after deuteration of the ZnO:N epilayers. It is also found that this defect is by far the N-related defect with the highest concentration in the studied samples.
Zinc oxide (ZnO) is an attractive wide bandgap semiconductor with strong potential in the fields of electronics, optoelectronics and spintronics. The lack of reliable p-type doping in ZnO has so far hindered its development in some of those fields. While n-type conductivity is easily obtained in ZnO, reliable p-type conductivity of ZnO has proven difficult to achieve in a reproducible way, if it has ever been achieved. The most promising acceptor species in ZnO appears to be nitrogen, which was shown to give rise to two acceptor levels in ZnO that are fully ionized at room temperature [1]. The shallowest level has a binding energy of ~165 ± 40 meV and was first identified from donor-acceptor recombination in the photoluminescence spectra of ZnO:N [2]. This level is also detected in capacitance-voltage measurements [1], along with a deeper acceptor having a binding energy of 0.48 eV but a large capture cross section. However, the concentration of these acceptor centers only accounts for less than 1% of the total chemical concentration [N] in the ZnO:N films, and the epilayers are usually found to be fully or almost fully compensated. It should be noted that the most recent theoretical calculations predict that the substitutional N_O level forms a deep acceptor center with an energy level 1.3 eV or deeper above the valence band [3], while the deep acceptor model has been confirmed experimentally [4].
Deuterium/hydrogen diffusion has long proven to be a most helpful technique for elucidating the mechanisms that make doping inefficient and for explaining compensation mechanisms [5] [6]. In this study, deuterium diffusion is investigated in the much-debated context of p-type doping of ZnO.
Nitrogen-doped ZnO epilayers were grown by molecular beam epitaxy (MBE) on m-plane (10-10) nonpolar ZnO substrates after the growth of a thin (Zn,Mg)O buffer layer, the role of which is to limit the diffusion of donor impurities from the substrates [7]. Experimental conditions similar to those of Ref. [8] were used, which resulted in non-intentionally doped (n.i.d.) ZnO epilayers exhibiting an extremely low contamination by residual impurities and a net residual donor concentration as low as ~10^14 cm^-3 as measured by capacitance-voltage C(V) measurements, even without the use of a (Zn,Mg)O barrier layer. Three epilayers were grown and doped with different levels of nitrogen using a radiofrequency (r.f.) plasma cell fed with nitrogen gas. C(V) measurements of similar ZnO:N layers show that such epilayers remain n-type [1].
The samples were cut in two pieces using a wire saw, in order to keep a reference of the as-grown state. To introduce deuterium in the samples, they were exposed for 60 minutes to an r.f. plasma of pure D2, under 1 mbar and with 36 W net excitation power at room temperature (~300 K). They were placed in a remote position to avoid direct exposure to the energetic species of the plasma and the ensuing defects. Bare m-plane ZnO substrates were placed with the epilayers inside the plasma chamber during each deuteration. These reference samples were analyzed along with the deuterated ZnO:N layers to verify that the experimental conditions were rigorously the same from one deuteration to another.
Secondary ion mass spectrometry (SIMS) using a last-generation CAMECA IMS 7f instrument was performed to measure the concentration and depth distributions of deuterium and nitrogen atoms in the samples. Cs+ primary ions were accelerated at 15 keV. The depth resolution was established to be less than 3.6 nm/decade (depth over which the signal drops by a factor of 10). This value was extracted from the slope of the magnesium signal at the interface between the (Zn,Mg)O buffer layer and the ZnO:N epilayer and can be considered an upper limit for the depth resolution. Nitrogen was measured by selecting mass 30 instead of mass 14 because it offers a better detection limit for nitrogen atoms in ZnO (about 1×10^17 cm^-3) due to the formation of 14N16O secondary ions. Nitrogen and deuterium absolute concentrations were quantified with a 10% uncertainty using implanted standards analyzed in the same conditions. The sample characteristics are presented in Table I. Figure 1 presents the deuterium and nitrogen SIMS depth profiles of the deuterated ZnO layers. Note that for the n.i.d. sample A, the nitrogen concentration is not reported since it lies below the detection limit (<1×10^17 cm^-3). In the nitrogen-doped samples, a sharp peak for mass 30 is observed at 1.1 µm in sample B (respectively 1.6 µm in sample C). These peaks occur at the interface between the ZnO substrate and the (Zn,Mg)O buffer layer, thereby materializing the growth start. Note that these sharp peaks are not related to nitrogen, but rather to a silicon contamination of the substrate surface that led to an interference with 28SiH2 secondary ions at mass 30. Otherwise, step-like doping profiles for nitrogen are obtained in both samples B and C. Looking in detail at the sample C data, after the beginning of the growth (i), the mass 30 signal lies at the nitrogen detection limit. When the nitrogen plasma source is ignited (ii), nitrogen is detected as a ~6×10^17 cm^-3 concentration step. The next increase to 4.5×10^18 cm^-3 occurs when the nitrogen source shutter is opened (iii). Altogether, these profiles illustrate how well controlled the nitrogen doping can be in MBE-grown ZnO epilayers.
Regarding the deuterium concentration profiles, it should be recalled that all samples were exposed to the same deuteration protocol. First, note that deuterium diffusion was effectively achieved at room temperature in all epilayers, which had not been demonstrated before this work. Theoretical predictions [9] putting forward the high mobility of interstitial H in ZnO are thus confirmed by the present results.
The most important result provided by Fig. 1 is the following: the penetration depth of deuterium in the ZnO epilayers decreases as the nitrogen concentration increases (Fig. 1). The deuterium profiles are further analyzed in the framework of a diffusion model with two types of trapping sites. This two-trap model was initially developed for describing hydrogen diffusion in amorphous [10] and polycrystalline [11] silicon. In Fig. 2, the deuterium profiles can be subdivided into 3 regions: (i) the first 50 nm below the surface present high concentrations of deuterium (>10^20 cm^-3) due to its trapping on subsurface defects induced by the plasma process [12]. This region is not of great interest to analyze; (ii) at larger depths, we observe a fast diffusion regime for deuterium. Deuterium diffusion in undoped ZnO single crystals (see supplemental material Fig. S1(a)) is comparable to the fast diffusion regime seen in our ZnO:N samples. Interestingly, the infrared absorption line at 2466 cm^-1 was detected at 90 K (see supplemental material Fig. S1(b)). This value is close to the one found at 2470.3 cm^-1 at 8 K in Ref. [14]. These authors identified the interstitial anti-bonding (AB) site on oxygen atoms to be a shallow trapping site for hydrogen in the ZnO lattice, which leads us to suggest that the AB interstitial site is the shallow trapping site of our two-trap model. Fitting the deuterium profiles with the two-trap model yields a capture radius of ~3 Å for the nitrogen-related deep trap. Such a capture radius of ~3 Å appears surprisingly small. Indeed, in the case of a pure coulombic attraction between D+ and N-, leading to the formation of N-D pairs (D+ + N- → N-D), the capture radius is obtained from the equivalency between the Coulomb potential energy between charged particles of opposite signs (+q, -q) and the thermal energy [15]: R_C = q^2/(4πεkT). In this formula, q is the elementary charge, ε the dielectric constant, k the Boltzmann constant and T the temperature. This model is accurately verified for H captured by substitutional acceptors such as B in Si [16], Al in SiC [17] or boron in diamond [12]. However, with ε = 7×10^-11 F.m^-1 for ZnO [18], the equation yields R_C ≈ 70 Å at room temperature. This result is more than 20 times larger than the ~3 Å value assessed from the diffusion experiments. One could argue that a screening of the coulombic interaction in the presence of free carriers could reduce the capture radius. Following Ref. [19], more than 10^21 free electrons/cm^3 would be needed to account for ~3 Å. This is not consistent with the doping range of the investigated samples. We can therefore conclude that the mechanism of coulombic attraction between elementary charges (+q,-q) cannot explain the short capture radius of the nitrogen-related traps observed in this work.
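As a quick numerical cross-check of this estimate (not part of the original analysis), the short Python sketch below evaluates R_C = q^2/(4πεkT) with the dielectric constant quoted above; T = 300 K is assumed for room temperature.

```python
# Sketch (not from the paper): numerical check of the coulombic capture-radius
# estimate R_C = q^2 / (4*pi*eps*k_B*T) used to analyse N-D pair formation.
import math

q   = 1.602176634e-19   # elementary charge (C)
kB  = 1.380649e-23      # Boltzmann constant (J/K)
eps = 7e-11             # static dielectric constant of ZnO (F/m), value quoted in the text
T   = 300.0             # assumed room temperature (K)

Rc = q**2 / (4 * math.pi * eps * kB * T)          # capture radius in metres
print(f"Coulombic capture radius: {Rc*1e10:.0f} Angstrom")   # ~70 Angstrom

# The radius extracted from the diffusion fits is ~3 Angstrom, i.e. >20x smaller,
# which is why a neutral (deep-donor) trapping centre is invoked instead.
```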
While it seems reasonable to assume that deuterium is present under a D+ form, the quenching of the photoluminescence (PL) upon nitrogen doping indicates that nitrogen introduces in the ZnO epilayers a center which is strongly non-radiative. Finally, the most striking result concerns the deuteration of the nitrogen-doped epilayer in Fig. 3(c). It is remarkable that the deuterated epilayer C recovers the PL intensity measured in the undoped epilayer A. Note that there is no evidence of the DAP disappearance after deuteration, since the band tail of near-band-edge luminescence is stronger. In fact, the deuterium diffusion mainly results in the passivation of the non-radiative nitrogen center responsible for the PL intensity quenching.
In this final part, let us discuss the nature of nitrogen-related centers which could lead to small capture radii for deuterium. As shown in the previous diffusion analysis, small capture radii dictate that the deuterium trapping center is neutral. In an n-type semiconductor, this situation can only occur if the trapping center is a deep donor. Therefore, none of the reported acceptor states could explain the observed deuterium diffusion profiles because, in an n-type semiconductor, all acceptors are ionized by compensation. This holds for the 1.3 eV acceptor level of N substituting O [3], the 0.17 eV double acceptor level attributed to molecular N2 at a zinc site, (N2)Zn [21,22], and the 0.48 eV acceptor level [1].
First-neighbor nitrogen pairs at oxygen sites, N_O-N_O, have also been identified as deep acceptors, with binding energies slightly deeper than that of N_O [23].
On the contrary, the more tightly bound nitrogen pair (N2)O, i.e. molecular nitrogen at the oxygen site, is not only expected to be a likely N-related defect due to its low formation energy under the zinc-rich conditions required to grow 2D ZnO epitaxial layers along this orientation, but also to be a deep double donor [24,25]. Such a deep donor is expected to be neutral in the presence of shallower deuterium donors with an ionization energy of 35 meV [26]. This scenario is fully consistent with our diffusion results. The passivation of the (N2)O centers would involve the formation of ((N2)O, Dn) complexes, the atomic configuration of which remains to be studied theoretically. This scenario is also consistent with our optical results: the ZnO photoluminescence is quenched with nitrogen doping by a non-radiative center, presumably (N2)O. The PL intensity fully recovers after their passivation by deuterium, presumably as ((N2)O, Dn) complexes.
Finally, another key result is given by the change in diffusion mechanism occurring at almost identical deuterium and nitrogen concentrations on the SIMS profiles (Fig. 2). This reveals that the proposed (N2)O defect is by far the nitrogen-related defect with the highest concentration in the studied samples and provides an answer to the long-standing search for the dominant incorporation mechanism of nitrogen in ZnO:N grown under zinc-rich conditions.
In conclusion, deuterium diffusion carried out on m-plane homoepitaxial ZnO:N samples grown by MBE in slightly Zn-rich conditions and using an r.f. plasma nitrogen source reveals that a large amount of the nitrogen incorporated in the epilayers is introduced as nitrogen-related deep donor defects. The (N2)O defect, with a rather low formation energy, is a credible candidate for such a donor center. Its concentration has to be drastically reduced in favour of the nitrogen-related acceptors to achieve reliable p-type ZnO, which might be done by promoting O-rich rather than the usual Zn-rich growth conditions.
Supplementary materials
See supplementary materials for FTIR absorption experiments performed on undoped ZnO single crystals after deuteration. | 2021-03-10T04:32:31.371Z | 2021-03-09T00:00:00.000 | {
"year": 2021,
"sha1": "2989aa0521062c2c05f86b4fc1c87d58840c6f6a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2103.05498",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2989aa0521062c2c05f86b4fc1c87d58840c6f6a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
138881331 | pes2o/s2orc | v3-fos-license | Experimental study of electrical discharge machine (die sinking) on stainless steel 316L using design of experiment
The objective of this study is to investigate the influence of Electrical Discharge Machining (EDM) input parameters on the characteristics of the EDM process. A combination of two advanced materials, stainless steel 316L as workpiece and copper impregnated graphite as electrode, has been selected in this study. The copper impregnated graphite is considered a hybrid material for the electrode and is increasingly used in the tool and mould making industry. The study was conducted using a two-level full factorial method in design of experiments. Analysis of variance (ANOVA) and mathematical models were developed for material removal rate (MRR), electrode wear rate (EWR), surface roughness (SR) and dimensional accuracy (DA). A first-order model is sufficient to fit the dimensional accuracy (linear) model, whereas second-order models are required to fit the MRR, EWR and SR (quadratic) models. The results show that the peak current was the most significant factor for all response variables. Based on the confirmation runs, all results have less than 15% error, indicating that the models developed for MRR, SR, EWR and dimensional accuracy are reasonably accurate. © 2015 The Authors. Published by Elsevier B.V. Selection and Peer-review under responsibility of the Scientific Committee of MIMEC2015.
Introduction
EDM is a non-conventional machining process in which material from the workpiece is eroded by a series of discharge sparks between the workpiece and the tool electrode. During the EDM process, the tool and the workpiece are submerged in dielectric fluid and separated by a small gap. EDM technology is well developed and widely used in applications such as die and mould machining and micromachining. Very hard and brittle materials can be machined easily to the desired form. The process removes electrically conductive materials by means of rapid, repetitive spark discharges from a pulsating direct-current power supply, with dielectric flow between the workpiece and the electrode. There are many parameters that may affect the machining results, such as peak current, servo voltage, pulse on time, pulse off time, jet flushing, etc.
Hindus et al. reported that for MRR the most significant factor was pulse on time, followed by peak current. Higher MRR was obtained at 15 A and 30 V. Tool wear increases with increasing current and decreasing voltage. For TWR, the most significant factor was peak current followed by pulse on time [1]. In the EDM process, it is difficult to find a single optimal combination of process parameters for the performance parameters, as the process parameters influence them differently. Thus, there is a need for a multi-objective method to derive solutions to this problem [2]. Nikhil Kumar et al. conducted a comparative study of MRR in EDM using copper and graphite electrodes. From the results, it was found that the graphite electrode is more favourable than the copper electrode for machining the steel workpiece in terms of MRR and tool wear rate [3]. For the copper graphite electrode (EDM-C3), it can be concluded that the significant parameters that affect the MRR are the interaction between peak current and pulse on time, which is the same as for the EDM-3 electrode. Another significant factor that affects the MRR is the interaction between the servo voltage and pulse on time [4]. Subramaniam et al. conducted a study on the effect of electrode materials in EDM of 316L and 17-4 PH stainless steels. They found that the copper electrode gives the better MRR for both materials tested, while the copper tungsten electrode gives low electrode wear [6]. Mehul Manoharan et al. reported that for high discharge current, copper electrodes show the highest MRR, whereas brass gives good surface finish and normal MRR. The MRR could be improved by carrying out research on electrode design, process parameters, EDM variations, powder-mixed dielectrics and electrically insulated electrodes. Special attention must be paid to surface integrity since EDM is a thermal method. The basis of controlling and improving MRR mostly relies on empirical methods. The particles could then form an electrically conducting path between the electrode and work material, causing unwanted discharges, which become arcs and reduce the sparking efficiency [7]. Considering the above, this study investigates the influence of EDM input parameters on the characteristics of the EDM process. A combination of two advanced materials, stainless steel 316L as workpiece and copper impregnated graphite as electrode, has been selected in this study.
Research methodology
The experiments in this study were performed on a Sodick AM3L die sinking EDM machine. The experimental work began with material identification, selection of machining response parameters and selection of machining process parameters. Copper impregnated graphite (EDM-C3) was chosen for the electrode and stainless steel 316L as the workpiece material. The EDM die sinking parameters selected were peak current, servo voltage, pulse ON time and pulse OFF time. Table 1 shows the design matrix based on the 2^4 full factorial experiments. The depth of machining was set to 3 mm and the machining time was recorded. The responses studied are material removal rate (MRR), electrode wear rate (EWR), surface roughness (SR) and dimensional accuracy (DA). Experiments were conducted based on a design of experiments (DOE) approach. Design Expert software version 9 was used to determine the main effects of the process parameters. Analysis of variance (ANOVA), full factorial experiments, response surface methodology (RSM) and the central composite design (CCD) method were employed in order to identify the significant factors as well as to develop mathematical models of the various EDM die sinking responses.
Results and analysis
The full factorial experimental design of 2^4 experiments with 4 centre points was conducted. The Design Expert version 9 software was employed to perform all the data analysis. Since the ANOVA showed that curvature is significant, response surface methodology with a central composite design (CCD) is required to fit the second-order model. The face-centered CCD involves 28 experimental observations, corresponds to an α value of 1, and includes four centre points.
The responses MRR, EWR, SR and DA were evaluated by the F-test of ANOVA to determine the significant effects. In the analysis done by the Design Expert software, a probability value (Prob > F) of less than 0.05 indicates that a factor is significant for the response parameter. The fit summary revealed that the model was statistically significant and valid for quadratic regression. Regression models are statistical models used for studying how changes in one or more variables change the value of another variable. The models are used to determine the optimum setting for the best response value. Predicted equations can be developed using Equations (1) and (2). The first-order model is y = β0 + Σ βi·xi + ε (1). If there is curvature in the system, then a polynomial of higher degree must be used, such as the second-order model y = β0 + Σ βi·xi + Σ βii·xi^2 + ΣΣ βij·xi·xj + ε (2).
Regression models in terms of coded values are shown in Eqs. 3 to 6. A quadratic transformation recommended by the Box-Cox plot needs to be performed specifically for MRR and EWR. This is to ensure that the quadratic models are statistically significant.
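For illustration only (the original analysis was carried out in the Design Expert software, and the response values below are random placeholders rather than measured data), the following Python sketch builds the 28-run face-centered CCD in coded units and fits a generic second-order model of the form of Eq. (2) by least squares:

```python
# Illustrative sketch only: a face-centred CCD design matrix in coded units (-1, 0, +1)
# for 4 factors (peak current, servo voltage, pulse ON, pulse OFF) and an ordinary
# least-squares fit of a second-order model. The response vector y is a placeholder.
import itertools
import numpy as np

# 2^4 factorial points, 2*4 face-centred axial points, 4 centre points -> 28 runs
factorial = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)
axial = np.vstack([r * np.eye(4) for r in (-1.0, 1.0)])
center = np.zeros((4, 4))
X = np.vstack([factorial, axial, center])          # 28 x 4 design matrix

def quadratic_terms(x):
    """Expand coded factors into [1, linear, squared, two-factor interaction] terms."""
    lin = list(x)
    sq = [v * v for v in x]
    inter = [x[i] * x[j] for i in range(4) for j in range(i + 1, 4)]
    return np.array([1.0] + lin + sq + inter)

A = np.array([quadratic_terms(row) for row in X])  # 28 x 15 model matrix
y = np.random.default_rng(0).normal(size=28)       # placeholder response values
beta, *_ = np.linalg.lstsq(A, y, rcond=None)       # fitted second-order coefficients
print(beta.round(3))
```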
Confirmation Run
Confirmation runs were conducted in order to evaluate the margin of error between the theoretical predictions and the confirmation test results. The objective of the confirmation runs is to evaluate whether the predicted optimum parameters are within the allowable range. The margin of error should be less than 15%.
Single Objective Confirmation Run
Single objective confirmation runs for each response were performed and the final results obtained are exhibited in Tables 2, 3, 4 and 5, respectively. The individual response desirability must be defined according to the ultimate goal: MRR is to be maximized, while EWR, SR and dimensional accuracy are to be minimized.
Multiple Objective Confirmation Run
A multiple objective confirmation run was performed and the final result obtained is exhibited in Table 6. Based on the confirmation runs, all the results show errors of less than 15%, indicating that the developed models for MRR, SR, EWR and dimensional accuracy are reasonably accurate. All the actual values are within the 95% prediction interval (PI). Therefore, there is no evidence against the model developed for each response.
Conclusion
The feasibility of the EDM process for stainless steel 316L using a copper impregnated graphite electrode has been proven. Of the parameters selected for this experiment, peak current, pulse on time and pulse off time are significant factors. The servo voltage does not have a significant effect on the machining responses in RSM.
The mathematical models developed to predict the various machining characteristics are statistically valid. The quadratic models obtained were significant for MRR, EWR and SR. However, the dimensional accuracy response was only fitted with a first-order model, or linear equation. The margins of error obtained for all responses studied in this research work were acceptable, as the results were within the prediction interval (PI) and below 15%.
Recommendation
The depth of cut should be set to at least 5 mm instead of 3 mm in order to obtain more precise data, especially for the electrode weight. Workpiece surface integrity such as the recast layer, heat affected zone (HAZ), microstructure and micro cracks should also be investigated for a better understanding of the EDM phenomenon. In order to obtain optimum settings for EWR and SR, it is suggested that a proper screening of the range of parameters should be carried out. Other factors influencing the EDM machining responses, such as jet flushing, dielectric fluid, etc., should be investigated. | 2019-04-29T13:07:46.888Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "dfe897eb0bac51ace1f25f23213a05c3e63f78ea",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.promfg.2015.07.026",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0a3e645abd5d07e032e167ef8ec51ebd98f96450",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
233862948 | pes2o/s2orc | v3-fos-license | RNA methylation-mediated LINC01559 suppresses colorectal cancer progression by regulating the miR-106b-5p/PTEN axis
Long noncoding RNAs (lncRNAs) regulate multiple biological effects in cancers. Recently, RNA methylation has been found to modify not only coding RNAs but also some noncoding RNAs. How RNA methylation acts on lncRNAs to affect colorectal cancer (CRC) progression remains elusive. The expression of LINC01559 was explored through RNA sequencing, quantitative real-time PCR (qRT-PCR) and in situ hybridization (ISH). A preliminary exploration of its function was performed using Western blotting (WB) and immunohistochemistry (IHC). Functional experiments in vitro and in vivo were conducted to explore the biological functions of LINC01559 in CRC. The LINC01559/miR-106b-5p/PTEN axis was verified through fluorescence in situ hybridization (FISH), luciferase assays, and rescue experiments. RIP-sequencing, m6A RNA immunoprecipitation (MeRIP) assays and bioinformatic analysis were conducted to determine the upstream mechanism of LINC01559. The results showed that LINC01559 was downregulated in CRC compared with normal controls. Lower expression of LINC01559 in CRC patients predicted a poor prognosis. In addition, PTEN was found to be positively correlated with LINC01559, and miR-106b-5p could be the link between LINC01559 and PTEN. Then, silencing LINC01559 restored the malignant phenotype of CRC cells, while cotransfection of a miR-106b-5p inhibitor neutralized this effect. Mechanistically, we found abundant m6A modification sites on LINC01559. Then, we uncovered these sites as potential targets of METTL3 through experiments in vivo. The results revealed a negative functional regulation of the LINC01559/miR-106b-5p/PTEN axis in CRC progression and explored a new mechanism of METTL3-mediated m6A modification on LINC01559. These results elucidate a novel potential therapeutic target for CRC treatment.
(Invitrogen, ThermoFisher Scientific, Carlsbad, CA, USA) was used for transient siRNA and overexpression vector transfection.
Western blot analysis (WB)
Total proteins were prepared from CRC cells using RIPA buffer (50 mM Tris-HCl, 150 mM NaCl, 1 mM EDTA, 0.1% SDS, 1% Triton X-100, 0.1% sodium deoxycholate) with a proteinase inhibitor cocktail (Solarbio, Beijing, China). The lysates were centrifuged at 12,000 rpm for 15 min at 4°C and the protein concentration was measured with a BCA kit (Beyotime Biotechnology, Beijing, China). Equal quantities of protein were electrophoresed through a 10% sodium dodecyl sulfate/polyacrylamide gel and transferred to PVDF membranes (Millipore, MA, USA). The PVDF membranes were blocked with 5% skim milk powder dissolved in TBST at room temperature for 1 h, and then incubated with primary antibodies at 4°C overnight.
The primary antibodies included anti-N-cadherin (No. 66219-1-Ig) and anti-E-cadherin from Cell Signaling Technology (MA, USA). Secondary antibodies were hybridized with the membranes at room temperature for 1 h. The membranes were then visualized with a chemiluminescence kit (Absin, Shanghai, China). WB bands were quantified with Image-Pro Plus 6.0 and analyzed using Student's t test.
Wound healing assay
Cells were cultured in standard conditions until 80-90% confluence and treated with mitomycin C (10 µg/ml) during the wound healing assay. The cell migration was assessed by measuring the movement of cells into the acellular area created by a sterile insert. The wound closure was observed after 48 h.
Transwell assays
To assess the migration and invasiveness of HCT116 and SW480 cells, we used Transwell chambers (Corning, NY, USA). Briefly, 3×10^5 cells in serum-free medium were placed in the upper chamber. Dulbecco's modified Eagle's medium (500 µl) supplemented with 10% fetal bovine serum was added to the lower chamber. After incubation in a humidified atmosphere containing 5% CO2 at 37°C for 72 h, Giemsa staining of the cells that had migrated or invaded into the lower chamber was performed. Stained cells were photographed under an IX53 inverted microscope (NIKON, Tokyo, Japan), and the Image-Pro Plus software program (Media Cybernetics, Rockville, MD) was used to count the cells.
Cell proliferation assay
Cells were seeded in 96-well plates at 0.8-1×10^3 per well. Cell proliferation was evaluated using the Cell Counting Kit-8 (Dojin Laboratories, Tokyo, Japan) according to the manufacturer's instructions. Cell samples were collected at 24 h, 48 h, 72 h and 96 h. Then 10 µl of CCK-8 solution was added to the culture medium and incubated for 2 h. The absorbance at a 450 nm wavelength was determined with a reference wavelength of 570 nm.
Following staining with Hoechst 33342 at room temperature for 30 min in darkness and one or two washes with PBS, the cells were observed using a Micro system (ImageXpress, Downingtown, PA, U.S.A.). Five fields were randomly selected and photographed, and the number of EdU-positive cells was calculated.
Tube formation assay
Twenty-four-well plates were coated with 60 µl Matrigel (BD Biosciences, USA) at 37°C for 1 h for gel formation. A total of 1×10^5 stably transfected cells in medium containing 10% FBS were plated onto the pre-solidified Matrigel, where they began to form capillary tubes and networks once seeded on the Matrigel. Six hours after incubation, the plates were observed under a microscope and photographed (Nikon, Japan).
The numbers of branching points generating at least three tubules were counted.
The statistical analysis of survival data
The DFS and follow-up data (current status, survival) were obtained through telephone follow-up surveys, periodic re-examinations and readmissions. CRC recurrence or death was regarded as the endpoint event. DFS (in days) was calculated from the date of diagnosis to the date of the endpoint event. The expression of LINC01559 was detected by qRT-PCR assays in 41 pairs of CRC tissues and adjacent normal tissues. High LINC01559 was defined as expression above the median, and low LINC01559 as expression below the median. The outcome variable was coded 1 for CRC recurrence or death due to CRC, and 0 for censoring (loss to follow-up, no endpoint event, or other).
Survival data were obtained by the Kaplan-Meier method, with significance assessed by the log-rank test in SPSS.
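As a schematic of this survival analysis (the actual analysis was performed in SPSS; the expression values, DFS times and event indicators below are simulated placeholders, and the lifelines package is used instead), a median-split Kaplan-Meier comparison with a log-rank test can be sketched as follows:

```python
# Sketch (placeholder data): median-split of LINC01559 expression followed by
# Kaplan-Meier estimation and a log-rank test, mirroring the analysis described above.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
expression = rng.lognormal(size=41)            # qRT-PCR expression (hypothetical)
dfs_days = rng.integers(100, 2000, size=41)    # disease-free survival in days
event = rng.integers(0, 2, size=41)            # 1 = recurrence/death, 0 = censored

high = expression > np.median(expression)      # dichotomise at the median

kmf = KaplanMeierFitter()
for label, mask in (("high LINC01559", high), ("low LINC01559", ~high)):
    kmf.fit(dfs_days[mask], event_observed=event[mask], label=label)
    print(label, "median DFS:", kmf.median_survival_time_)

res = logrank_test(dfs_days[high], dfs_days[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank p-value:", res.p_value)
```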
Legends to Supplemental Material Supplemental Methods and Materials
Additional file 1: Figure S1. Key signaling pathways in CRC from KEGG.
Additional file 16: Figure S11. A qRT-PCR assay was used to estimate the overexpression efficiency of OV-METTL3 in HCT116 and SW480 cells and the si-METTL3 knockdown efficiency in SW480 cells. | 2021-05-07T00:04:19.353Z | 2021-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "9a5e0257321b9518d0ccf29380305d8f25ec4826",
"oa_license": "CCBY",
"oa_url": "https://www.ijbs.com/v18p3048.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc4bd931e926af10942cafb4d6896d8ad189a89e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
4209363 | pes2o/s2orc | v3-fos-license | Discussing Opioid Risks With Patients to Reduce Misuse and Abuse: Evidence From 2 Surveys
We used 2 population-representative surveys to evaluate the recommendation from recent clinical guidelines for prescribing opioid analgesics that physicians discuss the risk of long-term use disorders with patients. In nationally representative data we observed a 60% lower rate of self-reported pill saving, after adjustment for covariates, among respondents who say they talked with their physicians about the risks of prescription painkiller addiction (67% lower rate without adjustment). These findings suggest patient education efforts, as currently practiced in the United States, may have positive behavioral consequences that could lower the risks of prescription painkiller abuse. Future research should test these associations under controlled settings.
INTRODUCTION
In its recently published guidelines for prescribing opioids for chronic pain, the Centers for Disease Control and Prevention (CDC) recommend that clinicians discuss the known risks and benefits of opioid therapy with their patients. 1 Recommended topics of discussion include the risks of life-long use disorders and the risk posed to family members if prescription painkillers are intentionally or unintentionally shared. Although communication-based techniques have been shown to improve patient behaviors, 2 the CDC guidelines note that no evidence currently exists to evaluate the effectiveness of patient education or any other risk-mitigation strategies for prescription opioids. Given the increasing demands placed on physicians in primary care, evidence will be essential to helping physicians prioritize as they put these guidelines into practice. We evaluated 2 population-representative surveys conducted in 2015 and found preliminary evidence that suggests patient education efforts as currently practiced may be having positive behavioral consequences.
METHODS
We analyzed the results of 2 random-digit-dial telephone surveys of adults aged 18 years and older conducted by the Harvard T. H. Chan School of Public Health and the Boston Globe: a national poll fielded April 15-19, 2015 (an 8% response rate yielded 1,033 completed interviews) and a Massachusetts poll fielded April 7-16, 2015 (a 15% response rate yielded 810 completed interviews). Both surveys included cellular and landline telephones and were poststratified to US Census benchmarks of age, sex, education, race, marital status, and geographic area to be representative of underlying populations (the United States and Massachusetts, respectively). Samples were restricted to respondents who reported that they had been prescribed strong prescription painkillers within the last 2 years, bringing final sample sizes to 216 in the national survey, and 169 in the Massachusetts sample. We provide the survey instrument in the Supplemental Appendix, available at http://www.annfammed.org/content/14/6/575/suppl/DC1. We fit multivariable logistic regressions to estimate the association between reporting having talked with a physician about the risk of prescription painkiller addiction ("discussed risk of addiction") and reporting having saved prescription painkillers for personal medical use or to share with family members ("saved pills for later"). Retention of unused pills has been found to be an important source of opioid diversion and misuse. [3][4][5] Control variables included sex, age, race, income, metro status, and geographic region. Models also controlled for whether the patient knew someone who abused prescription painkillers in the past 5 years, a marker associated with higher risk of abuse in the patient, [6] to indirectly reduce bias that may occur from physicians targeting high-risk patients for discussion about addiction. All analyses were conducted in R [7] using the survey (version 3.31) [8] and effects (version 3.1-1) [9] packages. (Model outputs can be viewed in Table A1 of the Supplemental Appendix.) Unadjusted results were also included for comparison. Model parameters were used to plot estimated response levels and 95% confidence intervals for comparison categories when holding all covariates constant at sample means. To test the sensitivity of our results to model specification, we systematically fit alternate specifications, including models specified through backward selection, and our conclusions were unaffected (data not shown).
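For readers who want a schematic of this type of adjusted comparison, the sketch below fits an unweighted logistic regression on simulated placeholder data and evaluates the predicted probability of saving pills with covariates held at sample means while toggling the discussion indicator; it is a simplified Python analogue, not the authors' weighted R analysis, and all variable names are hypothetical.

```python
# Simplified, unweighted analogue of the adjusted-probability comparison described
# above (the published analysis used R's survey/effects packages with poststratification
# weights). Data are simulated placeholders, not the survey microdata.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 216
df = pd.DataFrame({
    "saved_pills": rng.integers(0, 2, n),       # outcome: saved painkillers for later
    "discussed_risk": rng.integers(0, 2, n),    # talked with physician about addiction
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "knows_abuser": rng.integers(0, 2, n),      # knows someone who abused painkillers
})

model = smf.logit("saved_pills ~ discussed_risk + female + age + knows_abuser",
                  data=df).fit(disp=False)

# Predicted probability of saving pills with covariates held at sample means,
# toggling only the 'discussed_risk' indicator.
at_means = df.mean().to_frame().T
for flag in (0, 1):
    at_means["discussed_risk"] = flag
    p = float(model.predict(at_means)[0])
    print(f"discussed_risk={flag}: predicted P(saved pills) = {p:.3f}")
```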
RESULTS
National and Massachusetts samples were similar along sociodemographic characteristics, with the exception of metropolitan status, where the Massachusetts sample had higher proportions of respondents living in areas with suburban census designations. Residents in Massachusetts were far less likely than the broader US population to recall talking about the risk of prescription drug addiction (36% recalled such a conversation compared with 61% nationally) and were more likely to say they saved pills for later use (30% said they saved pills compared with 17% nationally, Table 1).
National respondents who reported discussing the risks of addiction were less likely to report saving prescription painkillers for later. Holding other covariates at national sample means, we estimated that 20% of respondents who did not remember discussing addiction risk reported saving pills for later, compared with only 8% who did remember discussing addiction risk, for a rate that is 60% lower. Without adjustment for covariates we observed an even larger difference of 27% compared with 9%, respectively ( Figure 1).
These results were reproduced in the Massachusetts sample. Holding other covariates at state sample means, we estimated that 26% of respondents who did not discuss the risk of addiction reported saving pills for later, compared with only 12% among those who did. Without adjustment for covariates we observed a difference of 36% compared with 19%, respectively ( Figure 1).
DISCUSSION
Even though patient education efforts figure prominently in the CDC's latest recommendations around opioid prescribing for chronic pain, evidence is sorely needed to support their use and identify best practices. Our findings suggest that, as currently practiced, physician efforts to talk with their patients about the addictiveness of prescription painkillers, or other topics addressed in the course of these discussions, may be yielding positive behavioral consequences that reduce opportunities for painkiller abuse. Our data are cross-sectional, limited in sample size, and subject to many of the shortcomings of survey data. The associations that we observe do not prove a causal relationship and could be explained by uncontrolled differences between patients who did and did not recall discussing the risk of addiction with their physicians. These findings, however, offer a first look at evidence to support a common recommendation from opioid prescribing guidelines for which no evidence currently exists and suggest that future exploration into the effectiveness of physician-patient communication on the risks of opioids may be fruitful. Future research should use controlled settings to test the effectiveness of a discussion about addiction risk and related safety measures in promoting appropriate use, storage, and disposal of prescription painkillers. | 2018-04-03T05:15:57.950Z | 2016-11-01T00:00:00.000 | {
"year": 2016,
"sha1": "d47326b0f492e123e4eacfdc1745bbebc4cd3f1e",
"oa_license": null,
"oa_url": "https://www.annfammed.org/content/annalsfm/14/6/575.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "88f5f9e81f233dca42d2da4e7da593fc8ea794be",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218597663 | pes2o/s2orc | v3-fos-license | Imaging Conductive Objects Through Metal Enclosures Using ELF/VLF Magnetic Fields
Imaging through conductive media is a pervasive problem in medical, industrial, and security applications. Several potential modalities such as X-ray, exotic particle beams, and related high resolution techniques have been employed in the past. However, the difficulty of production and safety of these technologies is a concern in practice. Of particular interest in this work are extremely and very low frequency (ELF/VLF, f<30 kHz) radio signals. We consider the feasibility of using ELF/VLF signals for detecting and imaging objects that are hidden inside thin-shelled conductive enclosures. It is shown that the hidden perfect electrical conductor (PEC) object partially blocks the incident magnetic field and results in a magnetic-field depletion that can be used to detect the object. Next, using FEKO simulations, a parametric study of through-conductor detection using a magnetic dipole is considered for a cubic aluminum shield. It is shown that signals above 1 kHz can be used to evaluate the outer shield properties, while signals at frequencies below 200 Hz can be used to effectively discern shapes of hidden objects by observing the magnetic distortion within 1 cm of the outer shield. To validate the results of theory and simulations, experiments matching the simulation conditions are conducted with a cubic aluminum container and hidden aluminum block. The experiments clearly demonstrate the presence of the magnetic field depletion as predicted by theory and simulations. The results of this work suggest that near-field ELF/VLF magnetic induction is an effective method for imaging through realistic metallic shields.
I. INTRODUCTION
ELF/VLF signals (f<30 kHz) have been widely used over several decades for radio waves, geophysical prospecting, submarine communications, and upper atmospheric remote sensing [1]- [5]. Background ELF/VLF signals have been also employed for passive imaging applications due to the global presence of power line radiation, Navy transmitter signals, background lightning radiation (sferics), and cosmic sources [5]. The primary reason is that ELF/VLF signals can penetrate conductive barriers (ground, seawater, metal shields, etc.) that are otherwise inaccessible for higher frequency radiation [6]. In this work, we investigate the utility of ELF/VLF signals for detecting metal objects that are obscured by highly conductive shields. The motivation is to investigate the possibility of imaging through highly conductive media using ELF/VLF magnetic induction tomography (MIT) [7] or similar methods. From a practical point of view, the analysis is relevant for non-destructive evaluation of conductive objects inside metal containers for industrial and security applications.
MIT has recently garnered considerable attention as a novel method for non-destructive evaluation (NDE) in industrial and medical applications [8]- [10]. The typical methodology for inspecting conductive structures surrounds the target of interest with an array of several inductive coils. One of the coils is excited at frequencies in the range of 5-500 kHz [11] while the remaining coils are treated as receivers. Based on the receiver measurements, image reconstruction can be performed via appropriate inversion algorithms [12], [13]. MIT has proven to be quite successful for inspecting metallic structures for deformities when the surrounding medium is air or mildly conductive media such as saline (σ ≈ 1 S/m) [7]. Although commonly used MIT frequencies contain the ELF/VLF band, previous work has not considered inspection of metallic objects that are surrounded by a highly conductive shield (10^5 S/m < σ < 10^8 S/m). As such, the primary objective of this work is to consider the efficacy of ELF/VLF magnetic fields for the detection of shielded metallic objects.
Section II considers analytical expressions and approximations for a spherical perfect electrical conductor (PEC) on the internal surface of a conductive shell when illuminated by a low frequency magnetic field. In particular, it is shown that the hidden PEC object results in a magnetic-field depletion region that carries information about the object. Section III utilizes FEKO simulations to parametrically evaluate the impact of frequency, measurement distance, and object shape on ELF/VLF through-shield imaging. Section IV then discusses experimental results with an aluminum block that is placed on the internal surface of an aluminum container. It is shown that the experimental results clearly display the magnetic-field depletion that is consistent with theory and simulations. Section V provides a summary of the results with implications for future work.
II. NEAR-FIELD MAGNETIC DISTORTION THEORY
Modeling near-field scattering of ELF/VLF signals by shielded objects is analytically tractable for only a few selected geometries. However, the general principles can be understood by analyzing magnetic field distortions due to spherical objects and thin-shelled spherical shields. Specifically, we consider the problem of a thin spherical shield of outer radius a, shield thickness d and conductivity σ. The center of the spherical shield is located at the origin. A spherical PEC object of radius R is assumed to be inside the shield with its center located at r_0 as shown in Fig. 1. The source is assumed to be a uniform z-directed magnetic field phasor H_0 = H_0 ẑ with angular frequency ω. Additionally, the fields are assumed to be measured on a plane at a distance δr that is displaced along the direction that is normal to the external sphere and along the incident magnetic field direction. To examine the relative effects of the shield and the PEC object, the total magnetic field is decomposed into H = H_S + H_P. Here, H_S corresponds to the total magnetic field in the presence of an empty spherical shield. The quantity H_P includes the additional distortion to the magnetic field due to the hidden PEC sphere.
Determining a closed form expression for H_S in the near-field limit is fruitful for the present analysis. The general abstract problem of scattering of electromagnetic plane waves by concentric spherical shells of arbitrary material properties has been studied by numerous authors for more than a century [14], [15]. However, the analysis can be considerably simplified in the long-wave limit a ≪ λ, where λ represents the free-space wavelength. It is worth noting that plane-wave analysis is justified since it is effectively equivalent to a homogeneous magnetic field in the low frequency limit where a/λ ≪ 1. Specifically, it can be shown that in the free-space region inside the spherical shell, 0 ≤ r < a − d, the magnetic field is given by H = (H_0/SE_H) ẑ. The quantity SE_H is the magnetic shielding effectiveness, for which a closed form expression is given in [14]. In contrast, the magnetic field in the free-space exterior region, r > a, is the sum of the incident field and that of an ideal magnetic dipole centered at the origin with magnetic moment m_S = −2πa^3 A H_0 ẑ [16], where the quantity A = jωσμ_0 ad/(jωσμ_0 ad + 3). It can be seen that for high conductivity shields, σ → ∞, the quantity A → 1 and the expression reverts to the well-known solution of a PEC sphere immersed in a uniform magnetic field [16]. The critical importance of frequency ω is apparent since sufficiently low frequencies offset high conductivities and can be used to more easily penetrate the thin shield and detect hidden metal objects. Thus, examining the frequency dependence of the distorted magnetic field is a useful method for separating the shield from potential hidden objects [17].
In principle the quantity H_P can be determined exactly by representing the solution via an infinite summation expansion of vector spherical harmonics [18]. However, the general expression for a non-concentric PEC sphere is sufficiently complicated that useful abstract features can be overlooked. We thus invoke heuristic approximations to more easily include the effect of the hidden PEC sphere. In the interior of the shield, the magnetic field H_S is assumed to be approximately uniform, which is well justified by previous work [16]. Additionally, the multiple reflections between the internal shield walls and the PEC object are also neglected, which is similar to the Born approximation. In invoking these approximations, the solution to the problem of a PEC sphere immersed in a uniform magnetic field H_S can be utilized. The magnetic field will induce surface currents on the PEC sphere that can be equivalently modeled as an induced magnetic dipole m_P = −2πR^3 H_S. The total field H at the receiver is thus approximated as the superposition of the fields from the dipole m_S, the dipole m_P, and the incident field H_0. Assuming the impact of an empty box or shell can be calibrated (which is a reasonable assumption for several inspection applications), the gain can be defined as G = |H_S·ẑ|/|H·ẑ|, which is the ratio of the normal component of the field in the absence of the hidden object to the normal component of the field with the object present.
The gain G for the z-component (the same as the primary field component) of the magnetic field is shown in Fig. 2 for a = 0.74 m, d = 2.7 mm, R = 0.1 m, σ = 3.53×10^7 S/m, and r_0 = (0, 0, −0.6417 m). The measurement plane is located at r = (x, y, −0.7494 m). The aforementioned parameters are selected in accordance with the experimental setup that is presented in Section IV. As shown in Fig. 2, the gain shows a deviation from the empty shield case. Specifically, a magnetic-field depletion (or equivalently an enhancement of the gain G) is clearly visible at 50 Hz and to a lesser extent at 200 Hz. At 1 kHz, A ≈ 1 and the outer shield itself acts effectively like a PEC sphere and completely blocks the hidden PEC object, which shows the utility of the lower frequency signals for penetrating a conductor. The presence of the increased gain at lower frequencies is simply because the induced magnetic dipole moment m_P given above is anti-parallel to the shielded magnetic field H_S. Thus, the additional fields due to the PEC object will be anti-parallel to the incident field when measured along the axis of the induced dipole moment. Since the frequencies required to penetrate conductors are in the ELF/VLF band, the wavelengths are always in the kilometer range. As such, the physics of the problem is always in the quasi-static (or near-field) regime and every object can be considered electrically small. Thus, the frequency dependence primarily enters through the skin-depth phenomenon (for penetration) but is not strongly related to object size. The frequency dependence does couple to geometry when finite conductivity/permeability objects are considered. However, in the limit of PEC objects, frequency does not show up in the equations at all. This key feature can thus be leveraged for detecting highly conductive objects that are otherwise obscured by a thin-shell shield.
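To make the induced-dipole mechanism concrete, the short Python sketch below (not part of the original paper) superposes a uniform incident field with the field of the dipole induced on a PEC sphere; the standard PEC-sphere moment m = −2πR^3 H_0 is assumed and the shell is omitted, so this is only an illustration of the depletion mechanism and its rapid falloff with distance rather than the full shell-plus-object model.

```python
# Minimal sketch of the induced-dipole picture described above: a PEC sphere of
# radius R in a uniform field H0*z_hat acquires a moment m = -2*pi*R^3*H0, and the
# superposition of incident and dipole fields gives a depleted normal component
# directly below the sphere.
R = 0.10          # radius of the hidden PEC sphere (m)
H0 = 1.0          # incident field amplitude (A/m), +z directed
PI = 3.141592653589793

def hz_below_sphere(dist):
    """z-component of the total field a distance `dist` below the sphere centre."""
    m = -2.0 * PI * R**3 * H0                  # induced dipole moment (anti-parallel to H0)
    hz_dipole = m / (2.0 * PI * dist**3)       # on-axis dipole field, z-component
    return H0 + hz_dipole

for dist in (0.10, 0.11, 0.15, 0.25, 0.50):    # distances from the sphere centre (m)
    hz = hz_below_sphere(dist)
    print(f"{dist:4.2f} m below centre: Hz = {hz:6.3f} A/m "
          f"({100*(H0-hz)/H0:5.1f} % depletion)")
```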
The previous analysis is quite approximate and is primarily meant to present the useful feature of the magnetic field depletion. A more general and accurate analysis is analytically intractable and as such a numerical approach is more appropriate. In the following section, numerical simulations are used to sweep over relevant parameters and examine important aspects of near-field magnetic distortions for non-destructive evaluation through conductive shields.
III. SIMULATION STUDY
The primary focus of the simulation study is to evaluate the efficacy of detecting PEC objects inside an aluminum container using a transmitting loop antenna. The parameters used in the numerical study are consistent with those of the experiments described in Section IV. The simulations employ the commercial electromagnetic software FEKO [19], which utilizes a hybrid frequency-domain finite element and boundary element solver. We employ the frequency domain method of moments (MoM) using the surface integral equation (SIE) option in FEKO. This method is well-suited to shielding problems since the surface area to volume ratio is low. The shielding container is only 2.7 mm in thickness; however, the skin depth can be just a fraction of a millimeter at 1 kHz. Using methods like finite difference time domain (FDTD) or the finite element method (FEM) would force a large number of simulation cells to resolve the skin depth inside the material. Furthermore, the air region inside the container would also need to be meshed using FDTD/FEM. This air region occupies 98.7% of the total volume of the container and would require a large number of cells using FDTD/FEM. The MoM-SIE formalism only meshes the surfaces of the container and the hidden objects while the air region is implicitly included in the Green's functions. Thus the MoM approach allows at least an order of magnitude fewer simulation cells (and less computation time) compared to FDTD/FEM.
A. BASIC SIMULATION RESULTS
It is useful to first expand on the analytical estimates from Section II using simulations. In this manner, the qualitative description of the magnetic-field depletion can be verified without needing approximations. Fig. 3 shows a simple simulation result of a PEC cylinder in the presence of a point magnetic dipole (a proxy for a loop antenna). Specifically, the cylinder has a radius and height that are both 0.2 m. The transmitting magnetic dipole is negative z-oriented with amplitude 1 A·m^2. The dipole is elevated 0.2 m above the central axis of the PEC cylinder and transmits at 50 Hz. The red arrows on the surface of the cylinder, shown in Fig. 3(a), represent the induced surface currents. The surface currents are induced to satisfy the boundary condition of zero normal magnetic field on the surface. The currents are thus forced to flow in the counterclockwise direction such that the induced field opposes the incident field on the surface of the PEC. The counterclockwise loops of surface current can be abstracted as a z-oriented magnetic dipole. The induced dipole moment is thus anti-parallel to the transmitting dipole. Thus, along the axis of the source and cylinder, the induced dipole will generate a magnetic field that opposes the incident field and will result in a magnetic field depletion. The magnetic field depletion is readily observed in Fig. 3(a) on a plane located 0.05 m below the base of the cylinder. Fig. 3(b) and Fig. 3(c) show the H-field magnitudes at Z=−0.05 m along the y-direction (X=0.6 m) and along the x-direction (Y=0.6 m), respectively. Specifically, the field magnitude is observed to decrease approximately 30 dB in the region directly below the cylinder. The relatively large change in the field magnitude motivates investigating ELF object detection with the addition of a conductive shield.
The simulations are further extended to include the presence of a metallic shield. Fig. 4 shows the simulation setup of a PEC cube inside an aluminum box shield. The aluminum box has a conductivity of 1×10^7 S/m with side length L=1.2 m and thickness d=2.7 mm. The hidden PEC has length 0.2 m with the center of the PEC base located at Z=d and X=Y=L/2. The transmitter is a negative z-oriented magnetic dipole located at Z=1.21 m and X=Y=L/2 with a dipole moment of 1 A·m^2. The locations of the transmitter, aluminum box, and PEC are fixed as shown in Fig. 4. The simulations utilize 8,770 surface elements with a 1 cm grid length of field points. As shown in Fig. 5(e), the magnetic field amplitude behind the hidden PEC cube is reduced by approximately 20 dB compared to the surrounding field. The region of magnetic field depletion approximately outlines the hidden PEC object. This suggests that hidden objects can not only be detected but that geometrical features can also be readily discerned. Fig. 5(g) and (h) show that the H-field magnitude is approximately the same as the normal field components.
B. PARAMETRIC STUDY
The previous simulations have shown the utility of using ELF signals for imaging hidden PEC objects inside a metallic shield. Next, the simulations are extended in a parametric manner to examine the effect of object shape and size, measurement plane distance, and frequency.
The same aluminum container used in previous simulations is filled with eight PEC objects as shown in Fig. 6. The objects are shaped in a manner to spell out "C-U-D-E-N-V-E-R", which provides some diversity in the object shapes and locations. Fig. 7(a) shows G along the central z-axis of the aluminum shield starting from the base (Z=0) and extending below by 2 m. Each curve corresponds to different heights of the PEC objects while keeping the X and Y dimensions fixed. At Z=0, G decreases from 22.2 dB to 21.2 dB when the height of the PEC increases from 0.2 m to 0.8 m. At Z=−0.5 m, the difference between height 0.2 m and 0.8 m is 0.7 dB and it decreases at a farther distance away from the aluminum shield. This demonstrates that the taller the hidden object, the further the magnetic depletion extends in the direction normal to the shield surface. That is, the height of the hidden object can be discerned by measuring the H-field at incrementally greater distances from the shield surface. At the same time, the transverse geometries of the hidden objects are very clearly delineated. This is shown in Fig. 7(b) and (c) where the measurement planes are at Z=−0.005 m and Z=−0.04 m respectively.
As shown, the measurements at Z=−0.005 m and Z=−0.04 m show the object shapes quite clearly. However, the dynamic range is approximately 10 dB lower when the measurement plane moves from Z=−0.005 m to Z=−0.04 m, as shown in Fig. 7(a). This is expected since the fields fall off as 1/D^3, where D is the distance from the object. Thus a greater measurement distance will make the field distortions decrease while "blurring" the object shapes to an extent. In regards to practical scenarios, 0.04 m is still a reasonable standoff distance for taking measurements with small magnetic coil receivers; this is demonstrated with experiments in Section IV. To extend the measurement distances, the experiment would need to be moved to a less cluttered environment and the incident signal to noise ratio (SNR) should also be increased. The primary takeaway from Fig. 7(b) and (c) is that hidden objects on the surface of metallic shields can in fact be detected and imaged even without complex inversion techniques.
The simulation results are then further extended to investigate the dependence of object detection on transmit frequency. Fig. 8 shows the relative impact of frequency along with standoff distance. Panels (a)-(c) show the magnitude ratio R of the incident magnetic field H_0 to the total magnetic field H (R = |H_0|/|H|) along the z-direction starting 2 m below the box up to the base at Z=−0.005 m. The ratio R is measured along the z-line that intersects the center of the box at X=Y=0.6 m. Specifically, panel (a) shows the field ratio due to an empty aluminum box. This serves as a calibration and also demonstrates the relative frequency effect with the outer shell alone. As shown, at 50 Hz the value of H is relatively large compared to the other frequencies. As discussed in Section II, this is because the shell is relatively transparent to the incident signal at 50 Hz, which results in a small induced dipole moment on the aluminum shell. As the frequency is increased to 1 kHz the aluminum shell begins to resemble a PEC box. As shown, the 1 kHz and 25 kHz cases show approximately the same field ratio since the box is effectively a PEC at these frequencies. This suggests that frequencies below ≈1 kHz are more useful to penetrate the 2.7 mm aluminum shield to an appreciable extent. Fig. 8(b) shows the field ratio with an isolated PEC. As discussed in Section II, the magnetic moment induced on a PEC sphere is independent of frequency in the long-wave limit. As such, the field ratio is the same for all frequencies as expected. Fig. 8(c) shows the result of the full problem of a hidden PEC object inside the aluminum shield. For 1 kHz and 25 kHz the result is the same as the empty aluminum shield case shown in Fig. 8(a). Again, this is because the outer shield effectively blocks the incident field and does not permit detection of the hidden object. On the other hand, the 50 Hz and 200 Hz cases both differ from the empty shield case at positions close to the base of the shield (Z ≈ −0.005 m). At distances approaching Z=2 m the empty box case (a) and the hidden object case (c) once again approach the same answer. This is for the same reason as in Fig. 7(b)-(c), where the field distortion due to the hidden objects becomes less prominent with increasing standoff distance. The impact of frequency is further illustrated in Fig. 8(d)-(f) where the gain G is measured at Z=−0.005 m for three frequencies.
As the frequency increases from 50 Hz to 1 kHz, the hidden objects become less prominent as the aluminum box becomes a more effective shield.
It is worth noting that all the simulations consider PEC hidden objects. In practice, the finite conductivity of real materials can have an additional important impact on the frequency dependence. Nevertheless, the basic idea of scanning the normal component of the field at the surface of the shield while sweeping frequency remains a valid experimental principle. Experimental results that closely correspond to the simulation parameters are discussed in Section IV.
IV. EXPERIMENTAL VALIDATION
To validate the simulation results, an experiment was conducted with an aluminum block placed inside an aluminum container. The experiment was carried out indoors on the University of Colorado Denver campus with considerable environmental clutter (metal fixtures, wiring, etc.) and large amounts of radio noise. The environment was selected to demonstrate the robustness of ELF/VLF through-conductor imaging in a realistic environment.
The cubic container has side lengths of 1.2 m and a wall thickness of 2.7 mm, consistent with the FEKO simulations. The container has 5 welded sides and a single removable face that can be bolted on. For the source, a 0.5 m radius loop antenna consisting of 10 turns of 12 gauge copper wire was fastened to the side of the box opposite the object. The loop antenna is a more realistic proxy of the magnetic dipole used in the simulations described in Section III. Since the frequencies considered in this experiment are within the audible acoustic spectrum, the transmitting loop was driven by an audio amplifier (Planet Audio AC1200.2 Anarchy 1200 Watt, 2 Channel, 2/4 Ohm Stable Class A/B, Full Range). A photograph of the transmit loop on the container (bolted face removed) is shown in Fig. 9. The rectangular aluminum block was placed on the inner face of the container and is 6.5 cm thick, 16 cm wide, and 23.5 cm long. The long edge was positioned vertically relative to the ground, with the base elevated 30 cm from the bottom of the container using a wooden stand. The center of the block was placed 60 cm from either side of the container. This placement was to ensure that the face with the largest surface area was coplanar with the face of the shield. A photograph of the hidden aluminum object inside the container is shown in Fig. 10.
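As a rough link between this hardware and the ideal dipole source used in the simulations, the sketch below computes the transmit loop's magnetic dipole moment m = N·I·A and its far-field on-axis field. The drive current is not reported in the paper, so the 5 A value is only a placeholder, and the dipole approximation is only meaningful at distances well beyond the 0.5 m loop radius.

import numpy as np

MU0 = 4e-7 * np.pi
N_TURNS, RADIUS = 10, 0.5      # transmit loop described above
I_DRIVE = 5.0                  # assumed RMS drive current (A); not given in the paper

m = N_TURNS * I_DRIVE * np.pi * RADIUS ** 2        # magnetic dipole moment (A·m^2)
for z in (1.2, 2.0):                               # on-axis distances where z >> loop radius
    b = MU0 * m / (2 * np.pi * z ** 3)             # far-field on-axis dipole field (T)
    print(f"m = {m:.1f} A·m^2, |B| at z = {z} m is about {b * 1e9:.0f} nT")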
As demonstrated in Section III, the magnetic field normal to the container's exterior can have extremely fine spatial resolution; however, the resolution fades rapidly with measurement distance from the surface of the container. To match the results of the simulations, a high resolution grid of point magnetic field measurements is required on the exterior face that is coplanar with the hidden object. As such, a 10 x 10 grid of measurement points with a 5 cm grid spacing was established using a fastened pegboard and attached dowels to position the receiver. The total area of the grid covers 70 cm x 70 cm around the hidden object. The receiver is a 10 cm square loop antenna that utilizes a sophisticated design based on specifications described in [20]. The 10 cm width of the receiving loop ensures fifty percent overlap between adjacent positions. Each measurement lasts sixty seconds. Thus, one scan of the face takes approximately 100 minutes per transmit frequency. The experiment is repeated with and without the object and thus takes 200 minutes for one experimental result. A photograph of the loop antenna along with the pegboard array is shown in Fig. 11.
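A small sketch of the measurement plan implied by this description (loop-center positions and scan-time budget) is given below; it only reproduces the point count, pitch and dwell time quoted above.

import numpy as np

pitch, n_side, dwell_s = 0.05, 10, 60              # 5 cm pitch, 10 x 10 points, 60 s each
offsets = np.arange(n_side) * pitch                # loop-center offsets along one axis (m)
grid = [(x, y) for x in offsets for y in offsets]  # 100 measurement positions
per_freq_min = len(grid) * dwell_s / 60
print(len(grid), "points,", per_freq_min, "min per frequency,",
      2 * per_freq_min, "min including the empty-box baseline")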
The loop transmitter was run at three frequencies: 50 Hz, 200 Hz, and 1 kHz. These frequencies were selected to demonstrate the frequency dependence of detection as well as to avoid overlapping with power-line harmonics. Sample received spectra for each transmit frequency (60 second integration) in the presence of the empty box are shown in Fig. 12. The 50 Hz signal (marked by a red x in the top panel) is shown to be approximately 60 dB above the noise floor, which is expected to be sufficient based on simulation results. The 200 Hz and 1 kHz signals are approximately 40 dB and 50 dB above the noise floor, respectively, and are also expected to have a sufficiently high SNR for object detection. The unlabeled spikes on the spectra correspond to power-line harmonics as well as fields from unknown electronic sources. The 20-30 dB of SNR even in the presence of significant interference suggests that the full 60 second integration times were not needed and can be reduced in the future. Fig. 13 shows measured values of the magnetic field deviations at 50 Hz, 200 Hz, and 1 kHz. All the results are relative to the baseline empty-container case. As shown, the hidden aluminum object is clearly visible as a magnetic field depletion in the 50 Hz and 200 Hz cases. At 1 kHz, the object cannot be detected because the magnetic field cannot appreciably penetrate the container. The noise-like variations at 1 kHz occur because the two-way penetration through the container results in approximately 80 dB of loss [21], which places the scattered signal below the background noise floor. The results are consistent with the simulations in Section III and provide good justification for using ELF/VLF signals as part of through-conductor detection applications.
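The receiver processing chain is not described in this excerpt; the sketch below shows one standard way to estimate the received tone level and SNR from a 60-second recording using a windowed FFT, illustrated here on synthetic data rather than the actual measurements.

import numpy as np

def tone_snr_db(x, fs, f_tone, guard_bins=5, noise_bins=200):
    """Return (tone level, noise floor, SNR), all in dB, for a real-valued recording x."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_tone)))              # bin closest to the transmit tone
    tone = 20 * np.log10(spec[k] + 1e-30)
    neighbors = np.r_[spec[max(k - noise_bins, 0):k - guard_bins],
                      spec[k + guard_bins:k + noise_bins]]  # nearby bins, tone excluded
    noise = 20 * np.log10(np.median(neighbors) + 1e-30)
    return tone, noise, tone - noise

# Demonstration on synthetic data: a 50 Hz tone in noise, 60 s at a 4 kHz sample rate.
fs, dur = 4000, 60
t = np.arange(fs * dur) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)
print(tone_snr_db(x, fs, 50.0))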
The experiments show certain important features that deviate from the simulation results and warrant further discussion. One such deviation is that the 50 Hz experiment shows a dimmer outline of the object in comparison to the 200 Hz case. This is likely due to the fact that the simulations focused on PEC objects while the experiments utilized an aluminum object. Although the penetration into the container is highest in the 50 Hz case, the aluminum object scatters less than a PEC since there is appreciable penetration into the object itself (1.2 cm skin depth at 50 Hz). Varying the material properties of the hidden object was not explored in the simulations; however, the experimental results suggest a clear frequency dependence. Specifically, the induced dipole moment on a finite-conductivity object is well known to have a strong dependence on frequency in the ELF/VLF range [18], [22]. A second deviation is that the measurements are not point samples but rather integrated measurements over the area of the 10 cm x 10 cm receiving loop. Thus, the region around the object is likely undersampled, which leads to a blurring of the constructed image.
The experimental results show that nondestructive evaluation of the size, position, and material properties of a hidden object inside a realistic container is possible using ELF/VLF magnetic fields. The investigations in this work have primarily considered a hidden aluminum object inside an aluminum container. Additionally, the object was directly adjacent to an internal face of the shield. Under these constraints, theoretical and experimental investigation has shown that the hidden object is easily detectable without complex image reconstruction methods. However, as the location of the object and the geometric complexity are varied, more sophisticated methods of image reconstruction would be required. Such methods have been heavily studied in static and quasi-static reconstruction technologies such as electrical capacitance tomography (ECT), electrical impedance tomography (EIT), and MIT [23]. Thus, the methodology employed in this work can be directly extended to more complex scenarios. Topics for ongoing work include varying object material properties, developing automated scanning for higher resolution measurements, and employing quasi-static inversion algorithms for tomographic image reconstruction.
V. CONCLUSION
ELF/VLF imaging and nondestructive evaluation of conductive objects hidden inside a metal container is investigated with theory, simulations, and experiment. Analytical approximations show that PEC objects inside a thin-shelled metal container result in a magnetic field depletion at the exterior of the container. The depletion is primarily observed for the component of the field normal to the shield surface. Using simulations, a parametric study is performed to evaluate the importance of standoff distance, frequency, and object size. It is shown that hidden objects on the interior surface of an aluminum shield can be robustly detected for frequencies below 200 Hz and standoff distances less than 5 cm. An experiment with a hidden aluminum object inside an aluminum container is conducted using an ELF/VLF loop antenna. The experiment clearly demonstrates the magnetic depletion effect and shows good agreement with the simulation results. The results of this work justify the use of ELF/VLF magnetic induction fields for non-destructive evaluation through highly conductive enclosures. | 2020-04-30T09:03:52.820Z | 2020-04-27T00:00:00.000 | {
"year": 2020,
"sha1": "485c3adf4763b1a48644a54ca334cbef7a54f5e1",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09078807.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "8bd99d476d5081465acc11d47fa3374a16267327",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
260164678 | pes2o/s2orc | v3-fos-license | Histogram Layer Time Delay Neural Networks for Passive Sonar Classification
Underwater acoustic target detection in remote marine sensing operations is challenging due to complex sound wave propagation. Despite the availability of reliable sonar systems, target recognition remains a difficult problem. Various methods have been proposed to improve target recognition. However, most struggle to disentangle the high-dimensional, non-linear patterns in the observed target recordings. In this work, a novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification. The proposed method outperforms the baseline model, demonstrating the utility of incorporating statistical contexts for passive sonar target recognition. The code for this work is publicly available.
INTRODUCTION
Underwater acoustic target recognition (UATR) technology plays a crucial role in a variety of domains, including biology [1], carrying out search and rescue operations, enhancing port security [2], and mapping the ocean floor [3]. One of the primary target detection techniques used by modern crafts, such as unmanned underwater vehicles, is passive sonar [4]. Passive sonar is an underwater acoustic technology that uses hydrophones to detect and analyze sound waves in the ocean [5]. Unlike active sonar, passive sonar resolves targets from the natural sounds of the ocean and the noises produced by ships and other underwater vehicles. Processing and analyzing passive sonar data can be challenging due to the high volume of data and environmental complexity [6]. Signal processing techniques are often used to analyze ship-generated noise such as low frequency analysis and recording (LOFAR) spectra [7]. The Detection of Envelope Modulation on Noise (DEMON) is an approach that has been successfully used for target detection and recognition in passive sonar [8,9,10]. Despite their success, these approaches use handcrafted features that can be difficult to extract without domain expertise [11].
Artificial neural networks (ANNs), such as convolutional neural networks (CNNs) and time delay neural networks (TDNNs), provide an end-to-end process for automated feature learning and follow-on tasks (e.g., detection and classification of signals) [12,13,14,15]. The TDNN has shown success in simulating long-term temporal dependencies [16] and can be modeled as a 1D CNN [13]. Thus, the TDNN can adaptively learn the sequential hierarchies of features, but does not explicitly account for the statistics of passive sonar data. These are difficult to model for feature extraction [17,18]. The statistics of the signals can describe the acoustic texture of the targets of interest [18]. Texture generally falls into two categories: statistical and structural [19,20,21,22]. Statistical context in audio analysis involves studying the amplitude information of the audio signal. One way to capture amplitude information is by using probability density functions [18]. However, traditional ANN approaches, like CNNs and TDNNs, have shown a bias towards capturing structural textures rather than statistical texture [20,21,22]. This bias limits their ability to directly model the statistical information required to capture acoustic textures accurately. To overcome this shortcoming, histogram layers can be integrated into ANNs to incorporate statistical context [22]. Methods that combine both structural and statistical textures have improved performance for other tasks such as image classification and segmentation [20,21,22]. In this work, we propose a new TDNN architecture that integrates histogram layers for improved target classification. Our proposed workflow is summarized in Figure 1. The contributions of this work are as follows:
• Novel TDNN architecture with histogram layer (HLTDNN) for passive sonar target classification
• In-depth qualitative and quantitative comparisons of TDNN and HLTDNN across a suite of time-frequency features.
Baseline TDNN Architecture
The TDNN architecture consists of several convolution layers with the ReLU activation function and max pooling. 2D convolutional features are extracted from the time-frequency input to capture local relationships within the vessel's frequency information [23].
Padding is added to the input time-frequency feature to maintain the spatial dimensions of the resulting feature maps. After each convolution operation and ReLU activation function, the features are pooled along the time axis with a desired kernel length L (e.g., a max pooling kernel of size 1 × L) to aggregate the feature information while maintaining the temporal dependencies, similar to other TDNNs [16,23]. After the fourth convolutional block, the features are flattened and then passed through a final 1D convolutional layer followed by a sigmoid activation function and a global average pooling (GAP) layer.
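A schematic PyTorch sketch of this baseline is given below. The channel widths, kernel sizes and the pooling length L are not specified in this excerpt, so the values used are placeholders chosen only to make the sketch runnable; it is not the authors' implementation.

import torch
import torch.nn as nn

class BaselineTDNN(nn.Module):
    def __init__(self, n_classes=4, channels=(16, 32, 64, 128), pool_len=2):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in channels:                       # four conv blocks
            blocks += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # keep spatial size
                       nn.ReLU(),
                       nn.MaxPool2d(kernel_size=(1, pool_len))]             # pool along time only
            in_ch = out_ch
        self.blocks = nn.Sequential(*blocks)
        # Final 1D convolution acts on the flattened (channel x frequency) axis.
        self.head = nn.LazyConv1d(n_classes, kernel_size=1)

    def forward(self, x):                 # x: (batch, 1, freq, time)
        f = self.blocks(x)                # (batch, C, freq, time')
        b, c, h, t = f.shape
        f = f.reshape(b, c * h, t)        # flatten channel and frequency dimensions
        y = torch.sigmoid(self.head(f))   # (batch, n_classes, time')
        return y.mean(dim=-1)             # global average pooling over time

# model = BaselineTDNN(); out = model(torch.randn(8, 1, 64, 128))   # -> shape (8, 4)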
Proposed HLTDNN
The baseline TDNN is focused on the "structural" (e.g., local) acoustic textures of time and frequency as well as the temporal dependencies in the data. However, the model does not directly consider the statistical aspects of the data. A histogram layer [22] can be added in parallel to the baseline TDNN model to capture statistical features that assist in improving classification performance. Given input features X ∈ R^(M×N×D), where M and N are the spatial (or time-frequency) dimensions while D is the feature dimensionality, the output tensor of the local histogram layer with B bins, Y ∈ R^(R×C×B×D) with spatial dimensions R and C after applying a histogram layer with kernel size S × T, is shown in (1):
Y_rcbd = (1/(S·T)) Σ_{s=1}^{S} Σ_{t=1}^{T} exp(−γ_bd^2 (x_{r+s, c+t, d} − μ_bd)^2),   (1)
where the bin centers (μ_bd) and bin widths (γ_bd) of the histogram layer are learnable parameters. Each input feature dimension is treated independently, resulting in BD output histogram feature maps. The histogram layer takes input features and outputs a "vote" for each value in the range [0, 1]. The histogram layer can be modeled using convolution and average pooling layers, as shown in Figure 2. Following previous work [22], the histogram layer is added after the fourth convolutional block (i.e., convolution, ReLU, and max pooling) and its features are concatenated with the TDNN features before the final output layer.
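The following is a minimal PyTorch sketch of such a local histogram layer, using the radial-basis soft-binning of [22] with learnable bin centers and widths averaged over an S × T window. It is illustrative only; the bin count, kernel size and initialization used in the actual HLTDNN are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HistogramLayer2d(nn.Module):
    def __init__(self, in_dims, n_bins, kernel=(2, 2)):
        super().__init__()
        self.mu = nn.Parameter(torch.linspace(-1, 1, n_bins).repeat(in_dims, 1))   # (D, B) bin centers
        self.gamma = nn.Parameter(torch.ones(in_dims, n_bins))                     # (D, B) bin widths
        self.kernel = kernel

    def forward(self, x):                      # x: (batch, D, M, N)
        b, d, m, n = x.shape
        x = x.unsqueeze(2)                     # (batch, D, 1, M, N)
        mu = self.mu.view(1, d, -1, 1, 1)
        gamma = self.gamma.view(1, d, -1, 1, 1)
        votes = torch.exp(-(gamma ** 2) * (x - mu) ** 2)        # soft "vote" in [0, 1]
        votes = votes.flatten(1, 2)                             # (batch, D*B, M, N)
        return F.avg_pool2d(votes, self.kernel)                 # local average = normalized count

# hist = HistogramLayer2d(in_dims=128, n_bins=16)
# y = hist(torch.randn(8, 128, 4, 32))      # -> (8, 128*16, 2, 16)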
Dataset Description
The DeepShip dataset [14] was used in this work. The database contained 609 records reflecting the sounds of four different ship types: cargo, passengership, tanker, and tug. Following [14], each signal is re-sampled to a frequency of 16 kHz and divided into segments of three seconds. Figure 3 illustrates the structure of the dataset after "binning" the signals into segments. The number of signals and segments for each class are also shown. TDNN and HLTDNN classification performances are shown in Table 1. Classification performance was assessed using five metrics: accuracy, precision, recall, F1 score, and Matthews correlation coefficient (MCC). Fisher's discriminant ratio (FDR) was used to assess the feature quality (discussed more in Section 4.2). Confusion matrices for the TDNN and HLTDNN using the best performing feature are displayed in Figures 4a and 4b respectively. For the HLTDNN, STFT achieved the best classification performance compared to other features. However, MFCC had the best performance for the TDNN across the different performance metrics. STFT performed similarly to MFCC when observing classification accuracy. Additional quantitative and qualitative analysis will use STFT to evaluate the impact of the histogram layer on the vessel classification.
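For concreteness, a short sketch of the resampling and three-second segmentation described at the start of this subsection is given below (librosa). Whether segments overlap and how partial tails are handled is not stated in this excerpt, so this version simply drops leftover samples, and the file name is hypothetical.

import numpy as np
import librosa

def segment_recording(path, sr=16000, seg_seconds=3.0):
    audio, _ = librosa.load(path, sr=sr, mono=True)     # resample to 16 kHz on load
    seg_len = int(sr * seg_seconds)
    n_segs = len(audio) // seg_len                       # leftover samples are dropped
    if n_segs == 0:
        return np.empty((0, seg_len))
    return np.stack([audio[i * seg_len:(i + 1) * seg_len] for i in range(n_segs)])

# segments = segment_recording("cargo_001.wav")   # hypothetical file name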
The TDNN model initially performed well with the Mel spectrogram, MFCC, and STFT, but its performance significantly degraded for the other three features (Table 1). The best performance was achieved using the MFCC feature as input, while the worst feature was GFCC. A possible reason for this is that each feature used a 250 ms window and a hop length of 64 ms. The short time frame may limit the frequency-domain resolution, and selecting the best frequency band greatly impacts performance [25]. However, the performance of the HLTDNN was fairly robust across the different time-frequency features. The STFT feature performed the best for this model, and the HLTDNN also improved the performance of the GFCC, CQT and VQT features significantly in comparison to the TDNN. This demonstrates that the statistical context captured by the histogram layer is useful for improving target classification.
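For reference, the sketch below computes two of the time-frequency inputs mentioned above (STFT and MFCC) with the quoted 250 ms window and 64 ms hop at 16 kHz using librosa; the exact FFT sizes, number of coefficients and dB scaling used in the paper are assumptions.

import numpy as np
import librosa

sr = 16000
win = int(0.250 * sr)      # 250 ms analysis window -> 4000 samples
hop = int(0.064 * sr)      # 64 ms hop -> 1024 samples

def stft_feature(x):
    s = librosa.stft(x, n_fft=win, hop_length=hop, win_length=win)
    return librosa.amplitude_to_db(np.abs(s), ref=np.max)

def mfcc_feature(x, n_mfcc=16):
    return librosa.feature.mfcc(y=x, sr=sr, n_mfcc=n_mfcc, n_fft=win, hop_length=hop)

# x = np.random.randn(3 * sr).astype(np.float32)   # stand-in for one 3-second segment
# print(stft_feature(x).shape, mfcc_feature(x).shape)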
Neither model identified the Cargo class as well as the other vessel types, as shown in Figure 4. In particular, the most common classification mistakes occurred when the model predicted Cargo as Tanker (i.e., false positive). Intuitively, this classification error makes sense because a Tanker is a type of cargo ship (e.g., oil tanker [26]) and the sound produced by each ship may be similar. Also, the Cargo class in the DeepShip data has been noted to have high intra-class variance [27]. As a result, the Cargo class was the most difficult to classify. Feature regularization methods (e.g., contrastive learning) can be incorporated into the objective function to mitigate intra-class variance.
Table 2: STFT Fisher's discriminant ratio (FDR) scores for each class and overall. The average score with ±1σ across the three experimental runs of random initialization is shown and the best average metric is bolded. The log of the FDR is shown due to the magnitude of the FDR score. A higher FDR score indicates better separability and compactness of the features in higher dimensional space for each class.
In addition to the classification metrics, the quality of the features was assessed using Fisher's discriminant ratio (FDR). FDR is the ratio of the inter-class separability to the intra-class compactness. Ideally, the inter-class separability should be maximized (i.e., different vessel types should be "far away" from one another, with large distances between the classes in the feature space) and the intra-class compactness should be minimized (i.e., samples from the same class should be "close", with small distances between one another in the feature space). As a result, the FDR should be maximized. From Table 1, the log of the FDR shows that the histogram model achieved the best FDR scores for all six features, further demonstrating the utility of the statistical features.
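A standard scatter-matrix version of this score is sketched below; the paper's exact per-class variant is not reproduced, only the general between/within ratio based on scatter-matrix norms, which captures the same separability-versus-compactness trade-off.

import numpy as np

def fdr_score(features, labels):
    """features: (n_samples, n_dims); labels: (n_samples,) integer class ids."""
    mu_all = features.mean(axis=0)
    s_w = np.zeros((features.shape[1], features.shape[1]))
    s_b = np.zeros_like(s_w)
    for c in np.unique(labels):
        xc = features[labels == c]
        mu_c = xc.mean(axis=0)
        s_w += (xc - mu_c).T @ (xc - mu_c)                 # within-class scatter
        d = (mu_c - mu_all)[:, None]
        s_b += len(xc) * (d @ d.T)                         # between-class scatter
    return np.linalg.norm(s_b) / np.linalg.norm(s_w)       # larger = more separable

# log_fdr = np.log(fdr_score(embeddings, y))   # embeddings and y are hypothetical arrays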
Feature Evaluation
A deeper analysis using the best performing feature (STFT) in terms of classification performance is shown in Table 2. For all four classes, the log FDR for the HLTDNN is statistically significant (no overlapping error bars) in comparison to the TDNN. The main difference between the two models was the increased feature separability of the HLTDNN model in comparison with the baseline TDNN. The TDNN had a smaller denominator (i.e., intra-class compactness) compared to the HLTDNN when computing the norm of the within-scatter matrix, indicating that the TDNN performs marginally better in terms of intra-class compactness. On the other hand, the features from the HLTDNN are more separable than those from the TDNN, as evident from the norm of the between-scatter matrix, showing the HLTDNN's superiority in terms of inter-class separability. The FDR scores further elucidate the importance of statistical texture information captured by the histogram layer. Figure 5 shows the 2D t-SNE projection of the features from the best performing models using the STFT feature. The same random initialization for t-SNE was used for both methods in order to do a fair comparison between both models. The qualitative results of t-SNE match our quantitative analysis using FDR. The features extracted by the histogram layer act as a similarity measure for the statistics of the data, assigning higher "votes" to bins where features are closer. The addition of these features to the TDNN model improved the separability of the classes, as observed in Figure 5b. Modifying the histogram layer to help improve the intra-class compactness of the HLTDNN would be of interest in future investigations.
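The t-SNE comparison described above can be reproduced in spirit with a shared random seed, as sketched below; the perplexity and other settings are guesses, since they are not given in this excerpt.

from sklearn.manifold import TSNE

def project_2d(features, seed=0):
    return TSNE(n_components=2, random_state=seed, init="pca",
                perplexity=30).fit_transform(features)

# tdnn_2d   = project_2d(tdnn_features,   seed=0)   # hypothetical feature matrices
# hltdnn_2d = project_2d(hltdnn_features, seed=0)   # same seed for a fair comparison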
CONCLUSION
In this work, a novel HLTDNN model was developed to incorporate statistical information for improved target classification in passive sonar. In comparison to the base TDNN, the HLTDNN not only improved classification performance but also led to improved feature representations for the vessel types. Future work will investigate combining features as opposed to using a single time-frequency representation as the input to the network. Each feature can also be tuned (e.g., changing the number of frequency bins) to enhance the representation of the signals. Additionally, both architectures can be improved by a) adding more depth and b) leveraging pretrained models. The training strategies could also use approaches to mitigate overfitting and improve performance, such as regularization of the histogram layer (e.g., adding constraints to the bin centers and widths) and data augmentation. | 2023-07-27T01:22:25.923Z | 2023-07-25T00:00:00.000 | {
"year": 2023,
"sha1": "3498c0993f7087eca8c07aafc47b0625459b95c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3498c0993f7087eca8c07aafc47b0625459b95c7",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
234199153 | pes2o/s2orc | v3-fos-license | The Energy Consumption of Terraces in the Barcelona Public Space: Heating the Street
Terraces, as outdoor extensions of food and beverage businesses located in the public realm, have very high potential to activate the streetscape, bring people together and improve urban experiences. Among the consequences of the current COVID-19 pandemic is the recommendation to maximize the use of outdoor environments when conducting human interactions. Therefore, outdoor eating has dramatically increased throughout the world, with terraces becoming a radical urban change in many streetscapes. The urgency of the situation, and the rapid implementation of these changes, has revealed some aspects of this phenomenon that should be considered when adapting the regulations to this new reality. However, the research on their functioning and impact is limited. Additionally, although energy consumption in the architectural and urban field is considered fundamental, research has rarely addressed small business outdoor spaces, placing the focus instead on residential heating or public lighting. This study focuses on the intersection of these two gaps by analyzing a set of terraces in Barcelona and estimating the power installed in their outdoor heating devices. The goal is to determine the potential energy consumed, contrast it with other values more commonly used when researching architectural energy consumption and point out the lack of sustainability of these approaches to providing comfort. The calculations show that the installed power in Barcelona terraces is significant and, when estimating potential consumption, it presents values higher than the average heating consumption of residential units in Spain. These results support two main conclusions: first, the relevance of addressing the means of providing comfort in outdoor urban spaces due to the high magnitude of their potential energy consumption; second, the importance of adapting those systems to outdoor conditions, understanding the needs of the occupants and the limitations of the environment in order to develop sustainable solutions that provide comfort without attempting to heat the air of the street.
Introduction. Terraces and Public Space. The Case of Barcelona
There is a type of phenomenon in the urban public space that, although often disregarded by urban planning professionals, is placed at the core of current political and social discussions. Terraces located in the public realm have the potential of highly influencing the dynamics and performance of cities and, therefore, are in need of further analysis and study.
The term "terrace" is generally used to describe architectural outdoor extensions of indoor spaces. For instance, private terraces in residential buildings extend the living area outdoors. In this article, we will focus on a particular case, using the word "terrace" to define the outdoor extension of food and beverage businesses, generally restaurants and cafes ( Figure 1). Specifically, we focus on those that make use of the public space to provide this extension [1]. Our society is currently facing an extraordinary challenge in the form of the COVID-19 pandemic first identified in December 2019 and declared a pandemic in March 2020 by The World Health Organization [2]. It has abruptly disrupted the way of living across the entire world. The events that we are experiencing demand a revision of practices and priorities in our everyday life. A wide range of fields and disciplines will have to adapt and re-think the status quo, urban planning being one of the most significant [3].
Terraces can become key elements in this reflection for two main reasons. First, current quarantine and stay at home situations have revealed the importance of access to outdoor space for the well-being of humans. Sun and open air have been unmasked not only as privileges for the few but as needs for general health and wellness. Secondly, the new standards that will need to be put in place moving forward will demand inventive solutions. Larger spaces will be needed to accommodate the same amount of people and fulfill minimum distance requirements. Terraces could be important to be able to provide outdoor experiences and social interaction with safer distances. Around the world, municipalities are already tackling the challenge, rapidly updating and adapting regulations to the new situation.
There are multiple conditions that allow the existence of this type of terrace. The most basic ones are the presence of food and beverage businesses, the social acceptance to eat in public and the existence of the concept of public space [4].
Traditionally, terraces offered the opportunity of taking advantage of better outdoor conditions, related to thermal comfort, a sunny or airy environment, easier social interaction or better views [5,6]. However, these characteristics are often lost in the current situation. When the urban environment is not considered as positive as desired, elements tend to be incorporated into the terrace in order to protect the client from the outdoor public surroundings (Figure 2). These elements can have different aims, such as protection from the sun (sunshade/parasol), from the car noise and emissions or wind (lateral divisions) or from the cold (heating). In this article, the heating elements incorporated in terraces are the focus of the investigation.
Terraces can become key elements in this reflection for two main reasons. First, current quarantine and stay at home situations have revealed the importance of access to outdoor space for the well-being of humans. Sun and open air have been unmasked not only as privileges for the few but as needs for general health and wellness. Secondly, the new standards that will need to be put in place moving forward will demand inventive solutions. Larger spaces will be needed to accommodate the same amount of people and fulfill minimum distance requirements. Terraces could be important to be able to provide outdoor experiences and social interaction with safer distances. Around the world, municipalities are already tackling the challenge, rapidly updating and adapting regulations to the new situation.
There are multiple conditions that allow the existence of this type of terrace. The most basic ones are the presence of food and beverage businesses, the social acceptance to eat in public and the existence of the concept of public space [4].
Traditionally, terraces offered the opportunity of taking advantage of better outdoor conditions, related to thermal comfort, a sunny or airy environment, easier social interaction or better views [5,6]. However, these characteristics are often lost in the current situation. When the urban environment is not considered as positive as desired, elements tend to be incorporated into the terrace in order to protect the client from the outdoor public surroundings ( Figure 2). These elements can have different aims, such as protection from the sun (sunshade/parasol), from the car noise and emissions or wind (lateral divisions) or from the cold (heating). In this article, the heating elements incorporated in terraces are the focus of the investigation.
The high level of energy consumption for which human activity is currently responsible is a pressing issue at multiple levels: the scarcity of fossil energy sources, the emissions produced by current energy generation systems, radioactive pollution from nuclear sources, the rise of global temperatures and climate change, the unmet basic energy needs of vulnerable populations and the limited resiliency in emergency situations.
Cities, suburbs and other urban centers have been identified as a major focus of energy consumption, and great efforts have been devoted to reducing its use. Research, studies and policies [7,8] have attempted to address the issue through a wide range of actions, from reducing private transportation to incentivizing the implementation of renewable energy sources [9,10].
In addition, recent economic crises have magnified serious existing issues related to energy poverty. These events have raised sustainability awareness [11] and encouraged a drive for energy saving and an interest in energy efficiency rehabilitation for buildings [12,13]. Studies have shown how the largest energy consumption share within a household is that caused by heating and cooling systems [14,15].
Consequently, significant efforts, both in academia and professional practice, have been focused on the development of strategies that tackle the building environment [16][17][18]. Through multiple studies, heating and cooling have been identified as an important fraction of energy consumption, especially in residential buildings, and design strategies and policies have been focused on addressing this issue [19,20]. Open public space approaches have been generally focused on the management of public lighting or the limiting of certain transportation means [21]. These measures have been established by governmental agencies that have the tools to control these elements of the public realm.
However, other important energy consuming activities are often overlooked. The energy used to light, heat or cool outdoor terraces is rarely accounted for in academic energy studies or governmental policies. This situation might be the consequence of three main factors: first, the difficulty of acquiring energy consumption data from all the disaggregated businesses of a city in comparison with municipal infrastructure; second, the challenges of separating the indoor and outdoor energy consumption of food and beverage businesses; finally, the variability regarding both the presence and heating use of terraces, which are constantly changing due to business closures and openings, opening hours and occupancy, which creates difficulties in developing a comprehensive database. Based on the increasing presence of terraces, a thorough analysis of their energy performance could be key for urban sustainability.
In Barcelona, the number of terraces located in the public space has recently increased very rapidly, under the influence of multiple factors: mild climate, tourism growth, the implementation of the anti-tobacco law [22], widening of sidewalks, prioritization of the pedestrian realm in the public space [23,24], rise of rental prices and of the number of food-related businesses and, more recently, COVID-19 restrictions. This increase has created a climate filled with discussions and even conflicts regarding this issue.
This situation has pushed municipal authorities to commission studies [25] that could support the drafting and proposal of new regulations that might address the abovementioned challenges [26][27][28]. Nevertheless, these studies or regulations fail to address the energy consumption that these elements can potentially cause.
The presence of switched-on heating systems in terraces during the daytime in a city with a Mediterranean climate such as Barcelona, where the average annual temperature is around 18.4 °C (average temperature of the years 2014-18) and with temperatures between 12.4 and 6.1 °C for the coolest month, February, is questionable (Table 1).
Analysis of the Terraces in Barcelona
Although terraces located in public space seem central to the development of the city, literature addressing this subject is scarce. Terraces are mentioned in certain urbanism publications about public space, but they appear only briefly and are often not addressed directly. It seems that public space professionals, probably due to the terraces' private commercial character, have overlooked this phenomenon. However, terraces have a high impact on morphological and social attributes of the public space. On one hand, they occupy a large amount of pedestrian areas and modify their routes. On the other hand, they offer a meeting point for citizens, contributing a secure and convivial feeling to the neighborhood.
Selection of the Zones of Analysis
In order to proceed with a further analysis of several terrace areas in Barcelona, the first step was to select the most significant zones. This choice was based on a cartographic database developed by the Centre de Política del Sol i Valoracions (CPSV) and published in "Estudi de caracterització i avaluació de Terrasses en espai public" [25]. From this database, using Geographic Information System (GIS) software, 6 zones with the greatest terrace concentrations and diverse urban morphology typologies were detected (Figure 3). The objective was to be able to collect data from areas of the city with a significant presence of this phenomenon and with common or diverse characteristics that present a contrasting panorama.
The selected zones are described in the following subsections. The terraces studied have different occupancy patterns between office hours and after hours and/or weekends. Two of these areas are located in the urban fabric of the Eixample, but with a substantial difference. Enrique Granados St. (1) is located in an office building area, and during breakfast and midday it is busy with office workers and some tourists, whereas Gaudí Avenue (5), due to being close to the Sagrada Familia, has a much higher flow of tourists. The two zones in the city center (3 and 4) are occupied by tourists but have fairly different urban attributes. Plaça Reial (3) has terraces of a larger size and with more equipment than those in Mercat del Born (4), while both have an intense nighttime use. The former ones are restaurants, and their customer stays are longer, while the latter ones in Born (4), an after-hours drinks spot, have shorter customer stays. Rambla del Poble Nou (2) is located around Villa Olimpica, close to the sea, and it is the area that displays the most important difference in occupation between daytime, with a large proportion of locals, and nighttime, with a large proportion of tourists. Blai Street (6) is a narrow pedestrian street close to the entertainment business area of Paral·lel, and has its largest flow of people during the nighttime, with both locals and tourists.
(a) (b) The terraces studied have different occupancy patterns between office hours and after hours and/or weekends. Two of these areas are located in the urban fabric of the Eixample, but with a substantial difference. Enrique Granados St. (1) is located in an office building area, and during breakfast and midday it is busy with office workers and some tourists, whereas Gaudí Avenue (5), due to being close to Sagrada Familia, has a much higher flow of tourists. The two zones in the city center (3 and 4) are occupied by tourists but have fairly different urban attributes. Plaça Reial (3) has terraces of a larger size and with more equipment than those in Mercat del Born (4), while both have an intense nighttime use. The former ones are restaurants, and their customer stays are longer, while the latter ones in Born (4), an after-hour drinks spot, have shorter customer stays. Rambla del Poble Nou (2) is located around Villa Olimpica, close to the sea, and it is the area that displays the most important difference in occupation between daytime, with a large proportion of locals, and nighttime, with a large proportion of tourists. Blai Street (6) is a narrow pedestrian street close to the entertainment business area Paral·lel, and has its largest flow of people during the nighttime, with both locals and tourists.
a) Enrique Granados Street
Located in the district of central Eixample, Enrique Granados is an example of a vertical axis in the Barcelona grid, with a NW-SE orientation. It connects la Diagonal with Gran Via, two major metropolitan Barcelona avenues. The street has a constant width of 20 m, like all regular streets of l'Eixample grid, but with different section configurations. Although some parts of the street are pedestrianized, the most common section holds 50% of the surface dedicated to pedestrians, organized in two wide lateral walkways, one direction car lane, a two-way bike lane and a central parking area, which alternates bicycles,
b) Rambla del Poble Nou
This boulevard is also located in the district of Eixample. It has a width of 30 m from Avinguda Diagonal to Carrer de Pere IV, and a width of 20 m for the most part of its length, between Carrer de Pere IV and Passeig de Clavell, with 70% of the total surface used for pedestrian movement, consisting of lateral walkways smaller than Enrique Granados and a central Rambla that acts as the pedestrian core of the street. It has two car lanes, marked as bicycle priority lanes, split by the Rambla with opposite directions. The width of the lateral sidewalks slightly changes along the street, but the structural elements of the section remain the same (Figures 5 and 6).
(c) Plaça Reial and les Rambles
Plaça Reial is an important square in Barcelona located in the core of the medieval city, tangential to Les Rambles. Measuring 113 × 75 m, Plaça Reial presents the first regularized façades of the medieval city through arcades on the ground floor. It has several connections with lateral streets, the most important being Carrer Ferran, the first straight street of the city, and les Rambles. The zone studied in this case also includes several terraces located in Les Rambles. This avenue of Barcelona connects Plaça Catalunya with the sea. It was also formerly the limit between the walled city and the Raval neighborhood, the hors-murailles western extension, now the avenue that divides both neighborhoods and the most touristic axis in Barcelona (Figure 7).
e) Gaudi Avenue
Gaudí Avenue is a 25 m wide street splitting the orthogonal grid of Eixample in a precise north-south diagonal that connects two landmarks, the Sagrada Familia and Hospital de Sant Pau. Originally, it was used as an important driveway, but in 1985, the avenue was remodeled, becoming more pedestrian friendly. Since then, the street has hosted a large number of restaurant businesses frequented by visitors to the Sagrada Familia. Its current layout has two car lanes separated by a 10 m wide pedestrian way and two other 4 m wide pedestrian ways adjacent to the building façades, resulting in 75% of the section dedicated to pedestrians (Figure 9). Paral·lel Avenue is a 40 m wide street originated after the approval of Pla Cerdà in 1859 that connects Plaça Espanya to the old royal shipyards by the port. It also delimits the boroughs of Poble Sec and Raval. Its trace, following the same east-west direction as the geographic parallel, was established over the demolished medieval city walls. The avenue constituted a leisure axis, especially nightlife businesses, from the last years of the 19th century until the 1970s. Nowadays, it still holds a good number of entertainment businesses, such as theatres, cinemas, concert halls. Its layout gives a total of 40% of its width to pedestrian areas, arranged in two lateral walkways, two three-lane driveways and a central two-way bicycle lane.
Blai Street is a 450 m long, 10 m wide street in the neighborhood of Poble Sec that has recently become very popular. Once unnoticed, this street, aided by its pedestrianization and a rebranding strategy, has become the center of the neighborhood life, filled with bars and terraces (Figures 10 and 11).
Data Collection of the Characteristic Elements of the Terraces
The second step of the process was the creation of a database that assembled the characteristics of the studied terraces.
Each spreadsheet includes 21 attributes organized in three sections: premises data, terrace elements and urban environment. It also contains graphical information: plan and section, an image of the terrace and an environment conceptual diagram describing the relationship with the surroundings (Figure 12).
The first section, premises data, addresses the eight following fields: name and address, date and hour of observation, type of business, price of coffee, price of beer (33 cl), opening schedule and approximate tourist/local client percentage. The price of coffee and beer are used as proxies of the business economic level, as they tend to be sold at all types of terraces, be it restaurants, bars or cafes. Every zone is codified with an ID, from 1 to 6, and every terrace has a number of two digits preceded by the ID of its zone (total of 3 digits) (Figure 13).
The second section, terrace elements, describes the equipment of the terrace. This section is divided into four subsections: seats, tables, auxiliary elements and heating devices (Figure 14). The seats subsection has three fields: number of seats, number of people occupying these seats and the type of chair. The tables subsection includes two fields: number of tables and use or not of tablecloth (the presence of the meal). The auxiliary elements subsection describes the terrace through its auxiliary elements, in addition to tables and chairs, such as umbrellas, planters, awning, platform, buffet, heating and artificial lighting. With the aim of synthesizing the diversity of terraces regarding equipment, eight modules of terrace are defined, based on the previous work by Pia Giannoni [29]. The description of these modules ranges from a terrace with only tables and seats [M1] to a terrace that becomes a physical extension of its own indoor business [M8]. The last subsection, the heating subsection, is the one that acts as the basis for the analysis of this research. This part seeks to quantify the heating devices installed for every terrace. It contains 3 fields: gas, electric and interior, which describe the type and number of heating devices.
Finally, the urban environment section focuses on three fields: the terrace-premises connection, the terrace-premises proportion and the terrace-street relation (Figure 15). The terrace-premises connection has four possible values: "adhered" when the terrace is in contact with the business; "close" when it is located right next to it, without space for pedestrians to walk in between; "separated" when there is a gap between the terrace and the premises enough for pedestrians to walk by; and "far" when the terrace is separated from the premises by, at least, a car lane.
The terrace-street proportion is also structured in four options: "appendage", "extension", "equivalent" and "dispenser". "appendage" is when the terrace surface is significantly smaller than the indoor part of the food and beverage business. "extension" when the terrace, although still smaller than the business, represents a significant portion of the entire establishment. "equivalent" represents those terraces in which their surface is approximately the same as the surface of the business. Finally, "dispenser" represents those cases when the surface of the terrace is significantly larger than the indoor part of the business.
Finally, the last subsection, the terrace-street relation, considers two fields: the sidewalk width where the terrace is located, and the ratio between sidewalk and total street width (considering sidewalks on both sides of the street). In addition to these variables, this section takes into account the environment concept diagram. This diagram aims to present the influence of the terrace over its immediate environment and the reverse. seeks to quantify the heating devices installed for every terrace. It contains 3 fields: gas, electric and interior, which describe the type and number of heating devices. Finally, the urban environment section focuses on three fields: the terrace-premises connection, the terrace-premises proportion and the terrace-street relation ( Figure 15). The terrace-premises connection has four possible values: "adhered" when the terrace is in contact with the business; "close" when it is located right next to it, without space for pedestrians to walk in between; "separated" when there is a gap between the terrace and the premises enough for pedestrians to walk by; and "far" when the terrace is separated from the premises by, at least, a car lane.
The terrace-street proportion is also structured in four options: "appendage", "extension", "equivalent" and "dispenser". "appendage" is when the terrace surface is significantly smaller than the indoor part of the food and beverage business. "extension" when the terrace, although still smaller than the business, represents a significant portion of the entire establishment. "equivalent" represents those terraces in which their surface is approximately the same as the surface of the business. Finally, "dispenser" represents those cases when the surface of the terrace is significantly larger than the indoor part of the business.
Finally, the last subsection, the terrace-street relation, considers two fields: the sidewalk width where the terrace is located, and the ratio between sidewalk and total street width (considering sidewalks on both sides of the street). In addition to these variables, this section takes into account the environment concept diagram. This diagram aims to present the influence of the terrace over its immediate environment and the reverse. The field work consisted in collecting the data using the previously described spreadsheet in the six mentioned areas and processing it with Geographic Information System (GIS), using ArcMap 10.4 (Esri) as the main software. The data collection was carried out in the winter of 2017, throughout February and March, a period which will be the reference for the below comparisons with other representative values. The sample collected for the present investigation comprises a total of 268 terraces, which are divided as follows: Finally, the urban environment section focuses on three fields: the terrace-premises connection, the terrace-premises proportion and the terrace-street relation (Figure 15). The terrace-premises connection has four possible values: "adhered" when the terrace is in contact with the business; "close" when it is located right next to it, without space for pedestrians to walk in between; "separated" when there is a gap between the terrace and the premises enough for pedestrians to walk by; and "far" when the terrace is separated from the premises by, at least, a car lane.
The terrace-street proportion is also structured in four options: "appendage", "extension", "equivalent" and "dispenser". "appendage" is when the terrace surface is significantly smaller than the indoor part of the food and beverage business. "extension" when the terrace, although still smaller than the business, represents a significant portion of the entire establishment. "equivalent" represents those terraces in which their surface is approximately the same as the surface of the business. Finally, "dispenser" represents those cases when the surface of the terrace is significantly larger than the indoor part of the business.
Finally, the last subsection, the terrace-street relation, considers two fields: the sidewalk width where the terrace is located, and the ratio between sidewalk and total street width (considering sidewalks on both sides of the street). In addition to these variables, this section takes into account the environment concept diagram. This diagram aims to present the influence of the terrace over its immediate environment and the reverse. The field work consisted in collecting the data using the previously described spreadsheet in the six mentioned areas and processing it with Geographic Information System (GIS), using ArcMap 10.4 (Esri) as the main software. The data collection was carried out in the winter of 2017, throughout February and March, a period which will be the reference for the below comparisons with other representative values. The sample collected for the present investigation comprises a total of 268 terraces, which are divided as follows: The field work consisted in collecting the data using the previously described spreadsheet in the six mentioned areas and processing it with Geographic Information System (GIS), using ArcMap 10.4 (Esri) as the main software. The data collection was carried out in the winter of 2017, throughout February and March, a period which will be the reference for the below comparisons with other representative values. The sample collected for the present investigation comprises a total of 268 terraces, which are divided as follows: •
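Before moving to the results, the sketch below organizes the fields described above as a single terrace record. It is only an illustration of the survey structure: the field names, types and dictionary layout are our own choices, not the authors' actual spreadsheet or GIS attribute table.

```python
# Hypothetical record layout for one surveyed terrace, following the three
# sections described in the text (premises data, terrace elements, urban
# environment). Field names and values are illustrative only.
terrace_record = {
    "id": "312",                      # zone ID (1-6) + two-digit terrace number
    "premises_data": {
        "name_address": "Example bar, Carrer Exemple 1",
        "observation_datetime": "2017-02-15 13:30",
        "business_type": "bar",
        "coffee_price_eur": 1.40,      # proxy of business economic level
        "beer_33cl_price_eur": 2.50,   # proxy of business economic level
        "opening_schedule": "08:00-24:00",
        "tourist_share": 0.30,         # approximate tourist/local client split
    },
    "terrace_elements": {
        "seats": {"count": 24, "occupied": 10, "chair_type": "metal"},
        "tables": {"count": 6, "tablecloth": False},
        "auxiliary": ["umbrella", "planter", "artificial_lighting"],
        "heating": {"gas": 2, "electric": 0, "interior": 0},  # devices per type
        "module": "M3",                # equipment module, M1-M8
    },
    "urban_environment": {
        "terrace_premises_connection": "close",      # adhered/close/separated/far
        "terrace_premises_proportion": "extension",  # appendage/extension/equivalent/dispenser
        "sidewalk_width_m": 4.5,
        "sidewalk_to_street_ratio": 0.45,
    },
}
```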
Results and Discussion
From a total of 268 studied terraces, 120 (45%) have heating devices. Of these 120 terraces, 87 have gas cylinder heaters, 22 have electric heaters and 11 have both.
When calculating the total installed power that these terraces represent (3100 kW), gas cylinder heaters are responsible for 97% (3000 kW), while only the remaining 3% (87 kW) is due to the 145 electric heaters. Due to the low percentage of terraces with electric heating devices, the energy analysis presented in the final discussion only considered gas heating values.
These terraces have a total of 4018 seats and occupy a total area of approximately 2878 m². Using these figures, the average installed power per square meter can be approximated, resulting in 997 W/m² (Table 2). In order to gain further insight into the use of heating devices in terraces, the data retrieved were analyzed regarding two main factors: the installed power per square meter by zone (Table 3) and the differences in heating type in terraces, by zone and by size. The following chart (Figure 16) analyzes the dimension of the terraces, estimated by the number of seats and divided by their heating type: gas (red dots), electric (yellow dots), none (blue dots).

When focusing on the percentage of terraces without heating devices, Avinguda Gaudí and Paral·lel/Blai have a value above 70%, while zones such as Enrique Granados, Poble Nou and Mercat del Born have approximately 50% and Plaça Reial under 20%. In all the zones, the percentage of terraces with heating is lower than the percentage of terraces without it, except for the case of Plaça Reial.

It can be observed that Plaça Reial is the area with the greatest percentage of terraces with heating and also the one with the highest total installed power of all the evaluated zones. However, as seen in the previous table, the resulting values of installed power per square meter are the lowest of all the areas. Considering that the terraces in this location are also the largest ones, it can be concluded that the dimension of the terraces greatly influences the installed power per square meter value.

If the same data are re-arranged in descending order by dimension, given in number of seats, other observations can be made (Figure 17). The highest number of terraces without any type of heating device is represented by the terraces with less than 24 seats. Above this limit, a significant number of terraces have heating devices. Furthermore, only two of the terraces with more than 48 seats do not have heating devices. Consequently, three main groups of terraces were identified regarding size that respond to heating criteria: from 0 to 24 seats, from 24 to 48 seats and above 48 seats.
The following table (Table 4) indicates the value of installed power per square meter calculated for each of the three main groups (size in square meters calculated with the aforementioned 0.77 m²/seat value).

Table 4. Installed power per square meter in terraces by size group.

Seats                               Installed power per m² (W/m²)
Less than 24 seats                  1535
24-48 seats (extremes included)     1058
Greater than 48 seats               918

It can be observed that around 90% of large terraces (>48 seats) have heating devices. Municipal regulations set a heating load per square meter limit of 0.7 kW/m² [26]. The installed power per square meter of large terraces (>48 seats) is close to this regulatory limit. On the other hand, in the case of small terraces (<24 seats), which constitute only 20% of this group, the installed power per square meter tends to surpass the municipal regulations by approximately twice the limiting value (1.5 kW/m²).
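To make the calculation behind Table 4 explicit, the sketch below groups a handful of hypothetical terraces by seat count and derives installed power per square meter, using the 0.77 m²/seat conversion stated above and the nominal heater powers of 12 kW (gas) and 0.6 kW (electric) given in the next subsection. The example terraces and the grouping code are ours; only the conversion factor and nominal powers come from the text.

```python
# Hypothetical illustration of the Table 4 calculation: installed heating power
# per square meter, with terrace area estimated from seat count.
GAS_KW, ELECTRIC_KW = 12.0, 0.6   # nominal powers per device (catalog values)
M2_PER_SEAT = 0.77                # area conversion used in the study

# Example terraces as (seats, gas devices, electric devices) -- invented data.
terraces = [(18, 2, 0), (22, 1, 1), (30, 3, 0), (45, 4, 0), (60, 5, 2), (72, 6, 0)]

def group(seats):
    if seats < 24:
        return "<24 seats"
    return "24-48 seats" if seats <= 48 else ">48 seats"

totals = {}  # group name -> [total installed kW, total area m2]
for seats, n_gas, n_elec in terraces:
    power_kw = n_gas * GAS_KW + n_elec * ELECTRIC_KW
    area_m2 = seats * M2_PER_SEAT
    g = totals.setdefault(group(seats), [0.0, 0.0])
    g[0] += power_kw
    g[1] += area_m2

for name, (kw, m2) in totals.items():
    print(f"{name}: {1000 * kw / m2:.0f} W/m2")
```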
Installed Power and Reference Values
Heating devices could be classified into two main groups: gas cylinder heaters and electric ones. In order to analyze the installed power of every terrace, nominal power values for these gas and electric heaters were assigned. An average power of 12 kW was assigned to the gas cylinder heaters and an average power of 0.6 kW to the electric ones. These values represent the maximum power and were obtained from catalog specifications of most of the available heaters installed in these zones [30][31][32].
In order to obtain a reference baseline to assist in comparing the results obtained regarding the power installed in the terraces, five power values are considered and estimated as follows.
(a) The power of solar radiation received on a surface can be used to set up a first baseline value. The value of 200 W/m² represents the average solar radiation received on a horizontal surface before any transformation process. It should be noted that during sun hours this value can reach around 800 W/m². It represents 1/5 of the installed power (1000 W/m²) or around 1/3.5 of that recommended by municipal regulations (700 W/m²). This means that terraces are equipped with heating devices that provide an average of five times the solar heating power, similar to being under peak-hour sun radiation without any protection.
(b) The heating load of a mean terrace type, enclosed in a single glazed envelope (Figure 18). The volume of the mean terrace type was calculated based on the average area of terraces with gas heating installed, and a height of 3 m, obtaining a value of 88 m³. The resulting envelope surface considers all the surfaces in contact with the exterior, except the ground, and resulted in 94.4 m². The transmittance value estimated for a single glazed curtain wall was 5 W/(m² K).
The estimated heating load considers transmission and ventilation losses. The heating load was estimated by the Heating Degree Day (HDD) (20 °C) method, and the value for Barcelona was used, obtained from the IDAE database [33]. The air renovation load was estimated considering one Air Change per Hour (ACH).
The estimated heating load resulted in 130 W/m 2 , 65% of the solar power.
The installed power at the terraces is around eight times the power necessary to achieve comfort conditions in a glazed enclosure.
(c) The heating load of a mean terrace type, enclosed in an insulated envelope. The heating load estimated in this case is similar to the previous one but considers a scenario with an insulated envelope. The same mean terrace volume and the same envelope surface were taken, but with a transmittance of 0.7 W/(m² K). This value is recommended by the Codigo Técnico de la Edificación (CTE), the Spanish regulation for residential buildings [34]. The air renovation load was also estimated considering one Air Change per Hour (ACH). In this case, the result was 24.8 W/m².
With a more efficient envelope, characterized by a transmittance (U-value) of 0.7 W/(m² K), the estimated heating load is only 25 W/m². This would reduce the necessary power to 12.5% of the solar power and to only a small percentage (2.5%) of the installed power in terraces.
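The sketch below reproduces the order of magnitude of these two heating-load estimates (single glazed vs. insulated envelope) from the quantities given in the text: 88 m³ of volume, 94.4 m² of envelope, one air change per hour, and transmittances of 5 and 0.7 W/(m² K). The HDD value for Barcelona and the number of heating hours used to turn annual demand into an average power are not stated in the text, so the figures used here are assumptions for illustration only, not the authors' exact calculation.

```python
# Rough heating-load sketch for the "mean terrace" enclosure described above.
# Assumed (not from the text): HDD base 20 C for Barcelona and the number of
# hours over which the annual demand is averaged.
VOLUME_M3 = 88.0          # mean terrace volume (from the text)
ENVELOPE_M2 = 94.4        # envelope surface in contact with the exterior
FLOOR_M2 = VOLUME_M3 / 3  # 3 m height assumed in the text
ACH = 1.0                 # air changes per hour
AIR_HEAT_WH_M3K = 0.34    # volumetric heat capacity of air, Wh/(m3 K)
HDD20 = 1100.0            # assumed HDD (20 C) for Barcelona
HEATING_HOURS = 3500.0    # assumed hours over which demand is averaged

def heating_load_w_per_m2(u_value):
    """Average heating power per floor m2 from transmission + ventilation losses."""
    loss_w_per_k = u_value * ENVELOPE_M2 + ACH * VOLUME_M3 * AIR_HEAT_WH_M3K
    annual_kwh = loss_w_per_k * HDD20 * 24 / 1000   # HDD method
    return annual_kwh * 1000 / HEATING_HOURS / FLOOR_M2

print(f"single glazed (U=5.0): ~{heating_load_w_per_m2(5.0):.0f} W/m2")
print(f"insulated    (U=0.7): ~{heating_load_w_per_m2(0.7):.0f} W/m2")
```

With the assumed values, the sketch lands close to the 130 W/m² and 24.8 W/m² reported above, which illustrates how the two estimates relate to the envelope transmittance.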
This leads us to search for data from living spaces. The aim was to compare the power installed in terraces per square meter with the power needed to heat a square meter of a home. For this reason, two additional values were used for comparison: the first came from statistical data covering a large number of residential buildings in Spain, and the second was obtained through simulations with DesignBuilder.
(d) The average heating energy consumption in residential buildings in Spain. The data were extracted from the Conama Foundation's report [15] on heating power consumption in residential buildings in Spain, and have a value of 49.3 kWh/m² per annum, which constitutes an average of 5.62 W/m². This figure refers to an annual value, including winter and summer months without distinction, which makes it difficult to isolate the focus on the month of March in this study. In any case, it is an interesting value to include in the comparison because it offers an order of magnitude of heating power consumption.
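For reference, the conversion is direct: 49.3 kWh/m² per year spread over the 8760 hours of a year corresponds to 49,300 Wh/m² divided by 8760 h, which is approximately 5.6 W/m² of continuous average power.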
(e) The heating load of a simulated standard residential unit. Simulations with DesignBuilder were run on typical residential models, similar to the average considered in the aforementioned report [15]. The climate file used was that of Barcelona, with a global radiation value of 200 W/m². The values considered regarding the thermal envelope were those recommended by the Spanish regulations [34]. Several comparisons between façades, adiabatic enveloping surfaces and/or those adjacent to the ground were made with respect to their energy exchange. In order to account for unfavorable scenarios, the dwelling models used were those with four or five out of the six enveloping surfaces in contact with the exterior, and only one or two adiabatic and/or adjacent to the ground. The results obtained ranged between 5 and 5.4 W/m².
Therefore, the comparison of the average installed power per square meter in terraces in Barcelona with the aforementioned values shows that it is approximately 180 times both the power needed to heat a square meter of a residential building and the average heating power consumption in residential buildings in Spain.
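As a quick order-of-magnitude check: the 997 W/m² of installed power divided by the roughly 5.4-5.6 W/m² obtained from the residential figures gives a factor of about 180.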
In order to show these results in a graphical way, two charts are needed. The first one represents a comparison of the reference values mentioned in this section, in W/m². The second one aims to properly display the large difference between the reference values, the municipal regulation and the current installed power in terraces. The installed power per square meter for heating in terraces has a high value, often over the value recommended by the municipal regulations (Figure 19b). The regulations of Barcelona limit the heating load to 0.7 kW/m². As we have seen, while large terraces often show values per square meter under the regulatory limit, small ones can double the municipal maximum. The fact that regulations set a limit to the installed power per square meter can distort the overall consumption results, as terraces with less installed power in absolute terms might be surpassing the municipal regulations, while terraces with more installed power might not reach the limits.
From the data collected in this survey as well as the samples chosen, it can be stated that the installed power in a terrace is related to the size and number of seats. Urban morphology, microclimatic zone and situation in the city also present certain trends. Terraces in squares, near the sea or in a touristic zone present characteristic features, but a more extended monitoring should be performed to support these points.
If all the available heating devices were turned on, the power consumption could be almost forty times the power needed to keep the interior conditions of an average terrace within regulations (CTE) [34] and 180 times the power needed to maintain the interior conditions of a square meter of an average home. Therefore, if the number of terraces continues to increase at the current rate, the impact on power consumption could acquire great significance.
Estimated Energy Consumption
The energy consumed was estimated using different scenarios based on the following variables: opening hours, percentage of installed power used and outdoor air temperature.
Opening hours were set from 8:00 h to 24:00 h, based on the regulations of the "Ordenanza de Terrazas de Barcelona" [26]. During weekends and exceptional occasions, terraces can be open two additional hours at night. We did not take into consideration these singular occasions for the calculations, rendering 5840 annual open hours of a total of 8760, with 496 h in March.
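These figures follow directly from the 16-hour daily schedule: 16 h/day multiplied by 365 days gives 5840 h per year, and 16 h/day multiplied by 31 days gives 496 h in March.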
The percentage of switched-on heating devices was set either at 100% or 60%, based on direct observation and surveys to restaurant owners.
Three values (14, 16 and 18 °C) were set based on existing literature on outdoor comfort [35][36][37]. The number of hours below the set temperatures was calculated during the terrace opening hours (Figure 20). To define the temperatures in Barcelona, climatic data from the simulation software DesignBuilder were used, also compared with the local meteorological station Can Bruixa (Table 5). The study was conducted on 268 terraces, with 36.5% having heating devices. The total number of terraces of the municipal census in June 2020 was 5650, not accounting for those exceptionally authorized due to COVID-19 [38].
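A minimal sketch of this scenario-based estimate is given below. It assumes an hourly outdoor temperature series for Barcelona (here synthetic, purely for illustration) and combines the opening hours (8:00-24:00), a switched-on fraction of 100% or 60% of the installed power, and a set-point temperature below which heaters are assumed to run. The exact climate file used by the authors is not reproduced here, so the printed figures only illustrate the method, not the study's results.

```python
import math

# Synthetic hourly outdoor temperatures for one year (illustration only; the
# study used DesignBuilder climate data checked against the Can Bruixa station).
temps = [14 + 8 * math.sin(2 * math.pi * (d - 105) / 365)   # seasonal swing
         + 4 * math.sin(2 * math.pi * (h - 9) / 24)          # daily swing
         for d in range(365) for h in range(24)]

OPEN_HOURS = range(8, 24)          # terrace opening schedule, 8:00-24:00
INSTALLED_KW = 3100.0              # total installed power reported in the study

def estimated_energy_mwh(set_point_c, on_fraction):
    """Energy if heaters run at on_fraction of installed power whenever the
    terraces are open and the outdoor temperature is below set_point_c."""
    hours_on = sum(1 for i, t in enumerate(temps)
                   if (i % 24) in OPEN_HOURS and t < set_point_c)
    return INSTALLED_KW * on_fraction * hours_on / 1000.0

for set_point in (14, 16, 18):
    for frac in (1.0, 0.6):
        print(f"set point {set_point} C, {frac:.0%} on: "
              f"{estimated_energy_mwh(set_point, frac):.0f} MWh/year")
```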
Conclusions
This study is centered on terraces of food and beverage businesses located in the public realm, with a particular focus on the energy use of heating devices installed during the winter months. We analyzed a set of case studies that were used as a reference to calculate installed power values for heating devices and, through different scenarios, estimated the potential energy consumed. These values were compared with a set of baselines to give an indication of the importance of considering this outdoor dining from an energy consumption angle.
While residential buildings might be the focus of current energy concerns, this study points out that activities performed outdoors, such as consuming food or drinks in a terrace while socializing, could cause a radical increase in the overall power consumption associated with an individual's daily activities [39].
In addition, the current COVID-19 pandemic has added more urgency to the study of this practice, as the presence of outdoor dining is dramatically increasing everywhere. The biggest challenge is found in cities where the outdoor climate conditions will require heating devices for a long period of time in the year.
In order to guarantee an overall sustainable trajectory in our ways of living, it is important to anticipate and monitor new practices that could become a challenge to our environmental objectives. Therefore, appropriate regulations are needed in order to ensure that the comfort of the user in terraces is not provided at the expense of unnecessary energy consumption.
This study also aims to highlight how current regulations tend to be limited in their approach. There is a clear difference between regulations that set up a limit value of installed power and regulations that specify features that would guarantee the creation of certain environmental conditions. The Codigo Técnico de la Edificación (CTE) [34] is the latter type of regulations, while the Ordenança de terrasses, current municipal regulations on terraces [26], is the former type.
A possible improvement could be basing the regulations on outdoor air temperature instead of only relying on values weighted by square meter [40]. Although the consumption would still be high, turning heaters off in the middle of the day, when the temperature rises, would be a feasible strategy to reduce it. The best option, however, would be to base the entire approach to providing comfort on the environmental conditions at hand and the needs of users, considering using radiant solutions, instead of heating the air of the street. When efforts are directed toward achieving higher sustainability and responsible use of energy, it is important to control spaces that, even though privately owned, are in the public space.
This work aims to open this pathway by showing the significance of the energy consumption in terraces and by pointing out the research directions that should be followed in order to achieve a comprehensive understanding of the topic.
With more means and by involving more researchers and institutions, we aim to start finding the answers to the questions we have highlighted. Is this extensive use of heating devices sustainable in outdoor terraces in Barcelona? Is the real objective of these devices to improve the thermal conditions of the users, or is there an underlying commercial interest, expressed through a warm visual image projected to the public? If the goal is an improvement of the thermal conditions, would it not be better to use radiant systems that modify the radiant temperature instead of the air temperature? If the main objective is psychological, could we use other visual effects with a similar "warm" image but much lower consumption?
Terraces have been and will continue to be key elements of the public realm, with the capacity to bring people together and activate the streetscape. More recently, the focus has shifted to allowing social interactions with access to open air and with safety and health measures. In any case, terraces remain an element that can be strategically used to benefit the public realm and urban experience. Therefore, it is important that we study how to improve them at many different levels, from social justice to wellness.
Energy consumption and human comfort should be at the center of focus, and the fact that scarce resources are directed towards "heating the street" should be a major concern.
Author Contributions: The different knowledge and experience of each author have equally contributed to the development and final version of this article. All authors collectively designed the methodology, performed the data collection, analyzed the data and wrote the paper with equal responsibility. All authors have read and agreed to the published version of the manuscript. | 2021-02-04T14:01:54.486Z | 2021-01-16T00:00:00.000 | {
"year": 2021,
"sha1": "b4369ea86e0b61ea96f736811f8e7768f96be227",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/2/865/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f1a7fc3f4afe32153b30f02b78325dff692d8355",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Business"
]
} |
247173501 | pes2o/s2orc | v3-fos-license | Prognostic Impact of In-Hospital Use of Mechanical Cardiopulmonary Resuscitation Devices Compared with Manual Cardiopulmonary Resuscitation: A Nationwide Population-Based Observational Study in South Korea
Background and Objectives: This study analyzed the prognostic impact of mechanical cardiopulmonary resuscitation (CPR) devices in out-of-hospital cardiac arrest (OHCA) patients, in comparison to manual CPR. Materials and Methods: This study was a nationwide population-based observational study in South Korea. Data were retrospectively collected from 142,905 OHCA patients using the South Korean Out-of-Hospital Cardiac Arrest Surveillance database. We included adult OHCA patients who received manual or mechanical CPR in the emergency room. The primary outcome was survival at discharge and the secondary outcome was sustained return of spontaneous circulation (ROSC). Statistical analysis included propensity score matching and multivariate logistic regression. Results: A total of 19,045 manual CPR and 1125 mechanical CPR cases (671 AutoPulseTM vs. 305 ThumperTM vs. 149 LUCASTM) were included. In the matched multivariate analyses, all mechanical CPR devices were associated with a lower ROSC than that of manual CPR. AutoPulseTM was associated with lower survival in the multivariate analysis after matching (aOR with 95% CI: 0.57 (0.33–0.96)), but the other mechanical CPR devices were associated with similar survival to discharge as that of manual CPR. Witnessed arrest was commonly associated with high ROSC, but the use of mechanical CPR devices and cardiac origin arrest were associated with low ROSC. Only target temperature management was the common predictor for high survival. Conclusions: The mechanical CPR devices largely led to similar survival to discharge as that of manual CPR in OHCA patients; however, the in-hospital use of the AutoPulseTM device for mechanical CPR may significantly lower survival compared to manual CPR.
Introduction
Through recent bioengineering developments and their expansion into various medical fields, advanced medical equipment is being used to improve the quality of cardiopulmonary resuscitation (CPR). In airway management, video laryngoscopes significantly improve intubating performance, including improving the glottic view [1,2]. Mechanical CPR devices perform automatic chest compression (CC) for cardiac arrest patients [3]. As an alternative to manual CPR, the 2020 American Heart Association (AHA) CPR guidelines outline that mechanical CPR devices are indicated for use in special circumstances, such as for coronary angiography, the implementation of extracorporeal membrane oxygenation, or patient transportation via ambulances or helicopters [4].
Generally, mechanical CPR devices can be classified into three different CC mechanisms [5,6]. First, the AutoPulse TM device (AutoPulse ® Resuscitation System Model 100, ZOLL ® , Chelmsford, CA, USA) performs automatic CC using a load-distributing band with two arms that connect from the backboard [7]. When the whole rib cage, including the sternum, is compressed with the load-distributing band, blood flows to the heart and the whole body. Second, the Thumper TM device (Thumper Model 1007CCMII Mechanical CPR System, Michigan instruments Inc., Grand Rapids, MI, USA) performs CC using the driving force generated by compressed oxygen with one connecting arm from the backboard [8]. The piston of the Thumper TM device mimics the mechanism of manual CC and can directly compress the heart over the sternum. Third, the LUCAS TM device (LUCAS TM 2 Chest Compression System, JOLIFE AB Inc., Lund, Sweden) uses the driving force generated by electricity to perform CC using two connecting arms and a suction cup; the device can directly compress the sternum and induce active decompression via the suction cup [9].
In a recent systematic review, the AutoPulse TM device led to a higher incidence of pneumothorax and subcutaneous hematoma than manual CPR; however, the LUCAS TM device has been shown to be equivalent to manual CPR [10]. The difference in the CC mechanism between these mechanical CPR devices may affect patient outcomes, such as the return of spontaneous circulation (ROSC) or survival. Therefore, our aim was to investigate the prognostic impact of three mechanical CPR devices used in-hospital on the outcomes of out-of-hospital cardiac arrest (OHCA) patients, compared with manual CPR, using nationwide surveillance data from South Korea.
Study Design
This was a retrospective nationwide population-based observational study that used data from the Out-of-Hospital Cardiac Arrest Surveillance (OHCAS) database from the Korea Disease Control and Prevention Agency (KDCA) in South Korea (http://kdca.go.kr/, accessed on 1 March 2021). Over the period from 2012 to 2016, all acute OHCA patients transferred to medical institutions via emergency medical services were included in this study. Approximately 30,000 patients per year and 600 medical institutions were included. KDCA investigators visited the medical institutions to review patient medical records and verify several items according to the Utstein Style and the Resuscitation Outcome Consortium Project.
Participants
The study population included adult patients (18 years of age and older) who had been witnessed as OHCA patients between January 2012 and December 2016. The intervention group included patients who received mechanical CPR in the emergency room using one of three CPR devices (AutoPulse TM , Thumper TM , or LUCAS TM ). The control group included patients who received manual CPR. During pre-hospital CPR, the paramedics performed manual CPR for the patients in both groups. In addition, no patients received CPR using mechanical CPR devices until their arrival to the medical institution. Exclusion criteria were: trauma, patient with ROSC prior to arrival at the medical institution, death on arrival, patients with "do not resuscitate" orders, patients under 18 years or age, and patients who were transferred to another medical institution after emergency room management.
Outcome Measures
The primary outcome was survival at hospital discharge. The secondary outcome was the sustained ROSC (>20 min) in the emergency room.
Data Extraction
We extracted covariates to compare and analyze the survival and ROSC of patients in the intervention and control groups. The results were analyzed as follows: (1) Mechanical CPR using AutoPulse TM versus manual CPR; (2) mechanical CPR using Thumper TM versus manual CPR; and (3) mechanical CPR using LUCAS TM versus manual CPR. The covariates included prognostic factors that had been reported to be significantly related to the survival and ROSC of arrest patients in previous research [3,4,10]. More specifically, the following covariates were included and analyzed in the univariate analysis: age, sex, location of the cardiac arrest (public vs. non-public), bystander CPR, cause of arrest (cardiac vs. non-cardiac), initial cardiac arrest rhythm during CPR (shockable vs. nonshockable), transport time to the medical institution, percutaneous coronary intervention (PCI), target temperature management (TTM), pacemaker, and extracorporeal membrane oxygenation (ECMO).
Statistical Analyses
The categorical variables were analyzed using Pearson's Chi-squared and Fisher's exact tests. Continuous variables were analyzed using an independent samples t-test for parametric data and the Mann-Whitney U-test for non-parametric data. A Shapiro-Wilk test was used to assess data normality. Propensity score matching (PSM) analysis was used to adjust for the covariates and lower their confounding effects for both the control and experimental groups. Since matching cannot be performed when there are missing values, all records with missing values for any variable were omitted prior to matching, and the PSM model was developed including all variables with no missing values. Matching was performed for each of the three devices separately, and 1:1 matching was conducted without replacement. Multivariate analysis using logistic regression was additionally performed using all statistically significant covariates from the univariate analysis; any variables with p < 0.05 in the univariate analyses were included in the regression. Logistic regression with backward elimination was performed for all significant factors in the univariate analysis. Following the stepwise elimination of factors in the regression, only the factors that optimize the model's coefficient of determination remained. The p-value criterion for covariate entry was 0.2. All data analyses were performed using R (version 4.1.1, The R Foundation for Statistical Computing).
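The following sketch illustrates the general shape of this analysis pipeline: a propensity model, greedy 1:1 nearest-neighbour matching without replacement, and a logistic regression on the matched sample. It is written in Python with scikit-learn purely for illustration; the study itself was carried out in R, and the toy data, variable layout and the simple matching routine below are our own simplifications, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: rows are patients, columns are covariates (e.g., age, witnessed
# arrest, shockable rhythm); `treated` flags mechanical CPR, `outcome` is ROSC.
n = 2000
X = rng.normal(size=(n, 3))
treated = rng.integers(0, 2, size=n)
outcome = (rng.random(n) < 0.3).astype(int)

# 1) Propensity score: probability of receiving mechanical CPR given covariates.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score, without
#    replacement: each treated patient takes the closest unused control.
treated_idx = np.where(treated == 1)[0]
control_idx = list(np.where(treated == 0)[0])
pairs = []
for i in treated_idx[: len(control_idx)]:
    j = min(control_idx, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    control_idx.remove(j)

matched = np.array([k for pair in pairs for k in pair])

# 3) Outcome model on the matched sample: treatment effect adjusted for covariates.
Xm = np.column_stack([treated[matched], X[matched]])
fit = LogisticRegression(max_iter=1000).fit(Xm, outcome[matched])
print("adjusted odds ratio for mechanical CPR:", np.exp(fit.coef_[0][0]).round(2))
```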
Ethics Statement
The local ethics committee approved this study (Kangnam Sacred Heart Hospital's Institutional Review Board No. HKS 2018-10-020); the need for informed consent was waived off due to the study's retrospective nature and the use of anonymous clinical data. The KCDC approved the use of the data for this study. In addition, the methodology fulfilled the criteria of the Strengthening the Reporting of Observational Studies in Epidemiology checklist [11].
Study Subject Characteristics
In this study, mechanical CPR using three different mechanical CPR devices (AutoPulse TM, Thumper TM, and LUCAS TM) was analyzed, in comparison with manual CPR. The specifications of the mechanical CPR devices are shown in Table 1. AutoPulse TM is characterized by CC performed by a compression band, and Thumper TM is characterized by CC performed by oxygen or air pressure. The compression tool in the LUCAS TM device can be absorbed in the form of a cup shape, such that it adheres to the patient's chest surface to induce active decompression.

A total of 142,905 OHCA patients were registered in the database, all of which were evaluated for their eligibility for inclusion in this study. After excluding 32,293 patients with missing data, 20,170 patients were included in our analysis (Figure 1). The number of patients in each of the four comparison groups was as follows: 19,045 patients received manual CPR; 671 patients received mechanical CPR with AutoPulse TM; 305 patients received mechanical CPR with Thumper TM; 149 patients received mechanical CPR with LUCAS TM.

The clinical characteristics of the patients from the unmatched data in the manual and mechanical CPR groups are summarized in Supplementary Table S1. In the unmatched univariate analysis, there was a significant difference between the mechanical and manual CPR groups for some covariates. In the AutoPulse TM vs. manual CPR comparison, the significant covariates were age, arrest rhythm, PCI, and pacemaker (p = 0.016, 0.010, 0.009, and 0.017, respectively). In the Thumper TM vs. manual CPR comparison, the only significant covariate was TTM (p = 0.028). In the LUCAS TM vs. manual CPR comparison, the significant covariates were bystander CPR and ECMO (p = 0.005 and <0.001, respectively). Comparing each mechanical CPR device with manual CPR, sustained ROSC and survival at discharge were significantly higher with manual CPR (p for AutoPulse TM < 0.001 and 0.028; p for Thumper TM < 0.001 and 0.011; p for LUCAS TM = 0.003 and 0.048, respectively).
Matched Univariate Analysis
For the mechanical CPR with AutoPulse TM and manual CPR comparison, 1:1 PSM was applied to the three imbalanced covariates (i.e., age, arrest rhythm, PCI); the results are shown in Table 2. Both groups equally included 671 patients. Patients in the mechanical CPR with AutoPulse TM group showed a higher frequency of witnessed cardiac arrest (62.9 vs. 56.6%, p = 0.022), TTM rates (7.0 vs. 3.3%, p = 0.003), and ECMO (2.8 vs. 1.0%, p = 0.029), compared to manual CPR. Other covariates did not significantly differ across both groups. In terms of outcomes, there was a lower rate of sustained ROSC in the AutoPulse TM group (30.3 vs. 35.8%, p = 0.037), but there was no significant difference between groups in survival at discharge (4.9 vs. 6.3%, p = 0.342). For the mechanical CPR with Thumper TM and manual CPR comparison, 1:1 PSM was applied to the imbalanced covariate (TTM). Both groups included 305 patients. Patients in the mechanical CPR with Thumper TM group showed a higher rate of arrest in a public location compared to the manual CPR group. Other covariates did not significantly differ across both groups. With respect to outcomes, the Thumper TM group showed a lower rate of sustained ROSC compared with manual CPR (20.3 vs. 36.1%, p < 0.001). There was no significant difference across groups for survival at discharge (3.3 vs. 4.6%, p = 0.532).
For the mechanical CPR with LUCAS TM and manual CPR comparison, 1:1 PSM was applied to the two imbalanced covariates (bystander CPR and ECMO). Both groups included 149 patients. There were no imbalanced covariates in either group after PSM. In terms of outcomes, the LUCAS TM group showed a lower rate of sustained ROSC compared to manual CPR (27.5 vs. 46.3%, p = 0.001); however, there was no significant difference across groups for survival at discharge (2.7 vs. 5.4%, p = 0.377).
Matched Multivariate Analysis
Across all three mechanical CPR devices, the use of mechanical CPR devices, witnessed arrest, and cardiac origin arrest were common significant predictors for sustained ROSC (Figure 2 and Table 3). Witnessed arrest was significantly associated with high ROSC, but the use of mechanical CPR devices and cardiac origin arrest were associated with low ROSC. With respect to survival at discharge, TTM was the only significant predictor for high survival across all three types of mechanical CPR devices (Figure 2 and Table 4).

In the multivariate analysis of matched cases, mechanical CPR with AutoPulse TM and cardiac origin arrest showed a low sustained ROSC (aOR with 95% CI: 0.74 (0.58-0.93) and 0.38 (0.26-0.57), respectively). However, witnessed cardiac arrest showed a high sustained ROSC (aOR with 95% CI: 2.05 (1.60-2.62); Table 3). Additionally, the use of AutoPulse TM and ECMO was associated with lower survival at discharge (aOR with 95% CI: 0.57 (0.33-0.96) and 0.03 (0.00-0.34), respectively; Table 4). Conversely, the following factors were associated with higher survival: witnessed cardiac arrest, arrest rhythm, PCI, TTM, and ECMO (aOR with 95% CI reported in Table 4).

In the analysis between mechanical CPR with Thumper TM and manual CPR, the use of Thumper TM and cardiac origin arrest were significantly associated with low sustained ROSC (aOR with 95% CI: 0.43 (0.30-0.63) and 0.29 (0.15-0.54), respectively; Table 3); however, witnessed cardiac arrest was significantly associated with high sustained ROSC (aOR with 95% CI: 2.38 (1.57-3.60)). The use of Thumper TM as a mechanical CPR device was not significantly related to survival at discharge because it was removed from the final regression model (Table 4). Three factors were significant predictors for high survival: shockable arrest rhythm, TTM, and pacemaker (aOR with 95% CI: 2.92 (1.09-7.79), 6.33 (1.75-22.98), and 15.95 (2.33-109.00), respectively).
In the analysis between mechanical CPR with the LUCAS TM device and manual CPR, the use of LUCAS TM and cardiac origin arrest were significantly associated with low sustained ROSC (aOR with 95% CI: 0.45 (0.27-0.75) and 0.14 (0.05-0.41), respectively; Table 3). Moreover, witnessed arrest was the only significant predictor of high sustained ROSC (aOR with 95% CI: 2.05 (1.19-3.55)). The use of LUCAS TM as a mechanical CPR device was not significantly related to survival at discharge because it was removed from the final regression model (Table 4). However, young age, TTM, and pacemaker were significantly associated with high survival (aOR with 95% CI: 0.96 (0.92-1.00), 6.51 (1.65-25.74), and 14.20 (1.48-136.61), respectively).
Discussion
This study is a nationwide population-based observational study. OHCA patients who experienced three different types of mechanical CPR devices used in-hospital were compared to those who received manual CPR; their ROSC and survival rates were analyzed. In the univariate analysis after PSM, all three mechanical CPR devices showed lower sustained ROSC than that of manual CPR. Survival at discharge after the use of these mechanical CPR devices was equivalent to that of patients who received manual CPR. In the multivariate analysis for survival, in-hospital procedures such as TTM, PCI, pacemaker, and ECMO were significant prognostic factors, even in OHCA patients. These results may provide additional scientific evidence for the clinical importance of post-cardiac arrest care in OHCA patients.
Several studies have compared in-hospital mechanical CPR against manual CPR [12][13][14][15]. Hayashida et al. utilized national data from Japan and showed that mechanical CPR was associated with significantly worse outcomes [12]. This study attributed the result to the long no-flow period caused by the transition from manual CPR to the device, as well as the device's lack of real-time feedback on depth, rate, recoil, and cycle. Several CPR devices were included in that study; however, no subgroup analysis was conducted. AutoPulse TM requires a long time to apply to an arrest patient due to its complex and heavy structure. Other machines, on the other hand, have the potential to reduce the time it takes to apply them to a patient. Furthermore, because the mechanisms for compression depth, rate, and decompression differ between machines, each device must be compared with manual CPR independently.
In this study, the enrolled OHCA patients received manual CPR during the pre-hospital stage. All three types of mechanical CPR were in-hospital CPR performed in the emergency room. Each device has its own unique features and characteristics (Table 1). An automatic chest compression device's compression movement is similar to the mechanism that successfully returns blood to the heart during manual chest compression [5], and each device may be classified according to the features of its compression mechanism [6]. To squeeze the entire ribcage, compress the heart, and increase blood flow, AutoPulse TM uses two load-distributing bands attached to the backboard [7]. Since the compression depth is set to 20% of the chest depth, the actual depth varies with the patient's chest depth, and compressions are delivered at a rate of 80 per minute. In Thumper TM, one connecting arm attached to the backboard directly compresses the heart on the sternum in a piston motion, with oxygen as the driving force [8]. Compression is performed at a depth of 5-6 cm and a rate of 100 per minute. LUCAS TM utilizes two connecting arms linked to the backboard and a suction cup to press in a piston motion, and the cup can induce active decompression [9]. When the overall chest depth is less than 18.5 cm, chest compressions are conducted at a depth of 5 cm or less.
The mechanical CPR with AutoPulse TM has been known to have advantages in terms of improving the outcomes of cardiac arrest patients, compared to manual CPR. First, the AutoPulse TM increases the diastolic blood pressure of cardiac arrest patients during CC compared to manual CPR, since it can continuously perform CC via band compression [16]. Nevertheless, there was insufficient evidence that this physiologic effect leads to better survival than manual CPR [16]. Second, the AutoPulse TM has an additional advantage in the pre-hospital transport of cardiac arrest patients, since CC can be maintained at an appropriate depth and rate without interruption in an ambulance [17,18]. Despite the advantages of the AutoPulse TM , there are few reports that the AutoPulse TM is superior to manual CPR [19]. In large population studies in Europe and the United States, the survival of patients who received the AutoPulse TM was similar to that of patients who received manual CPR [20,21]. Another previous study additionally reported a 11.7% rate of serious organ injury occurring in mechanical CPR with AutoPulse TM (manual CPR: 6.3%) [4]; organ injury included pneumothorax, liver rupture and emphysema, and fractures of multiple ribs and the sternum were accompanied in 45.6% of patients (manual CPR: 41.3%) [4]. A recent meta-analysis also reported that manual CPR has a lower risk of pneumothorax and hematoma than mechanical CPR with the AutoPulse TM [10]. Our study also suggests that the in-hospital use of the AutoPulse TM for mechanical CPR could significantly lower survival compared to manual CPR.
Another factor of AutoPulse TM to consider is compression rate. CPR guidelines recommend a compression rate of 100-120 per minute for high-quality CPR [4]. The compression rate of AutoPulse TM is only 80 per minute, which is insufficient for high-quality CPR. Even when compared to the other machines (Thumper TM and LUCAS TM run at 100 ± 6 per minute and 102 ± 2 per minute, respectively), it is set low. This should be considered a variable that can influence the outcome when compared to manual CPR.
During mechanical CPR with Thumper TM , the high energy compressed by air is converted into energy that the piston can use to compress the chest, leading to an increase in cardiac output [22]. However, the high piston energy and one connecting arm of Thumper TM may also cause serious organ damage and induce a change in the CC point or non-vertical CC. Lin et al. showed no significant difference in early survival for OHCA patients performing mechanical CPR with Thumper TM versus manual CPR in the hospital [13].
The LUCAS TM device can perform high-quality CPR by maintaining a consistent compression point using two connecting arms and by inducing active decompression via the suction cup. Previous studies have reported that LUCAS TM can maintain a higher cardiac output, higher carotid blood flow, and higher cardiac perfusion pressure than manual CPR [23,24]. In a large population randomized trial of OHCA patients, such as the LINC and PARAMEDIC study, mechanical CPR with LUCAS TM demonstrated the same sustained ROSC and survival rate, compared to manual pre-hospital CPR by paramedics [25,26]. In a systematic review and meta-analysis comparing LUCAS TM and manual CPR, LUCAS TM produced the same ROSC and survival as manual CPR [8,21,27]. Another advantage of LUCAS TM is its capability of continuous CC during transport for emergency procedures, such as PCI [17,28]. In addition, mechanical CPR with LUCAS TM can significantly reduce the interruption time during CC, compared to manual CPR [29]. Nonetheless, some autopsy studies have shown that LUCAS TM can cause microfractures of the sternum or multiple ribs, which were not identified by post-mortem computed tomography [30]. In a multi-center study comparing manual CPR with mechanical CPR using LUCAS TM , the LUCAS TM group showed a similar frequency of sternal fracture (LUCAS TM : 58.3% vs. manual CPR: 54.2%, p = 0.555) and a higher frequency of rib fractures than that of manual CPR (LUCAS TM : 78.8% vs. manual CPR: 64.6%, p = 0.021) [31,32].
This study has several limitations. First, this study included a wide representation of the population of South Korea, but these results may differ in studies of another race or country. Second, despite our efforts to adjust for confounding factors using PSM and multivariate analysis, the data should be carefully interpreted due to the selection bias inherent in the nature of an observational study. Third, we only analyzed short-term survival provided by the OHCAS database, since this database did not provide long-term survival metrics; therefore, the effect of mechanical CPR may differ for long-term survival. Fourth, the sample size of the mechanical CPR group was not as large as that of the manual CPR group; even though PSM was applied to resolve the imbalanced sample sizes between groups, the effect of mechanical CPR should be reassessed in future large population studies.
Conclusions
The investigated mechanical CPR devices mostly led to survival similar to that of manual CPR among OHCA patients; however, the in-hospital use of the AutoPulse TM for mechanical CPR may significantly lower survival compared to manual CPR.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the Institutional Review Board (Institutional Review Board of the Kangnam Sacred Heart Hospital (IRB No. 2018-10-020)).
Informed Consent Statement:
Patient consent was waived due to the retrospective nature of the study.
Data Availability Statement:
The datasets generated during the current study are available from the corresponding author on reasonable request. | 2022-03-02T16:27:23.878Z | 2022-02-27T00:00:00.000 | {
"year": 2022,
"sha1": "466a1dc2f2a7595d85e09dd99bcf7538c285d00d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/58/3/353/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6c7c5c9a30c56d826feec66def1e9ec6e08bc1d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237004255 | pes2o/s2orc | v3-fos-license | Analysis of risk factors in patients with alcohol delirium who have been treated at the Riga Psychiatry and Narcology Center in 2018
Introduction Alcohol abuse can be a cause of psychotic disorders. In the International Classification of Diseases (ICD10) they are coded F10.4-F10.9. One of the potentially life-threatening complications is the development of alcohol delirium. Mortality rates in patients with untreated alcohol delirium reach 15%. It is extremely important to identify the risk factors that contribute to the development of delirium in time to ensure the most effective treatment and the patient's potential survival in the hospitalization and post-hospitalization phase. Objectives To analyze and evaluate the risk factors that have caused alcohol withdrawal with the development of delirium in patients admitted at the Department of Narcology of the Riga Psychiatry and Narcology Center in 2018. Methods This study is a retrospectively conducted cohort study based on data from inpatient medical records for patients diagnosed with alcohol-induced delirium at the Department of Narcology of the Riga Psychiatry and Narcology Center in the year 2018. Results In the Riga Psychiatry and Narcology Center, 113 patients were diagnosed with alcohol-caused delirium. That makes up 8% of all inpatients in the year 2018. Prevalence of the most significant risk factors among the 2018 inpatients with alcohol delirium:
High levels of aspartate aminotransferase - 95%
Tachycardia - 76%
High levels of alanine aminotransferase - 54%
Low platelet count - 51%
High systolic blood pressure - 50%
High diastolic blood pressure - 46%
Other somatic diseases - 45%
Previous history of detoxification - 37%
History of alcohol-induced seizures - 13%
Conclusions The study indicated that some easily determined parameters are potential clinical predictors for the development of delirium tremens. Disclosure No significant relationships.
Introduction: Background: Little is known about the modifications in gambling patterns during the Covid-19 pandemic, which has shown signs of increase, particularly for individuals with preexisting gambling problems. Objectives: Our aim was to assess the behaviour of a cohort of patients in the Trentino Region. Methods: A semi-structured questionnaire containing the Hamilton Depression Rating Scale as well as open-ended questions on gambling activities, specifically online gambling, was administered over the telephone. The survey was administered for two months over the lockdown period (April-June 2020) and took approximately 20 minutes to complete.
Results: About 50 responses were collected. Data are currently being analyzed and will be available at the time of the Congress. Conclusions: Will be shown at the time of the Congress. Introduction: Alcohol abuse can be the cause of psychotic disorders. In the International Classification of Diseases (ICD-10) they are coded F10.4-F10.9. One of the potentially life-threatening complications is the development of alcohol delirium. Mortality rates in patients with untreated alcohol delirium reach 15%. It is extremely important to identify in time the risk factors that contribute to the development of delirium, in order to ensure the most effective treatment and the patient's potential survival in the hospitalization and post-hospitalization phase. Objectives: To analyze and evaluate the risk factors that have caused alcohol withdrawal with the development of delirium in patients admitted to the Department of Narcology of the Riga Psychiatry and Narcology Center in 2018. Methods: This is a retrospective cohort study based on data from inpatient medical records of patients diagnosed with alcohol-induced delirium at the Department of Narcology of the Riga Psychiatry and Narcology Center in 2018. Results: At the Riga Psychiatry and Narcology Center, 113 patients were diagnosed with alcohol-induced delirium, representing 8% of all inpatients in 2018. Summary of the prevalence of the most significant risk factors among 2018 inpatients with alcohol delirium.
Conclusions:
The study indicated that some easily determined parameters are potential clinical predictors for the development of delirium tremens. Introduction: Goffman defined stigma as an "attribute that is deeply discrediting", and in the last two decades research on this subject has grown substantially. Opioids were ranked as the second most common form of illicit drug used worldwide, and there is consensus in the literature that opioid substitution therapy (OST), with methadone or buprenorphine, is the most effective treatment, although it remains underutilized. People with a history of substance use disorders (SUD) are widely stigmatized, which is a significant barrier to detection and treatment efforts. Care workers were cited as the second most common source of stigma.
Objectives: The aim is to review the literature on stigma as a significant barrier to OST and to present several potential strategies to reduce stigma. Methods: Non-systematic review of the literature, selecting scientific articles published in the last 5 years by searching the PubMed and Medscape databases using a combination of MeSH descriptors. The following MeSH terms were used: Opioid Use Disorder; Stigma; Opioid Substitution Therapy. Results: OST providers should actively bring up the topic of stigma in clinic appointments to determine whether the patient is experiencing stigma and, if so, whether it is adversely affecting their ability to continue in treatment. More active measures need to be taken to help reduce stigma, through public awareness campaigns at the local level, continuing education of health care providers regarding OST, and greater incorporation of family members into the program. Conclusions: In conclusion, further research is required to understand and address this issue. Introduction: Designer drugs, as a term, first came about in the 1980s. Most of these "designer drugs" contain synthetic cannabinoids and other psychoactive compounds that are difficult to detect.
Objectives: A 28-year-old man was referred to the hospital. Methods: CT brain and EEG were also normal. Results: During the 7 days before attending the hospital the patient showed strange behaviour; he seemed to be living in an altered reality. The day before admission he became irritable in the evening and reported that he could hear imperative voices of animals ("we together with squirrel and dolphin visited giraffe") and that someone had told him to jump from the window. These symptoms were temporary, and afterwards he was shocked when he realized that he was in a room. The patient had a history of periodic marihuana use over the past 5 years. There were no data indicating the use of other narcotic substances. On examination he was alert, sitting in the same place looking at one point, sometimes trying to find something or suddenly standing up and trying to go somewhere. He alternated between catatonic stupor and excitement, and his psychomotor activity was changeable. Over several days of observation, the patient repeatedly disrobed completely, standing or lying on the bed or suddenly freezing in one pose.
Conclusions: Taking into account the clinical symptoms the patient developed, his oneiroid catatonia was concluded to be connected with the use of "Spice" or a designer drug. Thus, designer drugs may sound like a safer alternative, but they can often lead to serious mental disturbances. Introduction: The objective laws governing the formation of emotional disorders, their frequency, and their clinical and psychopathological structure have been poorly studied until now. Three groups of patients were observed: 200 people with alcohol addiction, 180 people with opioid addiction, and 90 people with psychostimulant addiction.
Objectives: All of this has motivated our research, whose goal is to study patients' emotional state at the stages of psychosocial rehabilitation.
Methods: Signs of psychological and physical addiction, specific personality disorders, and a decrease in the level of social functioning were found in all of the observed patients. Psychodiagnostic research (performed according to the Hamilton, Spielberger and Hanin, and Buss-Durkee methods) showed a significant increase in depression and anxiety parameters, as well as in aggression levels, for all patients. | 2021-08-14T13:18:16.521Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "e65e12e90b765fb9ef482e8c49b31c8577d1bd49",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Cambridge",
"pdf_hash": "e65e12e90b765fb9ef482e8c49b31c8577d1bd49",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
87311145 | pes2o/s2orc | v3-fos-license | Comparative Genotype Analysis of Hepatitis A Virus: Two One-Year Studies in South Korea in 2002 and 2011
In Hyuk Baek, Hyun Woong Lee, Hyung Joon Kim, Mi-Ok Song, Seung-Kew Yoon, Jong-Hwa Park, In Sik Chung and Wonyong Kim Department of Microbiology, Chung-Ang University College of Medicine, Seoul; Department of Internal Medicine, Chung-Ang University College of Medicine, Seoul; Research Institute of Public Health and Environment, Seoul; Department of Internal Medicine, The Catholic University School of Medicine, Seoul; Department of Genetic Engineering and Graduate School of Biotechnology, Kyung Hee University, Suwon, Korea
INTRODUCTION
Hepatitis A virus (HAV) is the most common cause of acute infectious hepatitis, with about 1.5 million people infected annually worldwide (1). HAV generally spreads through fecal-oral transmission. Improvements in living standards and socio-economic growth have been associated with a reduction in the incidence of HAV infection in developing countries over the past decades. In regions where HAV is prevalent, children may transmit the virus even though they lack evident clinical features, allowing virus reservoirs to spread from developing countries into developed nations (2).
HAV is the sole member of the genus Hepatovirus within the family Picornaviridae (3). The virus genome is a 7.5 kb positive-strand RNA encoding a single polyprotein that is divided into three functional regions named P1, P2, and P3. P1 encodes the capsid proteins (VP1-VP4), whereas P2 and P3 encode non-structural proteins necessary for virus replication (4). HAV has been categorized into seven major genotypes (I-VII) according to the sequence of the VP1/2A junction (5). Genotypes I, II, III, and VII are known to be human strains, whereas genotypes IV, V, and VI are exclusively simian in origin (6). Genotypes I and III are the most common worldwide and are further divided into subgenotypes IA, IB, IIIA, and IIIB. Comparative analysis of this region has been suggested as a molecular epidemiologic marker to investigate the relatedness of individual strains with regard to parameters such as geographical or infectious origin.
Although a vaccination program is being implemented in South Korea and most adults had immunity to HAV until 20 years ago (7), there has been a significant recent increase in HAV infection (8~11). The majority of children and young adults have not been infected with HAV at early ages, making morbidity more likely upon infection.
Several recurring HAV epidemics have occurred,
and the infection is now recognized as a public health problem (12). The number of adult cases of acute hepatitis A has progressively increased during the last ten years, reflecting the changing epidemiology of HAV associated with rapid improvements in socioeconomic status (8,13). In this study, HAV fecal specimens obtained from patients in the years 2002 and 2011 in Seoul, South Korea, were analyzed with respect to the phylogenetic relatedness of their genotypes, and endemic patterns were determined by comparison with other geographically defined isolates.
Stool specimens
Among the 79 available samples, 34 fecal specimens were collected from patients diagnosed with an acute form of hepatitis A at the Catholic University School of Medicine in 2002, and 45 samples were collected from Chung-Ang University Hospital in 2011. Acute hepatitis A was defined by the presence of specific clinical symptoms accompanied by detection of IgM anti-HAV in serum samples using a commercially available assay according to the manufacturer's instructions (Diasorin, Saluggia, Italy). All specimens were diluted ten-fold with PBS (pH 7.4) and clarified by centrifugation at 10,000 g for 10 min. The supernatants were stored at -20℃ before use.
Primer design
Based on the genomic map (13), primers for HAV genotyping were chosen from the VP1/2A junction region.
RNA extraction for genotyping
HAV RNA was extracted using Trizol reagent (Gibco BRL Life Technologies, Grand Island, NY, USA). In brief, 0.3 ml of supernatant from a fecal suspension in PBS was mixed with 0.7 ml Trizol reagent and 0.2 ml chloroform/isoamyl alcohol (24:1). After centrifugation at 12,000 g for 10 min, the RNA in the aqueous solution was precipitated by adding an equal volume of isopropanol. The RNA precipitate was collected by centrifugation at 12,000 g for 10 min, washed with 70% ethanol, and dissolved in 20 μl RNase-free water.
Reverse transcription-polymerase chain reaction
Nested RT-PCR was performed in three steps.
Nucleotide sequencing and phylogenetic analysis of the VP1/2A junction region
The amplified cDNA (482 bp) was purified using the QIAquick purification kit (Qiagen GmbH, Westburg, Germany) and sequenced using the BigDye terminator cycle sequencing kit and an ABI PRISM 3730 automated DNA sequencer (Applied Biosystems). HAV-2895F was used as a sequencing primer for analysis of the 168-bp VP1/2A junction region. The resultant sequences were aligned using the CLUSTAL_X 1.81 program (15) with parameters set against the corresponding sequences of 41 other geographically defined HAV strains from the NCBI GenBank database. A rooted tree was constructed using the neighbor-joining algorithms (15) from the PHYLIP suite of programs (16). Evolutionary distance matrices were generated by the neighbor-joining method (17) and tree topology was evaluated using a bootstrap analysis of the neighbor-joining dataset with the SEQBOOT and CONSENSE programs from the PHYLIP package.
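For readers who want to reproduce a comparable analysis with open-source tools, the sketch below builds a neighbor-joining tree with bootstrap support from an already-aligned set of VP1/2A junction sequences using Biopython rather than the CLUSTAL_X/PHYLIP pipeline used by the authors; the input file name is hypothetical.

```python
# Illustrative sketch (not the authors' CLUSTAL_X/PHYLIP workflow): a neighbor-joining
# tree with bootstrap support from an existing alignment, using Biopython.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Aligned VP1/2A junction sequences (hypothetical file, FASTA format)
alignment = AlignIO.read("vp1_2a_aligned.fasta", "fasta")

# Distance matrix from pairwise identity, then a neighbor-joining tree
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, "nj")
nj_tree = constructor.build_tree(alignment)

# Majority-rule consensus over 1,000 bootstrap pseudo-replicates
consensus_tree = bootstrap_consensus(alignment, 1000, constructor, majority_consensus)

Phylo.draw_ascii(nj_tree)
```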
Nucleotide sequence accession numbers
The nucleotide sequences obtained in this study were deposited in GenBank under the accession numbers EU186418-EU186431 and JQ066751-JQ066763.
Description of HAV patients
In 2002 we analyzed 14 patients, comprising 8 (57.1%) males and 6 (42.9%) females, with a mean age of 29.1 years (range 14 to 46 years). In 2011 we analyzed 11 (64.7%) male patients and 6 (35.3%) female patients aged from 26 to 42 years. The patients were hospitalized mainly with symptoms of fatigue, nausea, and right-upper-quadrant (RUQ) discomfort, although two cases presented with manifestations of jaundice. All of the patients were sporadic cases. The mean peak alanine aminotransferase (ALT) level was 4,134 IU/ml (range, 970~7,020 IU/ml) and the mean peak bilirubin was 5.3 mg/dl (range, 1.1~10.1 mg/dl). One patient (a 26-year-old female) was diagnosed with fulminant hepatic failure. She became comatose within two days following admission, with a factor V level of 7% of normal.
She underwent emergency orthotopic liver transplantation.
Another patient (a 30-year-old male) had experienced jaundice without liver failure for two months following admission. The other patients recovered spontaneously, with a marked decline in ALT activity and clotting factor normalization within four weeks following admission.
Genotyping and phylogenetic analysis
The HAV genotype was identified in 14
Phylogenetic analysis of 2002 strains
Thirteen of the 14 strains isolated in 2002 were subgenotype IA. These strains were further divided into three clusters using the neighbor-joining algorithms and supported by high bootstrap values (Fig. 1a). CAU 02-08, CAU 02-09, and CAU 02-23 belonged to the cluster of mixed Japanese and Chinese strains whereas CAU 02-04, CAU 02-07, and CAU 02-21 were related to the Japanese cluster.
However, the other seven CAU isolates, CAU 02-01, CAU 02-03, CAU 02-06, CAU 02-10, CAU 02-11, CAU 02-12, and CAU 02-18, appeared to belong to a new cluster together with six Korean isolates of the KU98 series that were previously identified in 1998 and represented a Korean native cluster based on 94.0% shared nucleotide identity. The nucleotide sequences of these strains had 95.8~99.4% nucleotide identity with MS-1, the subgenotype IA prototype strain detected in the USA in 1964, but showed relatively low nucleotide identities (88.0~91.0%) with HM-175, the prototype subgenotype IB strain detected in China in 1989. Among these strains, the nucleotide sequences of CAU 02-11 and CAU 02-12, which were isolated from a family outbreak between brothers, were identical and showed no nucleotide variability with CAU 02-10. This cluster also included three Japanese strains, A68, A159, and FH-1.
Korean isolates close to the USA cluster were not found.
The Russian strains 1406, RUS_17_03, Kular-1982-2101, and RM 238, belonging to genotype I, showed a closer relationship to strains in the Korean cluster than to those in the Chinese, Japanese, and USA clusters, with >94.6% nucleotide identity.
Interestingly, CAU 02-16 appeared in a subgenotype IIIA lineage that was clearly separated from the other Korean strains. This strain was phylogenetically closest to India 90, detected in 1990, with 97.6% nucleotide identity, and to the prototype strain PA21, detected in Panama in 1980, with 97.4% nucleotide identity. It is notable that this is the first imported genotype IIIA strain identified in this country and that the patient had a history of travel to India before symptoms manifested.
Patients in South Korea diagnosed at the Department of Internal Medicine at the Catholic University Hospital in 2002 and at the Department of Internal Medicine at Chung-Ang University Hospital in 2011 were assessed. All participants provided written informed consent. For all cases, collected samples were analyzed under protocols approved by the Chung-Ang University College of Medicine IRB (Protocol #2009-13).
First, 5 μl of extracted RNA was denatured with 20 μM reverse primer (HAV-3433R) and 5 μl ddH2O at 70℃ for 10 min. Then, cDNA was synthesized from the denatured RNA in a reaction mixture containing 10 mM Tris-HCl (pH 8.3), 50 mM KCl, 0.01% gelatin, 2.5 mM MgCl2, 2.5 mM dNTPs, and 2.5 units AMV reverse transcriptase (Promega Corporation, Madison, WI, USA) at 42℃ for 60 min. The first PCR was performed to amplify the flanking region of the VP1/2A junction. Five microliters of cDNA was added to a PCR reaction mixture containing 10 mM Tris-HCl (pH 8.3), 50 mM KCl, 0.01% gelatin, 1.5 mM MgCl2, 0.2 mM of each dNTP, 20 μM of each primer (HAV-2741F and HAV-3433R), and 2.5 units Taq polymerase (Roche Diagnostics, Indianapolis, IN, USA). The PCR conditions consisted of one cycle of denaturation at 98℃ for 1 min, followed by 30 cycles at 95℃ for 1 min, 45℃ for 1 min, and 72℃ for 1 min 30 sec, and one final extension at 72℃ for 10 min in a GeneAmp PCR system 2700 (Applied Biosystems, Foster City, CA, USA). Nested PCR was performed using 5 μl of the first amplification product as a template and the primer set HAV-2895F and HAV-3376R. The reaction mixture was the same as for the first amplification, and the thermal cycling profile consisted of one cycle of 95℃ for 7 min, 30 cycles at 94℃ for 1 min, 50℃ for 1 min, and 72℃ for 1 min, and a final extension at 72℃ for 10 min. Both first and nested PCR products were analyzed by electrophoresis in 2% SeaKem LE agarose gels (FMC Bioproducts, Rockland, ME, USA) with ethidium bromide staining. The results were viewed using a GelDoc XR image-analysis system (BioRad Laboratories Inc, Hercules, CA, USA).
Figure 1. Phylogenetic trees based on 168-bp HAV VP1/2A junction region nucleotide sequences show genetic relationships between 4 Korean isolates and 37 HAV strains obtained from the GenBank database. The shaded boxes indicate the 16 genotype I CAU isolates and the unfilled boxes indicate geographically related clusters. The numbers at the nodes indicate the level of bootstrap support (%) based on neighbor-joining analysis of 1,000 re-sampled datasets; only values above 50% are given. Hepatitis A virus Cy145 (L07732) was used as the out-group. (a) 2002 data; (b) 2011 data | 2019-03-31T13:41:35.528Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "c2a880407b1f2f802c3b010ed12c42af83567637",
"oa_license": "CCBYNC",
"oa_url": "https://synapse.koreamed.org/upload/SynapseData/PDFData/0079jbv/jbv-44-252.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c2a880407b1f2f802c3b010ed12c42af83567637",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
265607398 | pes2o/s2orc | v3-fos-license | Asiaticoside Down-Regulates HIF-1α to Inhibit Proliferation, Migration, and Angiogenesis in Thyroid Cancer Cells
Background: Thyroid cancer (TC), the most prevalent endocrine malignancy, has been subjected to various treatment methods. However, the efficacy of asiaticoside (AC) for treating TC remains uncertain. Aims: To explore the impact of AC on TC and determine its potential mechanisms of action. Study Design: In vitro and in vivo cell line study. Methods: We evaluated the effects of AC on human TC cell lines, namely TPC-1 and FTC-133. Both in vitro and in vivo experimental validations were conducted. Results: AC significantly diminished the viability and proliferation of TC cells based on the CCK-8 assay and Edu staining findings. Migration and invasion assays revealed that AC effectively curtailed the migration and invasiveness of TC cells. The tube formation assay demonstrated that AC substantially impeded TC cell-induced angiogenesis. Western blot assay revealed that AC significantly reduced the expression levels of TRAF6, HIF-1α, and VEGFA, indicating that AC could potentially exert its anticancer effect by inhibiting the TRAF6/HIF1α pathway. Our in vivo experiments, which involved administering AC to BALB/c nude mice injected with TPC-1 cells, demonstrated significant inhibition of tumor growth and reduction in the expression of Ki-67, TRAF6, HIF-1α, and VEGFA. Conclusion: Our study highlights the significant inhibitory effect of AC on TC, offering fresh insights and potential drug candidates for TC treatment.
INTRODUCTION
Thyroid cancer (TC) is the most common endocrine malignancy, and its incidence has increased globally in the last two decades. More than 95% of TCs originate from follicular epithelial cells, with variable metastases to the lymph nodes. 1 Although surgical resection and radiometabolic therapy have been successfully used to treat a majority of TC cases, some TCs become refractory or metastasize. Angiogenesis, an unregulated and unrestricted biological process in tumors, induces tumor cell proliferation, local invasion, and hematogenous metastasis. 2 The transcription factor hypoxia-inducible factor 1α (HIF-1α) plays a crucial role in hypoxic responses, activating various downstream effectors, including vascular endothelial growth factor (VEGF). VEGF expression is associated with tumor progression in numerous cancers, including those of the pancreas, breast, cervix, and thyroid. Overexpression of VEGF is associated with an increase in the growth, progression, invasion, and metastasis of TC cells. 2 TNF receptor-associated factor 6 (TRAF6), a member of the TRAF family originally identified for its role in inflammatory signaling, has recently been implicated in cancer. Overexpression of TRAF6 regulates tumorigenesis and angiogenesis in several cancers, including those of the lung and pancreas. 3 Inhibition of the TRAF6/HIF-1α/VEGF pathway reportedly mitigates TC angiogenesis and metastasis. 4 Recent studies have highlighted the therapeutic potential of active compounds from herbs. Asiaticoside (AC), a pivotal biochemical constituent isolated from Centella asiatica, reportedly exhibits significant anticancer properties against certain malignancies. It is a prominent inhibitor of gastric cancer progression and simultaneously induces endoplasmic reticulum stress. 5 Moreover, AC can suppress the epithelial-mesenchymal transition (EMT) and stem cell-like traits of pancreatic cancer PANC-1 cells by inhibiting the activation of p65 and p38 MAPK. 6 In triple-negative breast cancers, AC effectively restricts the EMT by enhancing PPARG expression while simultaneously suppressing the P2RX7-facilitated TGF-β/Smad signaling pathway. 7 Furthermore, AC directly opposes cell proliferation and considerably reduces resistance to chemotherapeutic drugs in hepatocellular carcinoma cells. 8 Experimental studies have revealed that AC can subvert the malignancy of osteosarcoma cells, which is typically induced by macrophage polarization to the M2 phenotype, via inhibition of the TRAF6/NF-κB pathway. 9,10 However, the effects of AC on TC and its underlying mechanisms remain elusive. Herein, we aimed to evaluate the impact of AC on TC and determine its potential mechanisms of action.
Cells and culture
Two TC cell lines (TPC-1 and FTC-133) were obtained from the Cell Resource Center, Peking Union Medical College, which is part of the National Science and Technology Infrastructure (NSTI-BMCR; http://cellresource.cn). The cells were cultured in high-glucose DMEM (Invitrogen-Gibco, Carlsbad, CA, USA) with 10% fetal bovine serum (FBS), 100 U/ml of penicillin G, and 100 U/ml of streptomycin, and incubated at 37 °C in 5% CO2. Based on the treatment to be administered, the cells were divided into control or AC-treated groups, with AC concentrations of 5, 10, and 20 μM in the medium.
Animals
Male BALB/c nude mice (8-12 weeks old) were procured from Beijing Vital River Laboratory Animal Technology Co., Ltd. The mice were housed in a pathogen-free facility with ad libitum access to food and water, under controlled temperature conditions of 22 °C. All experimental procedures were conducted in accordance with the guidelines established by the Animal Care and Use Committee of the Second Affiliated Hospital of Wenzhou Medical University. The mice were allocated into two experimental groups: a control group and an AC-treated group (10 mg/kg).
Cell proliferation assay
For the cell proliferation assay, cells in the logarithmic growth phase were dispensed at 2,000 cells/well into 96-well plates and incubated overnight. Cell viability was assessed using a Cell Counting Kit-8 (CCK-8; Solarbio, Beijing, China). In each well, 10 μl of the CCK-8 reagent was added, and the plates were further incubated at 37 °C in a humidity-controlled environment. Absorbance at 450 nm was quantified for three days post-seeding using a microplate spectrophotometer (BioTek, Winooski, VT, USA).
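The paper does not spell out how the absorbance readings were converted to the viability values reported later; a common convention, shown here only as a minimal sketch with invented numbers, is to blank-correct each well and normalize to the untreated control.

```python
# Minimal sketch of a common CCK-8 normalization (not stated in the paper):
# relative viability = (A_treated - A_blank) / (A_control - A_blank).
# All values below are illustrative only.
import numpy as np

a_blank = 0.08                                # medium + CCK-8 reagent, no cells
a_control = np.array([1.21, 1.18, 1.25])      # untreated wells (replicates)
a_treated = np.array([0.74, 0.69, 0.71])      # AC-treated wells (replicates)

def relative_viability(sample, control, blank):
    """Viability of treated wells as a fraction of the untreated control."""
    return (sample - blank).mean() / (control - blank).mean()

print(f"Relative viability: {relative_viability(a_treated, a_control, a_blank):.2f}")
```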
Cell migration and invasion assays
For the migration assay, a scratch test was performed. Initially, a confluent monolayer of cells was scratched with a sterile pipette tip to create a wound. The cells were then washed with phosphate-buffered saline to remove detached cells and debris, and fresh serum-free medium was added. Cell migration into the wound area was monitored and photographed at 0, 12, and 24 hours after wounding.
For the invasion assay, Transwell chambers were utilized. The Transwell's upper compartment was pre-coated with 50 μl of ECM gel (Matrigel; Matrigel:serum-free medium ratio, 1:5). After 4 h, 6 x 10^4 cells in 200 μl of serum-free medium were seeded in the upper chamber. The lower compartment was filled with 800 μl of medium containing 10% FBS. After 24 h, non-invading cells in the upper chamber were removed. Cells that invaded through the membrane to the lower surface were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet (Beyotime, Shanghai, China). The invaded cells in five visual fields were counted using an inverted microscope (IX50; Olympus, Tokyo, Japan) at 100 x magnification.
Tube formation assay
The impact of TC cells on angiogenesis in HUVECs was assessed using a tube formation assay. ECM gel (Matrigel; Sigma-Aldrich) was added to chilled 96-well plates and incubated at 37 °C for 2 h for polymerization. Subsequently, HUVECs (PromoCell, Heidelberg, Germany) were co-cultured with the thyroid cell lines at 3.5 x 10^4 cells/well and incubated for 5 h at 37 °C. Thereafter, tube formation was imaged under an inverted microscope (AxioObserver D1; Zeiss). Tubule length and organization were analyzed using the Angiogenesis Analyzer (ImageJ plug-in; NIH).
Statistical analysis
Data were obtained from at least three independent experiments and are expressed as means ± standard deviations. Normality of data distribution was assessed using the Shapiro-Wilk test. For comparing more than two groups, one-way ANOVA was used for independent groups and repeated-measures ANOVA was used for dependent groups. Post-hoc analysis was performed using Tukey's HSD. Non-parametric data were analyzed using the Kruskal-Wallis and Dunn's tests. Differences in protein levels between normal thyroid and PTC samples were analyzed using the Mann-Whitney U test. A p-value of < 0.05 was considered statistically significant. All analyses were performed using GraphPad Prism (version 6.0) and SPSS (version 20.0).
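As an illustration only, the sketch below runs the same family of tests with open-source Python libraries instead of the GraphPad Prism and SPSS packages used by the authors; the group labels and values are invented for demonstration.

```python
# Illustrative sketch of the reported statistical workflow with open-source tools.
# Group values are hypothetical (e.g., normalized viability per replicate).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.00, 0.97, 1.03, 0.99])
ac_10um = np.array([0.71, 0.68, 0.74, 0.70])
ac_20um = np.array([0.45, 0.49, 0.43, 0.47])

# Normality check for each group (Shapiro-Wilk)
for name, grp in [("control", control), ("10 uM", ac_10um), ("20 uM", ac_20um)]:
    w, p = stats.shapiro(grp)
    print(f"Shapiro-Wilk {name}: p = {p:.3f}")

# One-way ANOVA across independent groups, then Tukey's HSD post hoc comparisons
f_stat, p_anova = stats.f_oneway(control, ac_10um, ac_20um)
values = np.concatenate([control, ac_10um, ac_20um])
labels = ["control"] * 4 + ["10 uM"] * 4 + ["20 uM"] * 4
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Non-parametric alternatives: Kruskal-Wallis (>2 groups), Mann-Whitney U (2 groups)
h_stat, p_kw = stats.kruskal(control, ac_10um, ac_20um)
u_stat, p_mw = stats.mannwhitneyu(control, ac_20um, alternative="two-sided")
print(f"ANOVA p = {p_anova:.4f}, Kruskal-Wallis p = {p_kw:.4f}, Mann-Whitney p = {p_mw:.4f}")
```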
AC inhibits TC cell proliferation
Changes in cell viability in the TPC-1 and FTC133 cell lines were examined 24, 48, and 72 h after AC treatment using CCK-8 (Figure 1a). Compared to the control group, cell viability was significantly reduced when AC concentrations reached 10 μM (p < 0.01) and 20 μM (p < 0.001). EDU staining revealed a significant decrease in the number of EDU-positive cells at 10 μM (p < 0.001) and 20 μM (p < 0.001) AC concentrations, indicating that AC inhibited TC cell proliferation in a dose-dependent manner (Figure 1b).
AC inhibits TC cell migration and invasion
The number or proportion of cells undergoing migration and invasion was significantly lower in the AC-treated groups than in the control group. Specifically, at 10 μM (p < 0.001) and 20 μM (p < 0.001) AC concentrations, there was a significant decrease in the number of cells undergoing migration and invasion (Figures 2a and 2b). The inhibitory effect of AC on TC cell migration and invasion was dose-dependent.
AC inhibits TC cell angiogenesis
The effect of AC on the angiogenic ability of the TPC-1 and FTC133 cell lines was examined. After HUVEC inoculation, tube formation was observed in the control group and the group treated with 5 μM AC. However, when the AC concentration reached 10 μM (p < 0.001) or 20 μM (p < 0.001), the number of tubes formed decreased significantly in a dose-dependent manner (Figure 3).
AC inhibits the TRAF6/HIF1α pathway
AC may exert its inhibitory effect through the TRAF6/HIF1α pathway. Thus, the expression levels of TRAF6, HIF-1α, and VEGFA were examined using Western blot analysis. Compared to the control group, the relative protein expression of TRAF6, VEGFA (Figure 4a), and HIF-1α (Figure 4b) in the TC cell lines was significantly reduced in a dose-dependent manner as the AC concentration increased.
AC inhibits TC progression via HIF1α
To test the hypothesis that AC exerts its effects via HIF-1α, different treatment groups were established: a control group, a group treated with 20 μM AC alone, and a group expressing HIF-1α and treated with 20 μM AC. Compared to the control group, treatment with AC significantly decreased cell viability (p < 0.001) (Figure 5a), cell invasion (p < 0.001) (Figure 5b), migration ratio (p < 0.001) (Figure 5c), and tube formation (p < 0.001) (Figure 5d). However, in the group where HIF-1α was expressed in addition to AC treatment, cell viability (p < 0.001), the number of EDU-positive cells (p < 0.001), migration ratio (p < 0.001), invasion (p < 0.001), and tube formation (p < 0.01) were significantly higher than in the group treated with AC alone.
AC inhibits tumor growth in vivo
An in vivo model of TC was established by injecting TPC-1 cells into BALB/c nude mice, and the effects of 10 mg/kg AC treatment were observed. In the group administered 10 mg/kg AC, the tumor size (Figure 6a), tumor volume at various time points (Figure 6b), and tumor weight (Figure 6c) were reduced compared with the control group. Immunohistochemical analysis revealed a decrease in the number of Ki-67-positive tumor cells in the AC-treated group (p < 0.001) (Figure 6d). Furthermore, the expression levels of TRAF6, HIF-1α, and VEGFA were lower in the AC-treated group than in the control group (p < 0.001) (Figures 6e, f).
DISCUSSION
TC is an endocrine tumor with a high global incidence, which has been continuously increasing over the past decades. 11 It encompasses multiple subtypes, including papillary, follicular, medullary, and undifferentiated TC, each contributing to its diverse pathobiology and clinical presentation. Although a favorable prognosis is expected in most patients with TC, certain subtypes and advanced stages are associated with poorer outcomes. Currently available therapeutic options, though numerous, require improvements in both efficacy and tolerability. Hence, the exploration of novel therapeutic strategies and medications remains a significant clinical endeavor. Increasing evidence suggests that AC plays an important role in several diseases. AC demonstrates ROS-reducing effects, such as inhibiting TGF-β1-induced MMT and ROS by activating Nrf2, 12 which protects the peritoneum and prevents peritoneal fibrosis. 13 AC also demonstrates an anti-inflammatory effect, modulating cutaneous allergic inflammation, and could be developed as a therapeutic drug for atopic dermatitis. 14 Recently, antitumor effects of AC have been identified in several cancers. AC reportedly inhibits gastric cancer progression and induces endoplasmic reticulum stress. 5 Additionally, AC inhibits EMT in triple-negative breast cancer. 7 Glycyrrhetinic acid extract, a triterpenoid, reportedly reduces specific protein expression in TC cells; thus, it could be used for the treatment of TC and other endocrine tumors. 15 However, the effect of AC on TC remains unclear.
To explore the effect of AC on TC cells, we treated two TC cell lines (TPC-1 and FTC133) with AC in vitro. We found that AC treatment significantly reduced the viability, proliferation, migration, and invasiveness of the TC cells; it also significantly reduced their angiogenic ability. Jiang et al. 16 identified the inhibitory effects of Rh2 (an anticancer-like molecule) on the migration and proliferation of TC cells. Deng and Sun 17 found that Platycodonopsis saponin D effectively blocks PTC progression and prevents cell proliferation by arresting the cell cycle and enhancing apoptosis. These findings are consistent with the inhibitory effects of AC identified in this study, including the inhibition of cell migration, invasion, and proliferation. Furthermore, we found that AC inhibited the tube-forming ability of TC cell lines. Tumor angiogenesis is defined as the proliferation of a vascular network that provides a supportive microenvironment rich in oxygen and nutrients for the optimal growth of tumors. He et al. 18 established that AC inhibited osteoclast formation and function, which is consistent with the findings in our study. However, AC has recently been found to promote tubule formation when used as a dressing. 19 This difference in the effect of AC on tubule formation may depend on the application scenario; in cancer, AC's ability to inhibit tubule formation is utilized.
In our study, we found that the expression levels of TRAF6, HIF-1α, and VEGFA were significantly decreased after treatment with AC. This suggests that AC may exert its inhibitory effect on TC via the TRAF6/HIF1α pathway. In the in vivo experiments, we injected TPC-1 cells into BALB/c nude mice and treated them with AC. AC significantly reduced the tumor volume and weight and improved the survival rate of the mice. Additionally, AC decreased the expression of TRAF6, HIF-1α, and VEGFA in tumor tissues. Zhang et al. 20 reported that KDM1A regulates TC stemness and promotes TC progression via the demethylation of HIF-1α. Song et al. 21 reported that the inhibition of HIF-1α/YAP signaling alone or in combination with other potential markers was effective against aggressive PTC. These findings are consistent with those of our study. Additionally, overall, AC effectively inhibited TC development both in vivo and in vitro. These results suggest that AC suppresses TC progression and induces a stress response in vivo by inhibiting the TRAF6/HIF1α pathway. This finding provides new clues and directions for subsequent in-depth studies.
Although our findings provide new insights into the understanding of AC as a potential anti-TC therapy, our study has some limitations that need to be further addressed. First, our study relied primarily on two TC cell lines (TPC-1 and FTC133). Although these cell lines are widely used in research, they may not fully represent the entire biology of TC. Future studies should include a wider variety of TC cell lines or patient-derived cells to verify the applicability of our results in a wider range of situations. 22 Second, our study relied heavily on in vitro experiments and in vivo models in nude mice. Although these models can mimic some of the properties of tumors in living organisms, they cannot fully simulate the complex physiological environment of the human body. 23 Therefore, our findings need to be further confirmed through clinical studies. Furthermore, we mainly focused on the role of the TRAF6/HIF1α pathway in the inhibition of TC by AC. However, there may be other pathways that are yet to be identified. Future studies should further explore the mechanism of action of AC, including other possible signaling pathways and molecular targets. Finally, although we found that AC has an inhibitory effect on TC, we could not clarify its dose-effect relationship in humans, possible side effects, and toxicity. This information is crucial for evaluating the potential of AC as an anticancer drug. However, our findings indicate that AC may be a promising anti-TC therapy. Further studies are needed to externally validate this and address the limitations of our study.
Our results indicate that AC has a significant inhibitory effect on TC, which was confirmed by both in vivo and in vitro experiments. AC significantly inhibited the viability, proliferation, migration, invasion, and angiogenesis of TC cells. This inhibitory effect may be exerted through inhibition of the TRAF6/HIF1α pathway. These findings provide new evidence for the application of asiaticoside as an anti-TC drug. However, future studies that perform more in-depth evaluations and externally validate AC's mechanism of action are required. | 2023-12-05T06:18:00.127Z | 2023-12-04T00:00:00.000 | {
"year": 2024,
"sha1": "914dbd9eade5ef877dab85a2d7933fddd6f7ca1b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.4274/balkanmedj.galenos.2023.2023-7-123",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64cf3bbc72d72bdbde9542bf098efcf62fc82c6e",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
247941272 | pes2o/s2orc | v3-fos-license | The Passenger Domain of Bartonella bacilliformis BafA Promotes Endothelial Cell Angiogenesis via the VEGF Receptor Signaling Pathway
ABSTRACT Bartonella bacilliformis is a Gram-negative bacterial pathogen that provokes pathological angiogenesis and causes Carrion’s disease, a neglected tropical disease restricted to South America. Little is known about how B. bacilliformis facilitates vasoproliferation resulting in hemangioma in the skin in verruga peruana, the chronic phase of Carrion’s disease. Here, we demonstrate that B. bacilliformis extracellularly secretes a passenger domain of the autotransporter BafA exhibiting proangiogenic activity. The B. bacilliformis-derived BafA passenger domain (BafABba) increased the number of human umbilical vein endothelial cells (HUVECs) and promoted tube-like morphogenesis. Neutralizing antibody against BafABba detected the BafA derivatives from the culture supernatant of B. bacilliformis and inhibited the infection-mediated hyperproliferation of HUVECs. Moreover, stimulation with BafABba promoted phosphorylation of vascular endothelial growth factor receptor 2 (VEGFR2) and extracellular-signal-regulated kinase 1/2 in HUVECs. Suppression of VEGFR2 by anti-VEGFR2 antibody or RNA interference reduced the sensitivity of cells to BafABba. In addition, surface plasmon resonance analysis confirmed that BafABba directly interacts with VEGFR2 with lower affinity than VEGF or Bartonella henselae-derived BafA. These findings indicate that BafABba acts as a VEGFR2 agonist analogous to the previously identified B. henselae- and Bartonella quintana-derived BafA proteins despite the low sequence similarity. The identification of a proangiogenic factor produced by B. bacilliformis that directly stimulates endothelial cells provides an important insight into the pathophysiology of verruga peruana. IMPORTANCE Bartonella bacilliformis causes life-threatening bacteremia or dermal eruption known as Carrion’s disease in South America. During infection, B. bacilliformis promotes endothelial cell proliferation and the angiogenic process, but the underlying molecular mechanism has not been well understood. We show that B. bacilliformis induces vasoproliferation and angiogenesis by producing the proangiogenic autotransporter BafA. As the cellular/molecular basis for angiogenesis, BafA stimulates the signaling pathway of vascular endothelial growth factor receptor 2 (VEGFR2). Identification of functional BafA protein from B. bacilliformis in addition to B. henselae and B. quintana, the causes of cat scratch disease and trench fever, raises the possibility that BafA is a common virulence factor for human-pathogenic Bartonella.
revealed the presence of two genes homologous to B. henselae bafA as ortholog candidates (Fig. 2A). Both proteins, encoded by BARBAKC583_RS02470 (RS02470) and BARBAKC583_RS02475 (RS02475), possessed predicted passenger domains containing a pertactin-like domain and a β-domain that match the characteristics of Pfam "pertactin" (Pfam accession number PF03212) and "autotransporter" (PF03797). On the other hand, unlike the B. henselae-derived BafA (BafA Bhe ), "AIDA" (PF16168) was not detected in the passenger domains of RS02470 and RS02475 (Fig. 2B). A multiple-sequence alignment showed that the deduced passenger domain of RS02470 shared 28.6% sequence identity with that of BafA Bhe with a 193-residue gap, whereas RS02475 shared 32.4% identity with BafA Bhe with an 87-residue gap and 25.9% identity with RS02470 with a 188-residue gap (Fig. 2C). Many conserved amino acid residues were found in the middle portions of the sequences (residues 221 to 493 in RS02470 or residues 154 to 442 in RS02475), but a number of gaps in the C-terminal regions were also identified, especially in RS02470.
To further verify which of the two candidates is the BafA ortholog, we performed synteny analysis of the genomic regions surrounding the bafA gene in B. henselae and B. bacilliformis. These regions are highly syntenic with each other (see Fig. S1 in the supplemental material). B. henselae harbored the bafA gene, tandemly located downstream of the two autotransporter genes (BH05490 and BH05500), and the pgsA and uvrC genes further downstream. B. bacilliformis also had the conserved gene order, but only two autotransporter genes, RS02470 and RS02475, were present in positions corresponding to the three autotransporter genes in B. henselae. Of these, RS02475 was located downstream, suggesting that it was most likely the bafA ortholog. To assess the cell-proliferative activity of these B. bacilliformis-derived proteins, we generated recombinant RS02470 and RS02475 passenger domains. Both recombinant proteins were successfully obtained after size exclusion chromatography as almost single bands with a few minor bands detected on Coomassie brilliant blue-stained SDS-PAGE gels (Fig. 2D).
Next, we examined the mitogenic activity of the recombinant proteins in an endothelial cell proliferation assay. Extracellularly added RS02475 passenger domain increased the number of HUVECs, as has been observed with BafA Bhe , while the RS02470 passenger domain did not affect cell proliferation (Fig. 2E). The cell proliferation activity of RS02475 was dose dependent, but the number of cells after treatment with RS02475 was lower than that of cells treated with BafA Bhe (Fig. 2F). In order to further ascertain the functionality of BafA Bba on cell proliferation, we performed heterologous BafA complementation with a bafA-disrupted mutant of B. henselae. The B. henselae mutant transformed with the BafA Bba expression plasmid effectively restored the cell-proliferative property to the same extent as the mutant with homologous complementation (Fig. 3). From these observations, we consider RS02475 the B. bacilliformis-derived BafA ortholog; thus, we refer to it as BafA Bba .
We next investigated whether BafA Bba is actually secreted by B. bacilliformis and stimulates cell proliferation of endothelial cells. Using a Western blot analysis, two bands of approximately 60 kDa and 37 kDa that reacted with the anti-BafA Bba antibody were observed in the recovered fraction of the supernatant of B. bacilliformis (Fig. 4A). On the other hand, no specific bands were detected from the culture supernatant of uninfected HUVECs or cells cocultured with B. henselae. The proliferative activity of exogenously added BafA Bba was completely blocked by the pretreatment with anti-BafA Bba polyclonal antibody but not anti-BafA Bhe antibody (Fig. 4B). Moreover, the anti-BafA Bba antibody also decreased B. bacilliformis-induced cell proliferation in part, but no inhibitory effect of anti-BafA Bhe antibody was observed (Fig. 4C). These results suggest that BafA Bba plays a key mitogenic role in B. bacilliformis-induced endothelial cell proliferation.
We then examined the proangiogenic activity of BafA Bba in a tube formation assay, an in vitro model of angiogenesis. In the presence of BafA Bba , HUVECs cultured for 20 h in type I collagen gels markedly formed tube-like structures, as has been observed with BafA Bhe (Fig. 5A). In reviewing the total tube area, length, and the number of branch points, which are indicators of tube formation, each value is comparable between BafA Bhe and BafA Bba (Fig. 5B), indicating that BafA Bba is capable of stimulating the angiogenic processes in endothelial cells.
B. bacilliformis-derived BafA upregulates the VEGFR2 signaling pathway. BafA Bhe has been shown to interact with VEGF receptor 2 (VEGFR2) and subsequently activate the downstream p44/p42 (ERK1/2) mitogen-activated protein kinase (MAPK) signaling pathway in endothelial cells (19). We assessed the effect of BafA Bba on this pathway by Western blot analysis. During the 60-min treatment with BafA Bba , the detectable signals of phosphorylated VEGFR2 were elevated over time, although the levels with BafA Bba were lower than those with VEGF-A 165 or BafA Bhe (Fig. 6A). Likewise, BafA Bba increased the phosphorylation level of ERK1/2 especially after 30 and 60 min of treatment (Fig. 6A and B). These observations suggest that BafA Bba functions as a VEGFR2 ligand in a manner similar to that of VEGF-A 165 and BafA Bhe .
(Figure legend fragment: ... (19); Bhe-ΔbafA/bafA Bba , 623-125 complemented with a bafA Bba -carrying plasmid. Two days after infection, cell numbers were measured from HUVECs stained with CellMask deep red and Hoechst 33342. Bars show means and SD (n = 6 biological replicates; circles). Statistical significance was determined using one-way ANOVA with Tukey's multiple-comparison test (ns, not significant; ***, P < 0.001).)
BafA Bba -induced cell proliferation requires binding to VEGFR2. In order to determine whether BafA Bba exhibits mitogenic activity through direct interaction with VEGFR2, we next examined the effect of VEGFR2 blocking or silencing on BafA-induced HUVEC proliferation. Pretreatment with anti-VEGF antibody, which efficiently neutralizes VEGF-A 165 activity, showed no effect on BafA-induced cell proliferation, whereas anti-VEGFR2 antibody effectively inhibited the BafA Bba activity (Fig. 7A). Furthermore, knockdown of VEGFR2 in HUVECs (Fig. 7B) completely abolished the potency of BafA Bba and VEGF-A 165 stimulation (Fig. 7C). These results strongly suggested that BafA did not induce VEGF secretion from HUVECs but bound directly to VEGFR2, and we thus attempted to detect the interaction by surface plasmon resonance (SPR) assays. The recombinant protein of Fc-tagged VEGFR2 extracellular domain (VEGFR2-ECD) was captured on protein A-immobilized sensor chips and tested for binding with gradual concentrations of VEGF-A 165 , BafA Bhe , and BafA Bba . All three analytes were found to readily interact with VEGFR2-ECD (Fig. 7D). The equilibrium dissociation constants (binding affinity [K D ]) of VEGF-A 165 and BafA Bhe binding to VEGFR2-ECD were calculated to be 6.91 ± 2.30 nM and 2.08 ± 0.68 nM, respectively. The K D value of BafA Bba , which was determined to be 91.3 ± 4.07 nM, represented significantly lower binding affinity than that observed for VEGF-A 165 and BafA Bhe . Taken together, these data indicate that the interaction between VEGFR2-ECD and BafA Bba exhibits 44-fold lower binding affinity than that with BafA Bhe .
(Figure legend fragment: Bars show means and SD (n = 8 biological replicates; circles). Statistical significance was determined using one-way ANOVA with Dunnett's multiple-comparison test (ns, not significant; **, P < 0.01; ***, P < 0.001).)
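For context, SPR-derived K D values such as these are conventionally obtained by fitting sensorgrams to a 1:1 Langmuir binding model; assuming that model, the reported ~44-fold difference follows directly from the ratio of the two K D values:

```latex
% Worked check of the reported affinity difference, assuming the usual 1:1
% Langmuir binding model used to fit SPR sensorgrams.
\[
  \mathrm{A} + \mathrm{B}
  \;\underset{k_{\mathrm{off}}}{\overset{k_{\mathrm{on}}}{\rightleftharpoons}}\;
  \mathrm{AB},
  \qquad
  K_{D} = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}},
  \qquad
  \frac{K_{D}(\mathrm{BafA_{Bba}})}{K_{D}(\mathrm{BafA_{Bhe}})}
  = \frac{91.3\ \mathrm{nM}}{2.08\ \mathrm{nM}} \approx 44 .
\]
```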
DISCUSSION
B. bacilliformis is a bacterial pathogen that is transmitted to humans by the bite of Lutzomyia sand flies, whose habitat is restricted to high-altitude valleys in the Andes Mountains of South America (23). Humans are the only known reservoir of B. bacilliformis, and infection results in Carrion's disease, a life-threatening bartonellosis. In the acute phase of Carrion's disease, known as Oroya fever, the bacteria infect erythrocytes and cause severe hemolytic anemia. Erythrocyte adherence and invasion involve several identified virulence factors, including flagella (24), the extracellular protein deformin (25), and invasion-associated locus proteins A and B (26). On the other hand, in verruga peruana, the chronic phase of Carrion's disease, the bacteria infect endothelial cells and cause their proliferation, resulting in dermal eruption. The presence of numerous microvessels in the skin lesions suggests that infection of endothelial cells triggers a local angiogenic response. This fact was experimentally demonstrated in studies showing that B. bacilliformis enhanced endothelial cell proliferation and that several factors were involved in this process. One of these factors was tissue plasminogen activator (t-PA), production of which was elevated in infected HUVECs and might be involved in the angiogenic process (21). As a bacterium-derived factor, GroEL, a heat shock protein of B. bacilliformis, was reported to play an important role in inducing endothelial cell proliferation (18). GroEL was secreted into the culture supernatant of B. bacilliformis and exhibited an increase in the number of HUVECs in a dose-dependent manner, and this cell proliferation was partially inhibited in the presence of anti-GroEL antibody. However, mitogenic activity in HUVEC cultures treated with the recombinant GroEL protein was not detected, and the authors therefore suggested that the effect of GroEL on HUVECs might be indirect and might involve other factors. In this study, we successfully identified a proangiogenic autotransporter from B. bacilliformis that directly stimulates vascular endothelial cells. The passenger domain of this autotransporter is considered a BafA ortholog, since this protein displayed enhancement of endothelial proliferation, capillary formation, and upregulation of VEGFR2 signaling despite having only 32.4% amino acid sequence identity with that derived from B. henselae. Pertactin-like domains are present in the passenger domain of BafA Bba as well as those derived from B. henselae and B. quintana; therefore, they presumably represent a conserved domain structure for the proangiogenic autotransporters of Bartonella. On the other hand, another autotransporter, RS02470, which showed 28.6% sequence identity with BafA Bhe , also contained a pertactin-like domain but lacked cell-proliferative activity. Therefore, we speculate that the pertactin-like domain is necessary, but not sufficient, for the proangiogenic activity of the BafA family autotransporter.
For endothelial cell proliferation and angiogenesis, activation of ERK1/2 in endothelial cells is one of the most important signal transduction pathways (27,28). BafA Bba evoked a dose-dependent increase in the number of HUVECs, but the percent increase of cell growth at higher concentrations of BafA Bba (100 or 500 ng/mL) was lower than that with BafA Bhe . In addition, relatively low phosphorylation levels of both VEGFR2 and ERK1/2 were detected in BafA Bba -treated cells compared to BafA Bhe -treated ones, suggesting that BafA Bba is modestly active in enhancing cell proliferation via upregulation of VEGFR2 signaling. The weak activity of BafA Bba is presumably attributed to its lower binding affinity for VEGFR2 than that of VEGF-A 165 or BafA Bhe , but blocking or silencing of VEGFR2 abolished the responsiveness of HUVECs to BafA Bba , suggesting that the direct binding to VEGFR2 is an essential event for BafA Bba to upregulate the downstream signals.
Recombinant BafA Bba exhibits a rather marginal mitogenic activity compared to the effect observed with BafA Bba on the infected cells. One possible explanation is that the purified recombinant BafA Bba was different in size from both protein fragments (approximately 60 kDa and 37 kDa) detected by Western blot analysis when using B. bacilliformis culture supernatant. This suggests that the native BafA secreted from B. bacilliformis may be processed at a different position(s) than the recombinant we prepared. The mismatch of the size of BafA Bba may have affected the activity. Another possibility is that BafA Bba is unstable, and its activity was reduced during preparation, perhaps during extraction of protein from E. coli cells by ultrasonication or the subsequent purification steps. In fact, when the BafA-disrupted mutant of B. henselae was heterologously complemented with BafA Bba , it showed the same level of cell proliferation activity as when it was complemented with BafA Bhe . This supports the possibility that BafA Bba is intrinsically as active as BafA Bhe .
In contrast to the weak cell proliferation activity, BafA Bba facilitated the tube formation of HUVECs cultured in collagen gel at levels comparable to BafA Bhe . For capillary morphogenesis, the expression of integrins α1β1 and α2β1 is elevated on the cell surface, and activation of Src kinase and Rho GTPase is a key intracellular event involved in tube morphogenesis (29)(30)(31). Moreover, activation of p38 MAPK has also been implicated as crucial in driving tubulogenesis of endothelial cells overlaid with type I collagen in a serum-free defined medium (32). It may be speculated that this tubulogenesis-related signaling with BafA Bba stimulation occurs at the same level as with BafA Bhe .
A polyclonal antibody against BafA Bhe displayed no inhibitory effect on the activity of BafA Bba , and conversely, anti-BafA Bba antibody was unable to suppress the BafA Bhe activity, indicating that these BafA proteins have distinct antigenic structures. Neutralizing antibody against BafA Bba incompletely blocked the HUVEC proliferation during B. bacilliformis infection. This suggests that other factors such as t-PA and GroEL are also involved in the proangiogenic capability of B. bacilliformis. In addition, recent studies reported that infection of HUVECs by live B. bacilliformis induced host cell secretion of epidermal growth factor (EGF) (33). EGF is a growth factor associated with angiogenesis of vascular endothelial cells, and EGF levels in serum samples showed a moderate positive correlation with B. bacilliformis bacteremia in Carrion's disease in Peru (34). Collectively, multiple factors, including BafA, likely contribute to the hyperproliferation of endothelial cells and the subsequent angiogenic process in response to B. bacilliformis infection.
In summary, we identified a novel mechanism whereby B. bacilliformis provokes endothelial cell proliferation and angiogenesis by producing the proangiogenic autotransporter BafA during infection. A previous study reported that tyrosine kinase inhibitors decreased the invasion of endothelial cells by B. bacilliformis (35). In addition to serving as a mitogenic factor that promotes angiogenesis, BafA may therefore be involved in the internalization of B. bacilliformis into endothelial cells, as BafA can upregulate the tyrosine kinase activity of VEGFR2 and its downstream signaling molecules. We believe that BafA functions as a crucial pathogenic factor that gives rise to pathological angiogenesis and contributes to infection of humans by B. bacilliformis. The genomic regions around the bafA gene are highly conserved between B. henselae and B. bacilliformis. In addition to this finding, the presence of bafA orthologs in many Bartonella species (19) indicates that bafA is one of the most characteristic genes in bartonellae which were inherited from the ancestral species. Based on these findings, the BafA protein family is likely to be a bacterial apparatus common to Bartonella species that facilitates their survival and proliferation in hosts.
MATERIALS AND METHODS
Bacterial culture. B. bacilliformis strain KC584 (ATCC 35686) was obtained from the American Type Culture Collection (Manassas, VA). B. bacilliformis was grown on Columbia agar with 5% defibrinated sheep blood (CSB; Becton Dickinson, Franklin Lakes, NJ) at 28°C. After 10 to 14 days of culture, the bacteria were collected from the culture plate and resuspended in medium 199 (M199; Thermo Fisher Scientific, Waltham, MA) supplemented with 10% fetal bovine serum (FBS; Biowest, France). For convenience, the bacterial number in the suspension was estimated to be 1 × 10^9 bacteria/mL at an optical density at 600 nm (OD 600 ) of 1.0. The details of the generation of heterologous BafA-complemented B. henselae are described in Text S1 in the supplemental material. Escherichia coli strains were grown on lysogenic broth (LB) agar or in the liquid medium (Becton Dickinson). When required, kanamycin was used at a final concentration of 25 mg/mL.
Endothelial cell proliferation assay. HUVECs purchased from PromoCell (Heidelberg, Germany) were cultured in endothelial growth medium 2 (EGM-2; PromoCell) at 37°C in a humidified atmosphere and 5% CO 2 . For evaluation of proliferation of infected cells, HUVECs (passage 6 to 9, 9,000 cells/cm^2) were plated onto a gelatin-coated 96-well plate with EGM-2. After 6 to 8 h of incubation, the medium was replaced with M199-10% FBS, and then the cells were infected with B. bacilliformis at the indicated MOI. For evaluation of the activity of BafA proteins, HUVECs were treated with indicated concentrations of BafA after changing the culture medium to M199-10% FBS. When the effect of antibodies was examined, anti-BafA Bhe antibody (15 mg/mL; generated in our previous study [19]), anti-BafA Bba antibody (15 mg/mL), anti-VEGF antibody (bevacizumab; Pfizer, 3 mg/mL), anti-VEGFR2 antibody (ramucirumab; Eli Lilly Japan; 3 mg/mL), normal rabbit IgG (15 mg/mL; Fujifilm Wako, Osaka, Japan), or human normal IgG (3 mg/mL; Fujifilm Wako) was used to treat cells for 30 min at 37°C prior to bacterial infection or addition of BafA or VEGF-A 165 (PeproTech, Cranbury, NJ). Two days after bacterial infection or BafA treatment, the cells were stained with CellMask deep red plasma membrane stain (1:5,000; Thermo Fisher Scientific) and NucBlue Live ReadyProbes reagent (Thermo Fisher Scientific) for 30 min at 37°C. The cells were then fixed with 4% paraformaldehyde for 15 min at room temperature and washed with phosphate-buffered saline (PBS) three times. The plate was imaged on an Opera Phenix high-content screening system (PerkinElmer, Waltham, MA) using the confocal setting with a 5× or 20× air objective. Cell numbers were measured from images of 4 (with the 5× objective) or 25 (with the 20× objective) fields in each well using Harmony 4.5 software (PerkinElmer).
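The authors quantified cell numbers with the Harmony 4.5 software; purely as an illustrative open-source alternative (not the pipeline used in the study), nuclei in a single Hoechst-stained field could be counted along the following lines, with the file name and size threshold being hypothetical.

```python
# Illustrative open-source sketch of nuclei counting from one imaging field
# (the study used Harmony 4.5; file name and parameters here are hypothetical).
from skimage import io, filters, measure, morphology

# One field of the nuclear (Hoechst) channel as a grayscale image
img = io.imread("hoechst_field.tif", as_gray=True)

# Otsu threshold to separate stained nuclei from background
mask = img > filters.threshold_otsu(img)

# Drop small debris, then label connected components and count them
mask = morphology.remove_small_objects(mask, min_size=50)
labels = measure.label(mask)
print(f"Nuclei detected in this field: {labels.max()}")
```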
Indirect coculture of HUVEC and B. bacilliformis. To coculture HUVECs with B. bacilliformis without direct contact, a Millicell 24-well cell culture plate (Merck Millipore, Burlington, MA) was used to separate them as previously described (22). B. bacilliformis cells cultivated on a CSB plate for 14 days were collected and suspended in M199/10% FBS to an OD 600 of 0.03. Two hundred microliters of the suspended B. bacilliformis cells was then added to filter plate wells, which were inserted into a receiver plate in which HUVECs were seeded at a density of 20,000 cells/well in M199-10% FBS. After 2 days in coculture, the Millicell filter plate was removed, and the proliferation of HUVECs was assessed as described under "Endothelial cell proliferation assay".
Bioinformatic prediction of BafA orthologs. The deduced BafA orthologs in the B. bacilliformis genome (NCBI accession number NC_008783.1) were searched against the sequence of the passenger domain of B. henselae-derived BafA autotransporter using the local BLAST program in CLC Main Workbench 21.0.3 software (Qiagen, Hilden, Germany) with an E value cutoff of <1 × 10^-50. Conserved domain analyses in these candidates were conducted using the Pfam 34.0 server (http://pfam.xfam.org/). Signal peptides were predicted by the SignalP-5.0 server (https://services.healthtech.dtu.dk/service.php?SignalP-5.0). The passenger domains of BafA Bhe , RS02470, and RS02475 were aligned and compared by CLC Main Workbench. The details of synteny analysis are described in Text S1 in the supplemental material.
Plasmid construction. Primers used in this study were obtained from Thermo Fisher Scientific. The plasmid vector for expression of BafA Bhe (pET-28b-BH513) was generated in our previous study (19). For construction of plasmids expressing the passenger domain of B. bacilliformis BafA ortholog candidates, the genomic DNA of B. bacilliformis was purified from 14-day-old cultures on CSB agar using a PureLink genomic DNA mini kit (Thermo Fisher Scientific). Using the genomic DNA as the template, the nucleotides encoding the 25th to 512th amino acids of BARBAKC583_RS02470 and the 25th to 498th amino acids of BARBAKC583_RS02475 were amplified by Platinum SuperFi DNA polymerase (Thermo Fisher Scientific) with the primer sets NheI-RS02470-Fw (TCTCGCTAGCTCAGATTGGTGTGATGTCGTG)-SalI-RS02470-Rv (GACTGTCGACTCATAAAGGATCTGGACTTGGCTGA) and NheI-RS02475-Fw (TCTCGCTAGCGA GGAAAAAAAGCAAGGGGG)-SalI-RS02475-Rv (GACTGTCGACTCATGAAAAATTAGCAGTCTGCATACTC), respectively. The PCR products were digested with NheI/SalI and inserted into the compatible sites of pET-28b (Sigma-Aldrich, St. Louis, MO) by using a Ligation-Convenience kit (Nippon Gene, Tokyo, Japan). The resultant plasmids were designated pET-28b-RS02470 and pET-28b-RS02475. All inserted sequences in the plasmids were confirmed by Sanger sequencing (Eurofins Genomics, Tokyo, Japan).
Expression and purification of recombinant proteins. E. coli BL21(DE3) transformed with pET-28b-BH513, pET-28b-RS02470, or pET-28b-RS02475 was grown at 20°C in LB medium containing kanamycin until the culture reached an OD 600 of 0.5 to 0.7. Protein expression was induced by adding 20 μM isopropyl-β-D-thiogalactoside (IPTG), and incubation was continued overnight at 13°C. The cells were harvested by centrifugation and resuspended in 25 mM Tris-HCl (pH 7.5), 500 mM NaCl, 30 mM imidazole with protease inhibitor cocktail (Nacalai Tesque, Kyoto, Japan). After disruption of the cells by sonication, the lysates were obtained by centrifugation and filtration with a 0.45-μm Minisart syringe filter (Sartorius, Göttingen, Germany). The crude extract was loaded onto an immobilized nickel affinity chromatography column (HisTrap HP; Cytiva, Marlborough, MA) equilibrated with the resuspension buffer. The column was washed with 25 mM Tris-HCl (pH 7.5), 500 mM NaCl, 30 mM imidazole, 5 mM ATP, 10 mM MgCl 2 , and 10% glycerol to remove contaminating proteins, then eluted with a gradient of 30 to 400 mM imidazole.
After concentration of the eluate with an Amicon Ultra-15 centrifugal filter unit (molecular weight cutoff [MWCO], 10,000; Merck Millipore), the concentrated sample was subjected to size exclusion chromatography with a Superdex 200 10/300 GL column (Cytiva), and the fractions that possessed the cell proliferation activity were pooled. The protein concentration was determined with reference to standards of bovine serum albumin using a bicinchoninic acid (BCA) protein assay kit (Fujifilm Wako). The purity of the fractions of each purification step was checked by SDS-PAGE using e-PAGEL HR 10% gel (ATTO, Tokyo, Japan) followed by staining with CBB Stain One Super (Nacalai Tesque). Endotoxin (lipopolysaccharide) levels in the purified protein samples were determined with a Pierce Chromogenic Endotoxin Quant kit (Thermo Fisher Scientific), which is based on the amebocyte lysate method, and confirmed to be <0.1 endotoxin unit (EU) per μg protein. The purified proteins were stored at -80°C until use in each experiment.
Detection of BafA Bba from culture supernatant. Recombinant BafA Bba passenger domain was used for antiserum production in a rabbit by Eurofins Genomics. Anti-BafA Bba polyclonal antibody was then purified by using rProtein A Sepharose Fast Flow (Cytiva). For detection of BafA Bba from culture supernatant, B. bacilliformis or B. henselae (3.0 × 10^8 bacteria/well) was cocultured with HUVECs in 1 mL of M199-0.1% FBS for 48 h in a humidified atmosphere and 5% CO 2 . Each culture was centrifuged to remove the cells, and the supernatants were then concentrated with an Amicon Ultra-15 centrifugal filter unit (MWCO, 10,000). The concentrated supernatants were subjected to SDS-PAGE using Bolt 10% bis-Tris gel and MOPS (morpholinepropanesulfonic acid) SDS running buffer (Thermo Fisher Scientific). The proteins in the gel were transferred onto a polyvinylidene difluoride (PVDF) membrane using an iBlot 2 gel transfer device (Thermo Fisher Scientific). The membrane was then directly stained for total-protein detection using a Revert total-protein staining kit (LI-COR, Lincoln, NE) and subsequently scanned using the 700-nm channel of the Odyssey CLx imaging system (LI-COR). After total-protein detection, the membrane was probed with rabbit anti-BafA Bba polyclonal antibody followed by IRDye 800CW goat anti-rabbit IgG secondary antibody (LI-COR) using an iBind Western system with an iBind fluorescent detection solution kit (Thermo Fisher Scientific). Finally, the reactive bands were detected using the 800-nm channel of the Odyssey imager.
Endothelial cell tube formation assay. To evaluate the proangiogenic activity of BafA in vitro, an endothelial tube formation assay was performed using collagen gel prepared with a collagen gel culturing kit with concentrated MEM culture solution (Nitta Gelatin, Osaka, Japan) as previously described (19). HUVECs (7 × 10^5 cells/well in a 48-well plate) were cultured between two layers of collagen gel in the presence of PBS, BafA Bhe (100 ng/mL), or BafA Bba (100 ng/mL). Basal EGM with 2% FBS containing PBS, BafA Bhe , or BafA Bba was added to the gels in each well, and then 20-h cultures were stained with 1 μM calcein acetoxymethyl (AM) (Dojindo, Kumamoto, Japan). After washing with Hanks' balanced salt solution (HBSS; Thermo Fisher Scientific), nine fields of fluorescence images in each well were collected using an Opera Phenix system with a 20× objective, followed by creation of global images. From these global images, the total tube area, total tube length, and the numbers of branch points from the tube-like structures were quantified using Harmony 4.5 software.
Western blot analysis. For analysis of VEGFR2 and ERK1/2 activation, HUVECs (passage 6 to 9, 10,000 cells/well) were seeded onto gelatin-coated 12-well plates with EGM-2. After 24 h of incubation, the medium was replaced with M199-10% FBS. The cells were starved for 15 h in the same medium and then stimulated with recombinant human VEGF-A 165 (20 ng/mL), BafA Bhe (200 ng/mL), or BafA Bba (200 ng/mL) for the indicated time at 37°C. The cells were harvested, lysed with 0.2 M NaCl, 1 mM EDTA, 1 mM dithiothreitol, protease inhibitor cocktail, and phosphatase inhibitor cocktail (Nacalai Tesque), and sonicated by using a Bioruptor sonicator (Sonicbio, Kanagawa, Japan). After centrifugation to remove cell debris, each cell lysate was then subjected to SDS-PAGE using Bolt 10% bis-Tris gel and MOPS-SDS running buffer. Following electrophoresis, the proteins were transferred onto PVDF membranes using the iBlot 2 gel transfer device.
RNA interference. Silencer Select small interfering RNAs (siRNAs) against VEGFR2 (s7823; Thermo Fisher Scientific) and Silencer Select negative-control siRNA number 2 (Thermo Fisher Scientific) were used for RNAi assays. Briefly, 5,000 cells of HUVEC in EGM-2 were plated in each well of a 96-well plate and incubated at 37°C for 6 h. The medium was replaced with 0.1 mL of M199-10% FBS, and the cells were transfected with siRNA (0.5 pmol/well) using Lipofectamine RNAiMAX (Thermo Fisher Scientific) in accordance with the manufacturer's instructions. After 24 h, the cells were washed and replaced with fresh M199-10% FBS and then used for cell proliferation assays against VEGF-A 165 and BafA Bba . VEGFR2 knockdown at 24 h after transfection was confirmed by quantitative reverse transcription-PCR (qRT-PCR).
qRT-PCR. Total RNA of HUVECs was purified with an RNeasy mini kit (Qiagen, Hilden, Germany). cDNA was synthesized with a QuantiTect reverse transcription kit (Qiagen), and subsequent qRT-PCR was performed with TaqMan Fast advanced master mix (Thermo Fisher Scientific) and the desired TaqMan probes (GAPDH, Hs02758991_g1; KDR [VEGFR2], Hs00911700_m1 [Thermo Fisher Scientific]) with a QuantStudio 7 Flex system. The VEGFR2 expression was determined by the ΔΔCT method, normalized to GAPDH mRNA expression.
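The ΔΔCT arithmetic behind this relative quantification can be summarized in a short sketch; the CT values and the fold-change interpretation below are illustrative placeholders, not measurements from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene (e.g., VEGFR2/KDR) relative to the control
    condition, normalized to a reference gene (e.g., GAPDH), assuming ~100% PCR
    efficiency (2-fold amplification per cycle)."""
    delta_ct_sample = ct_target - ct_ref
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical example: a siRNA-treated sample with a later VEGFR2 CT than the control.
print(relative_expression(26.0, 18.0, 23.5, 18.2))   # ~0.15, i.e., roughly 85% knockdown
```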
SPR analysis. The interaction of recombinant Fc-tagged VEGFR2-ECD (Sino Biological, Beijing, China) with VEGF-A 165 , BafA Bhe , or BafA Bba was monitored by SPR using a Biacore 8K (Cytiva) performed at 25°C in single-cycle mode. The VEGFR2-ECD protein (180 nM) was injected and captured on flow cell (Fc) 2 on a series S sensor chip protein A (Cytiva) at approximately 1,400 to 1,700 response units. Fc 1 was used as the negative control. All proteins used for this assay were in HBS-P1 buffer (Cytiva). Various concentrations (1.25, 2.5, 5, 10, and 20 nM) of VEGF-A 165 , BafA Bhe , and BafA Bba were then flowed through the sensor chip at a flow rate of 30 μL/min and the real-time response was recorded. After each reaction, the sensor chip was regenerated using 10 mM glycine-HCl at pH 1.5. The K D values were calculated from the first on and off rates of the bivalent model of Biacore Insight Evaluation software (Cytiva).
Statistical analysis. Data are expressed as means and standard deviations (SD). Statistical analysis between two groups was performed using two-tailed unpaired Student's t test, and statistical significance among multiple groups was analyzed by one-way analysis of variance (ANOVA) with Dunnett's or Tukey's multiple-comparison test, as indicated in the figure legends, with GraphPad Prism 9 software (GraphPad Software, La Jolla, CA). All experiments were conducted at least in triplicate to ensure reproducibility of the observations. Representative Western blots and microscopy images from at least three biologically independent replicates with similar results are shown.
Data availability. Raw Sanger sequencing data will be provided by the corresponding author on reasonable request. All other data generated during the study are included in this article.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. TEXT S1, DOCX file, 0.02 MB. | 2022-04-05T13:09:33.082Z | 2022-04-05T00:00:00.000 | {
"year": 2022,
"sha1": "1f4395915fd424097a10763f60039bae46f25ecc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ASMUSA",
"pdf_hash": "7fc0f6084e5a0a09ffd67cd742c1726c116b5679",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2662597 | pes2o/s2orc | v3-fos-license | Where Fail-Safe Default Logics Fail
Reiter's original definition of default logic allows for the application of a default that contradicts a previously applied one. We call this condition failure. The possibility of generating failures has in the past been considered a semantical problem, and variants have been proposed to solve it. We show that it is instead a computational feature that is needed to encode some domains into default logic.
Since the introduction of default logic [Rei80], semantical problems of the original definition have been identified, and variants have been proposed to solve them [Luk88,Bre91,Ryc91,DSJ94,GM94,MT95]. One of the problems with Reiter's definition is that the application of a sequence of defaults may lead to failure. The following example shows this problem.
Consider the default theory T = {a:b/c, c:a/¬b}, {a}. The default a:b/c entails c whenever a is true and b is consistent with our current knowledge. The application of this default makes c true, thereby making the second default c:a/¬b applicable, which makes b false. This result, however, contradicts the assumption of consistency of b that we made when applying the first default.
We call failure the condition in which the application of another default makes a default that has already been applied inapplicable. In other words, if the application of a default contradicts the assumption of a default that has already been applied, this is a failure. The possibility of failures has been in the past considered a drawback, especially because failures may make the evaluation of theories like T impossible (i.e., these theories have no extensions.) A typical solution to this problem is to refrain from applying defaults that would lead to failure. This is done by justified default logic [Luk88], constrained default logic [DSJ94], and cumulative default logic [Bre91,GM94]. For the theory T above, the application of the first default makes c true, but the second default is not applied because it would lead to failure. We therefore conclude that T is equivalent to the propositional theory {a, c}.
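The failure in this example can be made concrete with a small enumeration of processes, anticipating the operational notions (process, successfulness, closure) that are introduced later in the paper. The sketch below is an illustrative assumption of this rewrite, not part of the original presentation: it restricts all formulae to conjunctions of literals, which suffices for T, and the helper names are invented for the example.

```python
# A minimal sketch of Reiter-style process enumeration, restricted to defaults whose
# preconditions, justifications, and consequences are conjunctions of literals.
# Literals are strings; "~x" denotes the negation of "x".
from itertools import chain

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(lits):
    return all(neg(l) not in lits for l in lits)

def entails(lits, goal):                  # conjunction-of-literals entailment
    return not consistent(lits) or goal <= lits

def Default(prec, just, cons):
    return {"prec": set(prec), "just": set(just), "cons": set(cons)}

def cons_of(W, proc):                     # W plus the consequences of the applied defaults
    return set(W) | set(chain.from_iterable(d["cons"] for d in proc))

def successful(W, proc):                  # local (Reiter-style) successfulness
    closure = cons_of(W, proc)
    return all(consistent(closure | d["just"]) for d in proc)

def closed(W, D, proc):                   # no further default is applicable
    closure = cons_of(W, proc)
    return all(d in proc or not entails(closure, d["prec"])
               or not consistent(closure | d["just"]) for d in D)

def processes(W, D, proc=()):             # every sequence of applicable defaults
    yield list(proc)
    for d in D:
        if d not in proc and entails(cons_of(W, proc), d["prec"]):
            yield from processes(W, D, proc + (d,))

# The theory T discussed above: W = {a}, defaults a:b/c and c:a/~b.
W = {"a"}
d1 = Default({"a"}, {"b"}, {"c"})
d2 = Default({"c"}, {"a"}, {"~b"})
D = [d1, d2]
for p in processes(W, D):
    print([D.index(d) + 1 for d in p], "successful:", successful(W, p), "closed:", closed(W, D, p))
# [d1] is successful but not closed; [d1, d2] is no longer successful (a failure),
# so T has no successful and closed process and therefore no extension.
```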
This solution is semantically good, as it allows for the evaluation of theories like T (i.e., it assigns extensions to theories that would otherwise have none). On the other hand, having forbidden failures can be seen as a computational problem, as the encoding of some domains requires exactly this "ability to fail." An example is the translation from reasoning about actions to default logic proposed by Turner [Tur97]. This translation generates all possible evaluations of a variable x by means of a pair of defaults :x/x and :¬x/¬x, and then removes the unwanted evaluations by generating failures. For example, the simple theory {Holds(Alive, S 0 )}, telling that Fred is alive in the initial state, is translated into the following default theory: { :Holds(Alive, S 0 )/Holds(Alive, S 0 ), :¬Holds(Alive, S 0 )/¬Holds(Alive, S 0 ), ¬Holds(Alive, S 0 ):/false }, ∅. Either the first or the second default can be applied, but not both. Depending on which one we decide to apply, we obtain Holds(Alive, S 0 ) or ¬Holds(Alive, S 0 ). The third default generates a failure whenever ¬Holds(Alive, S 0 ) is true. The only possible remaining case is therefore that in which Holds(Alive, S 0 ) is true. In other words, the first two defaults generate all possible evaluations of Holds(Alive, S 0 ) and the third default deletes the one we do not want. In general, some defaults generate all possible evaluations of the fluents and some other defaults delete the ones that are not possible. The latter are called killing defaults by Cholewinski, Marek, Mikitiuk, and Truszczynski [CMMT95], who presented other reductions where killing defaults are used to delete unwanted solutions. Since this deletion is realized by generating a failure, these translations do not work for semantics where failure is impossible, such as justified default logic [Luk88]. The inability to fail can therefore be seen as a limitation, as some translations from other formalisms into default logics require failures.
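Reusing the helpers from the sketch above, the killing-default mechanism can be illustrated on a propositional abbreviation of Turner's example; here the atom alive stands for Holds(Alive, S 0 ), and encoding the consequence false as a contradictory pair of literals is an assumption of the sketch.

```python
# Turner-style encoding for Holds(Alive, S0), abbreviated to the atom "alive".
W = set()
gen_pos = Default(set(), {"alive"}, {"alive"})       # : alive / alive
gen_neg = Default(set(), {"~alive"}, {"~alive"})     # : ~alive / ~alive
# Killing default ~alive : / false; "false" is encoded here as a contradictory pair,
# which makes any process applying it unsuccessful (i.e., it generates a failure).
kill = Default({"~alive"}, set(), {"bottom", "~bottom"})
D = [gen_pos, gen_neg, kill]

for p in processes(W, D):
    if successful(W, p) and closed(W, D, p):
        print(sorted(cons_of(W, p)))                 # only {'alive'} survives
```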
In this paper, we consider the problem of translating default semantics that can fail (fail-prone) into default semantics where failure is impossible (fail-safe). In order for the results to abstract over the specific semantics, we consider a sufficiently general definition of "semantics of default logic", based on the concept of process [FM92,AS94,FM94,Ant99]. We formalize the concept of fail-safeness in this framework, and investigate the translability from fail-prone into fail-safe semantics.
The translations we consider are polynomial either in time or size of the result. We consider translations that preserve the skeptical consequences, or the extensions, or the processes of a default theory.
The least constrained form of translation is that of translating the inference problem: given a theory D, W and a formula p, we require the translation to produce another theory D ′ , W ′ and another formula p ′ in such a way that D, W |= p holds in one semantics if and only if D ′ , W ′ |= p ′ holds in the other one. Such translations are possible in polynomial time between all semantics that have the same complexity. For example, we can translate Reiter's default logic into justified default logic in this way, as both semantics are Π^p_2-complete [Got92,Sti92,CS93].
We extend this result by simplifying the translation of p into p ′ : we indeed show a translation such that D, W |= p holds if and only if D ′ , W ′ |= a∨p holds, where a is a new variable. Intuitively, the new variable expresses the condition of failure in a semantics that cannot fail: by setting a to true whenever a failure should be necessary, the generated extension implies a ∨ p, and is thus irrelevant to skeptical entailment.
Using this translation, the extensions of the theories D, W and D ′ , W ′ are not the same. We therefore consider translations that preserve the extensions: such translations are called faithful [Kon88,Got95]. Clearly, no faithful translation is possible from a semantics that may not have extensions to one that always does. This is how Delgrande and Schaub [DS03], for example, have shown that Reiter's default logic cannot be translated into justified default logic. However, the question remains whether this impossibility is only due to a possible lack of extensions. We therefore restrict to the case in which the original theory has at least one extension, and prove that an extension-preserving translation exists or not depending on what we assume to be polynomial: the running time of the translation or the size of the result.
Finally, we consider translations preserving the processes of a default theory. Processes are the basic semantic notion of the operational semantics for default logic [FM92,AS94,FM94,Ant99]; they are sequences of defaults that can be applied in a default theory. Since two or more processes may correspond to the same extension, two default theories may have the same extensions but different processes. Therefore, translations that preserve the processes can be considered as the "most preserving" ones.
Default Logics
We use the operational semantics for default logics. Two slightly different operational semantics for default logics have been given independently by Antoniou and Sperschneider [AS94,Ant99] and by Froidevaux and Mengin [FM92,FM94]. A default d is a rule of the form α:β/γ. The formulae α, β, and γ are called the precondition, the justification, and the consequence of d, and are denoted as prec(d), just(d), and cons(d), respectively. This notation is extended to sets and sequences of defaults in the obvious way. A default is applicable if its precondition is true and its justification is consistent; if this is the case, its consequence should be considered true.
A default theory is a pair D, W where D is a set of defaults and W is a consistent theory, called the background theory. We assume that all formulae are propositional, the alphabet and the set D are finite, and all defaults have a single justification. The assumption that W is consistent is not standard; however, all known semantics give the same evaluation when the background theory is inconsistent. We define a process to be a sequence of defaults that can be applied starting from the background theory.
The definition of processes only takes into account the preconditions and the consequences of defaults. This is because the interpretation of the justifications depends on the semantics. All semantics select a set of processes that satisfy two conditions: successfulness and closure. Intuitively, successfulness means that the justifications of the applied defaults are not contradicted; closure means that no other default should be applied.
The particular definition of successfulness and closure depend on the specific semantics. The following are the definitions used by Reiter's and constrained default logic.
Successfulness:
Local: for each d ∈ Π, the set W ∪ cons(Π) ∪ just(d) is consistent.
Closure:
Maximality: for any d ∈ D \ Π, the sequence Π · [d] is not a globally successful process.
We abstract the notions of successfulness and closure from the particular semantics, and define them to be two conditions on Π and D, W such that:
1. successfulness is antimonotonic: if Π · Π ′ is successful, then Π is successful;
2. [ ] is a successful process, if W is consistent;
3. successfulness and closure can be expressed as the combination of a number of consistency tests over the formulae in Π and D, W ;
4. these consistency tests are independent of the order of defaults in Π;
5. the combination of the results of these consistency tests can be done in polynomial time.
A default logic semantics defined in terms of two conditions of successfulness and closure that satisfy these assumptions is called regular. Most default logic semantics are regular: the only exception known to the author is the semantics of concise extensions, which is not regular because the condition of subsumption requires checking all possible default orderings [Ryc91].
We remark that the last condition does not imply that successfulness and closure can be checked in polynomial time: they can only be checked in polynomial time once the consistency/entailment tests have been done. This is typically the case: for example, Reiter's default logic closure condition amounts to checking whether, for every d ∈ D \ Π, either W ∪ cons(Π) does not entail prec(d) or W ∪ cons(Π) ∪ just(d) is inconsistent. Once we have checked the consistency of W ∪ cons(Π) ∪ {¬prec(d)} and of W ∪ cons(Π) ∪ just(d) for every d ∈ D \ Π, determining whether closure is satisfied can be done in linear time.
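As a rough illustration of this remark, once the consistency answers are available (for instance, from a SAT oracle), the closure test is a simple boolean combination of them; the default names and cached answers below are hypothetical.

```python
# Closure test for Reiter's semantics from cached consistency answers.
# neg_prec_consistent[d]: is W ∪ cons(Pi) ∪ {~prec(d)} consistent (precondition not entailed)?
# just_consistent[d]:     is W ∪ cons(Pi) ∪ just(d) consistent?
def reiter_closed(defaults_outside_pi, neg_prec_consistent, just_consistent):
    # d is not applicable iff its precondition is not entailed or its justification
    # is inconsistent with W ∪ cons(Pi).
    return all(neg_prec_consistent[d] or not just_consistent[d] for d in defaults_outside_pi)

print(reiter_closed(["d3", "d4"],
                    neg_prec_consistent={"d3": True, "d4": False},
                    just_consistent={"d3": True, "d4": True}))   # False: d4 is still applicable
```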
An extension of a default theory is the deductive closure of W ∪ cons(Π) where Π is a successful and closed process. Note that more than one process may generate the same extension. The skeptical consequences of a default theory are the formulae that are entailed by all its extensions. The credulous consequences are those implied by some of its extensions.
Fail-safeness of a semantics is formalized as follows.
Definition 2 (Fail-Safe Semantics) A regular semantics for default logic is fail-safe if, for every default theory, any successful process is the prefix of a successful and closed process.
This definition formalizes the idea that a sequence of defaults cannot generate a failure: if we can apply a sequence of defaults, then an extension will eventually be generated, possibly after applying some other defaults. In other words, the situation in which we apply some defaults but then find out that we do not generate an extension never occurs. Fail-safeness is a form of commitment to defaults: if we apply a default, we never end up contradicting its assumption.
Fail-safeness can also be seen as a form of monotonicity of processes w.r.t. to sets of defaults: if a semantics is fail-safe, then adding some defaults to a theory may only extend the successful and closed process of the theory and create new ones. However, this form of monotonicity is not the same as that typically used in the literature, which is defined in terms of consequences, not processes. Froidevaux and Mengin [FM94,Theorem 29] have proved a result that essentially states that every semantics in which closure is defined as maximal successfulness is fail-safe. As a result, justified and constrained default logics are fail-safe.
We recall that we assume that the background theory W is consistent. In this case, if a semantics is fail-safe, then every default theory has a successful and closed process: since the process [ ] is successful, a process Π that is successful and closed exists. The condition of antimonotonicity provides an algorithm for finding this successful and closed process: if Π is successful and closed, all its initial fragments are successful as well. We can therefore obtain a successful and closed process by iteratively adding to [ ] a default that leads to a successful process, as sketched below.
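The iterative construction can be written down as a short generic sketch; the two tests are passed in as parameters standing for the chosen regular semantics (each would internally perform the required consistency checks), so this is an abstraction rather than a concrete implementation.

```python
def build_extension_process(W, D, successful_process, closed):
    """Grow a successful and closed process greedily; sound for fail-safe semantics."""
    proc = []                             # the empty process is successful when W is consistent
    while not closed(W, D, proc):
        for d in D:
            if d not in proc and successful_process(W, D, proc + [d]):
                proc = proc + [d]         # commit to d and continue
                break
        else:
            # Unreachable for a fail-safe semantics: a successful process that is not
            # closed can always be extended by one more default.
            raise RuntimeError("not a fail-safe semantics")
    return proc                           # a successful and closed process
```

The loop runs at most |D| times, and each iteration needs only polynomially many consistency tests, which is essentially the argument behind the Δ^p_2 membership result discussed later in the paper.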
While all fail-safe semantics give extensions to theories, the converse is only true in some cases. According to Reiter's and rational default logics, the simple default theory {:a/¬a}, ∅ has no extension, and these two semantics are therefore not fail-safe. On the other hand, every default theory has at least one concise extension [Ryc91], while the semantics of concise extensions is not fail-safe, as shown by the following example.
The default d 1 is applicable to W = ∅; the default d 2 is applicable to [d 1 ] because d 2 is not subsumed by d 1 . On the other hand, the process [d 1 , d 2 ] is not concise, as d 1 is subsumed by d 2 . As a result, [d 1 ] is a successful process, but is not the initial part of any successful and closed process under the semantics of concise extensions.
Translations
In this paper, we investigate the extent to which fail-safe semantics are less expressive than fail-prone ones. In particular, we study whether one of the two facts below can be proved: Since Reiter's default logic is fail-safe when restricting to normal defaults, the existence of a translation of the first kind implies that every regular default semantics can be translated into at least one fail-safe semantics. However, the restriction to normal defaults makes such a result more general. Since most of the semantics for default logic behave like Reiter's on normal defaults (an exception is Rychlik's concise semantics [Ryc91], which is however not fail-safe), this result extends to all these semantics. In order to simplify the terminology, we formally define "normal default logic" as a semantics for default logic.
Definition 3 Normal default logic is the restriction of Reiter's default logic to the case of normal defaults.
Ideally, we would like results of the first kind to be exactly the converse of the second one, i.e., every regular semantics can be translated into every fail-safe semantics. This general question has however an easy (and of little significance) negative answer: the semantics that has [ ] as the only successful and closed process is fail-safe, but none of the considered semantics can be translated into it. Indeed, consequence-preserving translations are impossible in polynomial time because entailment is coNP-complete in this semantics but Π p 2 -complete in most semantics; faithful translations are impossible because this semantics always gives a single extension to a theory while the other one may give more.
The results about the existence of translations depend on what we require from the translations. A minimal requirement is that the consequences are preserved. At the other extreme, we may require a translation to preserve the set of processes. In between, and this is perhaps the most interesting case, we have the preservation of the extensions.
Consequence-Preserving: we want to obtain the same consequences as the original theory;
Extension-Preserving (Faithful): we want a bijective correspondence between the extensions;
Process-Preserving: we want a bijective correspondence between the processes.
In all three cases, we assume that new variables can be introduced, as it is common in translations between logics. Technically, this is possible thanks to the concept of var-equivalence [LLM03].
Definition 4 Two formulae α and β are var-equivalent w.r.t. variables X if and only if α |= γ iff β |= γ for every formula γ that only contains variables in X.
In plain terms, two formulae are var-equivalent if and only if their consequences, if restricted to be formulae on a given alphabet, are the same.
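For small propositional formulae, var-equivalence can be checked by brute force: two formulae are var-equivalent w.r.t. X precisely when their models, projected onto X, coincide. The sketch below relies on this projection-based reading; the formulae and variable names are illustrative.

```python
from itertools import product

def projected_models(formula, variables, retain):
    """Assignments to `retain` obtainable by restricting models of `formula`."""
    found = set()
    for values in product([False, True], repeat=len(variables)):
        m = dict(zip(variables, values))
        if formula(m):
            found.add(tuple(m[x] for x in retain))
    return found

def var_equivalent(f, g, variables, retain):
    return projected_models(f, variables, retain) == projected_models(g, variables, retain)

# Example: x AND y and x AND NOT y are not equivalent, but are var-equivalent w.r.t. {x}.
f = lambda m: m["x"] and m["y"]
g = lambda m: m["x"] and not m["y"]
print(var_equivalent(f, g, ["x", "y"], ["x"]))       # True
print(var_equivalent(f, g, ["x", "y"], ["x", "y"]))  # False
```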
The translations we consider may introduce new variables: a theory D, W that only contains variables X is translated into a default theory D ′ , W ′ that contains variables X ∪ Y . Preservation is assumed to hold modulo var-equivalence: preserving the extensions means that each extension of D, W is var-equivalent to an extension of D ′ , W ′ w.r.t. X, and vice versa; preserving the consequences means that D, W |= γ iff D ′ , W ′ |= γ for each formula γ that only contains variables in X. Faithful translations based on var-equivalence of extensions have been considered by Delgrande and Schaub [DS03] and by Janhunen [Jan98,Jan03].
The condition of process preservation requires a suitable correspondence of sequences of defaults. We assume that the defaults are numbered, both in the original and in the generated theory. We could then enforce the consequences being the same or being var-equivalent; this is however not necessary as polynomial process-preserving translations will be proved not to exist anyway.
A condition on translations that has been considered by several authors is that of modularity [Imi87,Got95,Jan03], which requires only the defaults to be translated: the set of defaults is mapped to a new set of defaults independently of the background theory, which is carried over essentially unchanged. Another condition on translations that has been considered in the past is that of locality [Kon88]: each sentence and each default is translated separately. We do not consider modularity and locality in this paper.
A requirement we impose on the translations is that of being polynomial. There are two possible definitions of polynomiality, depending on what is required to be polynomial: the running time or the produced output. This difference is important, as some translations require exponential time but still output a polynomially large theory. Formally, two kinds of translations are considered.
polynomial: run in polynomial time;
polysize: produce a polynomially large result.
The existence of translations depends both on what we want to preserve (consequences, extensions, or processes) and on which computational condition we set (polynomial time or size). The results proved in this paper are summarized in Table 1.
Theories Having No Extensions
Some semantics for default logics, like Reiter's, may not have extensions even if W is consistent. On the other hand, all fail-safe default semantics have extensions. As a simple result, there is no way to translate all theories from Reiter's default logic into any fail-safe semantics in general. This argument has been used to prove that Reiter's default logic cannot be translated into justified or constrained default logic by Delgrande and Schaub [DS03], and that semi-normal default theories cannot always be translated into normal default theories by Janhunen [Jan03]. These results are however only consequences of the property of having or not having extensions, not of the property of fail-safeness. For example, we could modify a semantics so that a default theory always has [ ] as a successful and closed process if no other one exists, and this change would not affect the fail-safeness of the semantics. As a result, we do not use the possible lack of extensions to prove the impossibility of translations.
In fact, the problem of the possible lack of extensions can be ignored by simply assuming that the default theory to translate has extensions. This simple assumption makes possible some translations that are otherwise impossible: for example, there exists a poly-size faithful translation from Reiter's default logics into normal or justified default logic if we restrict to default theories having extensions. In the rest of this paper, we only consider theories having extensions.
Translations
In this section, we show two translations from an arbitrary regular semantics into a normal default theory. The first one is a poly-size faithful translation; the second one is a polynomial translation that is "almost" consequence-preserving. Both translations are based on the idea of "simulating" the construction of processes of the original theory. We first show how this simulation can be done, and then apply it to obtain the two translations.
Simulation of Defaults
The reductions we show are based on simulating a regular semantics using only normal defaults. Before going into the technical details, we explain the basic idea of the translation. For each default of the original theory, we introduce two variables that represent the application of the default in the original theory. Once the values of these variables are set, we can use other defaults for checking successfulness and closure of the original process.
The fact that extensions are generated only when the process is known to be successful and closed is taken into account in two ways:
1. we use new variables, and draw conclusions on the original variables only when we know that the simulated process is successful and closed;
2. if the simulated process is not successful or not closed, the extension we generate is a formula F; the specific choice of F depends on whether we want a faithful or an almost-consequence-preserving translation, as will be explained in the next sections.
Given a default theory D, W and a formula F , both built over the alphabet X, we generate a normal default theory D F u , W u such that each extension of D, W is var-equivalent to an extension of D F u , W u w.r.t. X, and each extension of D F u , W u is var-equivalent either to F or to an extension of D, W w.r.t. X. Since all var-equivalences are w.r.t. X, we often omit the part "w.r.t. X" in what follows.
Given D, W and a specific regular semantics, the conditions of successfulness and closure of a process Π can be expressed as a number of consistency checks over the formulae of D, W and Π. Let ω 1 , . . . , ω u be the formulae to be checked for consistency. These formulae may depend on which defaults are in Π, i.e., they are not exactly boolean formulae, as they may include the propositions like (d i ∈ Π), where d i ∈ D. The successfulness and closure of a process can be checked in polynomial time given the results of these consistency checks. By a well-known result in circuit complexity [BS90], every polynomial boolean function can be expressed by a circuit of polynomial size.
The first steps of the translations are:
1. for each d_i ∈ D, we introduce two new variables c_i and e_i;
2. we introduce m + 1 sets of new variables X_0, X_1, . . . , X_m, where m = |D|; each of these sets X_i is in bijective correspondence with X;
3. for each formula ω_i, we consider the formula δ_i that is obtained by replacing, in ω_i, each proposition (d_j ∈ Π) with the corresponding variable c_j and each original variable with its copy in X_0;
4. we build the circuit C(o_1, . . . , o_u, c_1, . . . , c_m) that encodes the successfulness and closure tests, where the input variables o_1, . . . , o_u represent the results of the consistency checks and each c_i encodes the presence of d_i in the process to be checked.
We give an example of the formulae δ_i and the circuit C used for translating D, W from Reiter's semantics into normal default logic. The consistency checks to be done are u = 2m, where m is the number of defaults: for each default d_i = α_i:β_i/γ_i we check whether the justification β_i is consistent with W and the consequences of the defaults in Π, and whether the negated precondition ¬α_i is consistent with W and the consequences of the defaults in Π. The formulae δ_i are obtained by replacing each d_i ∈ Π with c_i. In words, for each default d_i ∈ D, if d_i is in Π (i.e., c_i is true) then the justification of d_i must be consistent with the consequences of all defaults in Π (i.e., o_i is true). If d_i is not in Π (i.e., c_i is false), then either the precondition of d_i is not entailed (i.e., o_{m+i} is true) or its justification is not consistent (i.e., ¬o_i).
What has been shown are the formulae δ i and the circuit C for the particular case of Reiter's semantics. Similar definitions can be given for every regular semantics. The default theory that results from the translation is the following one.
Let the defaults of the original theory be D = {d 1 , . . . , d m }, and let d i = α i :β i γ i . The defaults of the translated theory D F u are defined as follows.
Each default a_i ∈ A represents the application of the default d_i in the simulated process, while each n_i ∈ N represents its non-application; each v_i ∈ V = {v_1, . . . , v_u} relates the consistency of δ_i with the value of o_i, and each g_i ∈ G = {g_1, . . . , g_u} relates the inconsistency of δ_i with the value of o_i; the two defaults z_1 and z_2 of Z are used to compute the result of the circuit C and to "output" the generated extension or F accordingly. We use the following abbreviations: E for e_1 ∧ · · · ∧ e_m and T for t_1 ∧ · · · ∧ t_u. The preconditions of the defaults in D^F_u have been defined so that the defaults of A ∪ N have to be applied first; then, the defaults of V ∪ G can be applied; the defaults of Z can be applied only at the end. We prove a number of lemmas relating the processes of the theory D^F_u, W_u with the processes of D, W.
Lemma 1 If Π is a successful and closed process of D F u , W u , then Π contains exactly one among a i or n i for each i, and contains them only in the first m positions.
Proof. Since all defaults in D F u \(A ∪ N) contain E as a precondition, and E is only made true once m defaults of A ∪ N are applied, the first m defaults of Π are in A ∪ N. If both a i and n i are in Π, then Π is not successful, as the consequences of these defaults contradicts each other. If neither a i nor n i are in Π, then the default n i is applicable; therefore, Π is not a closed process. Finally, since exactly one between a i and n i is in the first m positions of Π, no other defaults of A ∪ N can be in a position of Π after the m-th.
In words, the processes of D^F_u, W_u begin with the application of exactly one of a_i and n_i for each i. After that, no other default of A ∪ N can be applied. The idea is that the truth value of c_i reflects the application of d_i in the default theory D, W. Given a process Π of D^F_u, W_u, we define the "simulated process" O(Π) to be the process [d_{i_1}, . . . , d_{i_k}] of D, W, where [a_{i_1}, . . . , a_{i_k}] is the sequence of defaults of A occurring in Π. We prove that O(Π) is a process of the original theory D, W.
Lemma 2 For every process Π of D^F_u, W_u, the simulated process O(Π) is a process of D, W.
Proof. By assumption, Π is a process of D F u , W u . As a result, W u ∪cons(Π) is consistent and W u ∪ cons(Π[d]) |= prec(d) for every d ∈ Π. These two conditions imply the same ones on W and O(Π) once the inverse substitution [X 0 /X] has been applied.
The converse of this lemma also holds. Lemma 3 For every process Π ′ of D, W, there exists a successful and closed process Π of D^F_u, W_u such that O(Π) = Π ′. Proof. Let Π_1 be the process obtained by replacing each d_i with a_i in Π ′ and adding all n_i's such that d_i is not in Π ′. This process Π_1 is successful, and O(Π_1) = Π ′. Since D^F_u, W_u is normal, Π_1 is the prefix of a successful and closed process Π_1 · Π_2. Since Π_1 already contains m defaults, no default of Π_2 is in A ∪ N. As a result, O(Π_1 · Π_2) is equal to O(Π_1), which is in turn equal to Π ′.
Together, these lemmas prove that the processes of D, W are in correspondence with the successful and closed processes of D^F_u, W_u. Namely, each process Π ′ of D, W corresponds to some successful and closed process Π of D^F_u, W_u such that O(Π) = Π ′, and for each successful and closed process Π of D^F_u, W_u, it holds that O(Π) is a process of D, W. What is still missing is the effect of the successfulness and closure of the process of D, W on the corresponding processes of D^F_u, W_u. We prove two preliminary lemmas. Lemma 4 Every successful and closed process Π of D^F_u, W_u contains either v_i or g_i, but not both, for every i, in the positions of Π from the (m+1)-th to the (m+u)-th.
Proof. As for Lemma 1, after applying the first m defaults the formula E is true but T is not. As a result, we can only apply defaults in V ∪ G. The rest of the proof is like that of Lemma 1.
Lemma 5 If Π ′ is a process of D, W and Π is a successful and closed process of D F u , W u such that O(Π) = Π ′ , then cons(Π) entails o i or ¬o i depending on whether Π ′ satisfies ω i .
Proof. By Lemma 1 and Lemma 4, the first m defaults of Π are in A ∪ N and the next u defaults are in V ∪ G. Since O(Π) = Π ′ , a j ∈ Π if and only if d j ∈ Π ′ , and n j ∈ Π if and only if d j ∈ Π ′ . As a result, cons(Π) contains either c j or ¬c j , depending on whether d j ∈ Π ′ . Since the value of each c j indicates the presence of d j ∈ Π ′ , the satisfiability of the formulae δ i corresponds to the satisfaction of the conditions ω i .
By construction, the default v i is only applicable if δ i is consistent, which means that Π ′ satisfies the condition ω i . For the same reason, g i is applicable only if ω i is not satisfied. As a result, cons(Π) contains either o i or ¬o i depending on whether Π ′ satisfies the condition ω i .
We now establish a correspondence on the conditions of successfulness and closure.
Lemma 6 If Π ′ is a successful and closed process of D, W and Π is a successful and closed process of D F u , W u such that O(Π) = Π ′ , then W u ∧ cons(Π) and W ∧ cons(Π ′ ) are var-equivalent w.r.t. X.
Proof. The precondition of z 1 and z 2 include E ∧ T and either C(. . .) or ¬C(. . .), respectively. By Lemma 1 and Lemma 4, E ∧ T is entailed by cons(Π). By Lemma 5, the truth value of each o i is related to the process Π ′ satisfying the condition ω i ; since Π ′ is successful and closed, C evaluates to true. Therefore, z 1 is applicable while z 2 is not. Since Π is successful and closed, z 1 is in Π. The var-equivalence of W u ∧ cons(Π) with W ∧ cons(Π ′ ) is due to the fact that the consequence of z 1 is equivalent the latter formula after having replaced each c i with either true or false, depending on whether a i ∈ Π.
The converse of this lemma also holds.
Lemma 7 If Π ′ is a process of D, W that is either not successful or not closed and Π is a successful and closed process of D^F_u, W_u such that O(Π) = Π ′, then W_u ∧ cons(Π) is var-equivalent to F w.r.t. X. Proof. Same as the proof of the previous lemma, but C this time evaluates to false. Therefore Π includes z_2, which has F as a consequence.
These lemmas establish a correspondence between the extensions of the original and generated theories.
Theorem 1 Every extension of D, W is var-equivalent to an extension of D F u , W u and every extension of D F u , W u is var-equivalent either to F or to an extension of D, W .
Proof. If Π ′ is a successful and closed process of D, W , then there exists a closed and successful process Π of D F u , W u such that O(Π) = Π ′ by Lemma 3. By Lemma 6, it holds that the extension generated by Π is varequivalent to that generated by Π ′ .
If Π is a closed and successful process of D F u , W u , by Lemma 2 O(Π) is a process of D, W . If O(Π) is successful and closed, by Lemma 6, the extension generated by Π is var-equivalent to that generated by Π ′ . If O(Π) is either not successful or not closed, by Lemma 7, the extension generated by Π is var-equivalent to F .
Faithful (Extension-Preserving) Translations
Delgrande and Schaub [DS03] have proved that Reiter's default logic cannot be translated into justified default logic. Janhunen [Jan03] has proved that semi-normal defaults cannot be translated into normal defaults under Reiter's semantics. Both results imply the impossibility of translating Reiter's semantics into a fail-safe semantics. However, both proofs are based on the possible lack of extensions in Reiter's semantics. We therefore investigate whether theories having extensions under Reiter's semantics can be faithfully translated into a fail-safe default theory, namely, normal default logic.
A faithful, but exponential, translation from every regular default logic semantics into normal default logics always exists: if the extensions of the original theories are obtained by the deductive closure of the formulae in {E 1 , . . . , E m }, the following theory is a faithful translation of it into normal default logic, where the e i 's are new variables.
This theory contains exactly one default for each extension of the original theory. Since these defaults are normal, and their justifications are inconsistent with each other, the successful and closed processes of this theory are exactly the sequences composed of a single default. The generated extensions are exactly the same (modulo var-equivalence) of the original theory.
The problem with this translation is not only that it is exponential: even worse, once the set of all extensions {E_i} has been determined, there is no reason for using default logic, as we can simply use propositional logic instead.
Polynomial translations can be of two kinds: either the new theory can be built in polynomial time, or it has polynomial size. The first condition implies the second, but not vice versa. A translation from a regular semantics into normal default logic exists if we only require the result of the translation to be polynomial in size.
Theorem 2 For every regular semantics there exists a poly-size faithful (extension-preserving) translation that maps all default theories that have extensions into normal default theories.
Proof. The simulation shown in Section 3.1 is "almost" an extension-preserving translation. Indeed, all extensions of the original theory are translated into extensions of the generated theory. On the other hand, processes of the original theory that do not generate extensions correspond to processes that generate F as an extension.
We can build a faithful translation as follows: first, we determine a single successful and closed process Π of the original theory. We then translate the default theory D, W into the simulating theory D F u , W u in which F = W ∧ cons(Π). This is a faithful translation because each successful and closed process of the original theory D, W corresponds to a successful and closed process of the theory D F u , W u generating the same extension (modulo var-equivalence). The processes of D, W that are either not successful or not closed correspond to processes of D F u , W u that generate an extension that is var-equivalent to F . Since F is an extension of the original theory, this translation is faithful.
The translation of this theorem is not polynomial-time, as it requires the generation of at least one successful and closed process of the original theory. On the other hand, such a process has always polynomial size. The result of the translation is therefore always of polynomial size. As will be shown later in the paper, no consequence-preserving polynomial-time translation exists. This implies that no faithful and polynomial-time translation exists as well.
Almost-Consequence-Preserving Translations
A simple computational argument shows that any default theory can be translated into a normal default theory in polynomial time if we admit the queries to be translated as well, i.e., D, W |= q in a regular semantics if and only if D ′ , W ′ |= q ′ , where D ′ is normal. Indeed, query answering is in Π p 2 for all regular default logics and Π p 2 -hard for Reiter's default logic even in the restriction to normal theories [Got92,Sti92]. However, for a translation to be "exactly" consequence-preserving q ′ should be the same as q.
A consequence-preserving translation is easy to give if we allow an exponential blow up of the theory: if the extensions of D, W are {E_1, . . . , E_m}, the skeptical consequences of D, W are exactly the classical consequences of E_1 ∨ · · · ∨ E_m, which are also the skeptical consequences of the default theory ∅, {E_1 ∨ · · · ∨ E_m}. Ben-Eliyahu and Dechter [BED96] defined a better translation from default logic into propositional logic, which can be polynomial even if the number of extensions is exponential. Translations from default logic into propositional logic are however known to be exponential in the worst case due to the different complexity of the semantics. We now concentrate on polynomial translations.
The case of default theories having no extensions has already been considered, so we restrict to theories that have extensions. We have already shown a faithful translation that is poly-size: this translation clearly preserves the consequences as well. In this section, we show a translation from every regular semantics into normal default logic that is polynomial in time and preserves the consequences up to a simple rewriting of the queries. The translation is based on the theory that simulates the process construction of the original theory. As we have already noticed, the only problem with this simulation is that the processes of the original theory that either are not successful or not closed correspond to successful and closed processes in the simulating theory. Since these processes generate Cn(F) as an extension, all we have to do is to specify a value of F that does not affect entailment.
The trick we use is to translate q into a ∨ q and to set F = a. If we use the skeptical semantics, the extensions of the simulating theory that do not correspond to extensions of the original theory imply a, which in turn implies a ∨ q; as a result, they do not affect the consequences of the theory.
Theorem 3 For every regular default logic there exists a polynomial-time translation that maps a default theory D, W that has extensions into a normal default theory D ′ , W ′ such that D, W |= q if and only if D ′ , W ′ |= a ∨ q, where a is a new variable.
The formula F and the way in which queries are translated are chosen in such a way that the extensions of D ′ , W ′ that do not correspond to extensions of the original theory D, W are irrelevant to the specific query evaluation mechanism. As a result, if we are interested in credulous entailment, we can use F = ¬a and translate a query q into a ∧ q. The question of whether the addition of a to the queries is necessary depends on the kind of translation used: we have already shown a faithful (and, therefore, consequence-preserving) translation that is poly-size. We will show that no polynomial-time consequence-preserving (i.e., one that does not modify queries at all) translation exists unless part of the polynomial hierarchy collapses.
Impossibility of Translations
In this section, we show that some translations are impossible: namely, there is no polynomial-time exact consequence-preserving translation and no polynomial-time or polysize process-preserving translation from Reiter's default logics into any fail-safe default logic.
Consequence-Preserving Translations
We have already shown a poly-size faithful translation and a polynomial-time almost-consequence-preserving translation. We prove that no polynomial-time reduction that preserves the consequences exactly exists. To this end, we show a problem that is hard for Reiter's default logic but easy for all fail-safe default semantics. We cannot use a problem that has already been analyzed in the past (such as entailment or model checking) because these problems have the same complexity for Reiter's and for some fail-safe semantics.
For all fail-safe semantics, generating an extension is relatively easy, as it can be done by applying defaults until the process is closed. This property can be used to define a problem that is hard for Reiter's semantics but easy for all fail-safe ones: if it is known that either all extensions imply a or all extensions imply ¬a, then a single arbitrary extension suffices to check whether a is entailed. In turn, the assumption that all extensions imply a or all extensions imply ¬a is equivalent to the assumption that the default theory implies either a or ¬a. We prove that entailment is hard for Reiter's default logic even under this assumption.
Theorem 4 The problem of checking whether D, W |= a in Reiter's default logic is Σ p 2 ∩ Π p 2 -hard even if D, W implies either a or ¬a, and it has extensions.
Proof. Let P be a problem in Σ p 2 ∩ Π p 2 . We reduce the problem of telling whether x ∈ P to the problem T |= a, where T is a default theory that either implies a or it implies ¬a.
Since P is in Σ^p_2, the question x ∈ P can be reduced to the problem of checking the existence of extensions of a default theory with an empty background theory D_p, ∅ [Got92]. Since P is in Π^p_2, its complementary problem is in Σ^p_2 as well. As a result, the question of whether x is not in P can be reduced to the existence of extensions of another theory D_n, ∅. Therefore, x ∈ P if and only if D_p, ∅ has extensions while D_n, ∅ has not, and vice versa if x is not in P.
Let a be a variable that is mentioned neither in D_p nor in D_n. The default theory T we use combines D_p and D_n with two additional defaults whose consequences are a and ¬a, respectively. The only two defaults that can be applied from the background theory are these two. They cannot be applied together, however. Once the first one is applied, the theory becomes equivalent to D_p, {a}, while the application of the second one makes it equivalent to D_n, {¬a}. Since the existence of extensions for these two theories is related to the question x ∈ P, we have: 1. if x ∈ P, all extensions of T imply a; 2. if x is not in P, all extensions of T imply ¬a.
As a result, either T |= a or T |= ¬a; in particular, T |= a if and only if x ∈ P .
The same problem is relatively easy for every fail-safe default semantics. Indeed, to solve it we only need to generate an extension and to check whether it implies a or ¬a: since all extensions are the same as for the entailment of a and ¬a, checking one extension suffices. Generating an arbitrary extension can be done easily in every fail-safe semantics.
Theorem 5 Checking whether T |= a is in ∆ p 2 for every fail-safe semantics, if either T |= a or T |= ¬a.
Proof. Since either all extensions of T imply a or all extensions of T imply ¬a, we can check whether T |= a by finding a single extension E of T and then checking whether E |= a. Finding one extension E is easy because the semantics is fail-safe. If Π is a successful process that is not closed, then there exists Π ′ such that Π · Π ′ is successful and closed. By the antimonotonicity of successfulness, if d is the first default of Π ′ , i.e., Π ′ = [d] · Π ′′ , then Π · [d] is successful. As a result, if Π is successful then it is either closed or there exists a default d such that Π · [d] is successful. We can therefore start with Π = [ ], which is successful. At each step, if Π is closed, we can check whether cons(Π) |= a. Otherwise, there exists d such that Π · [d] is successful. We set Π = Π · [d], and continue. This algorithm must necessarily end up with a successful and closed process.
This algorithm only takes a polynomial number of steps if we have access to an NP-oracle. Indeed, all we have to do is to check closure of Π and successfulness of Π · [d] at each step; these conditions can be verified in polynomial time by letting the NP-oracle perform the consistency tests.
As a result of these two theorems, no polynomial-time consequence-preserving translation exists from Reiter's semantics to an arbitrary fail-safe semantics, unless Σ p 2 ∩ Π p 2 = ∆ p 2 . The following theorem shows that even a polynomial number of calls to an NP-oracle do not suffice to translate from Reiter's semantics to any fail-safe one.
Theorem 6 If there exists an (exact) consequence-preserving translation from Reiter's semantics into any fail-safe semantics that only requires a polynomial number of calls to an NP-oracle, then Σ p 2 ∩ Π p 2 = ∆ p 2 .
Proof. If such a translation exists, then for any P ∈ Σ^p_2 ∩ Π^p_2 we could translate the question x ∈ P into the question T |= a under Reiter's semantics, where either T |= a or T |= ¬a. In turn, with a polynomial number of calls to the oracle we can translate the question into the same question for a fail-safe semantics, where it can be solved with a polynomial number of other calls to the oracle.
Since no translation employing a polynomial number of calls to an NP-oracle exists, no polynomial-time translation exists either. Since the theorem has been proved using only theories in which all extensions have the same behavior w.r.t. the query (either they all entail it, or they all entail its negation), this result holds for both skeptical and credulous reasoning.
The impossibility of polynomial-time faithful translations is a consequence of the above theorem: a faithful translation is also a consequence-preserving translation, and cannot therefore be polynomial-time.
Theorem 7 If there exists a faithful translation from Reiter's default logic into any fail-safe default logic that only requires a polynomial number of calls to an NP-oracle, then Σ p 2 ∩ Π p 2 = ∆ p 2 .
Process-Preserving Translations
A process-preserving translation is a translation that not only preserves the extensions, but also the processes of a default theory. Clearly, we cannot enforce the processes to be exactly the same, otherwise the two theories would have the same defaults. Therefore, we only impose that there is a one-to-one correspondence between the defaults of the original and generated theories, such that the processes of the original theory match the processes of the generated theory thanks to this correspondence. An easy way for creating this correspondence is to assume that the first part D of a default theory D, W is a sequence of defaults rather than a set. In other words, we add an enumeration on the defaults so that we can write D = {d_1, . . . , d_m}. A process-preserving translation is a function that maps a default theory {d_1, . . . , d_m}, W into another default theory {d′_1, . . . , d′_m}, W ′ with the same number of defaults, and such that [d_{i_1}, . . . , d_{i_r}] is a successful and closed process of the first theory if and only if [d′_{i_1}, . . . , d′_{i_r}] is a successful and closed process of the second one. We prove that there is no polynomial-time or polysize process-preserving translation from Reiter's default logics into any fail-safe default logic. To this aim, we show a problem that is hard in Reiter's default logic but easy in all fail-safe semantics.
Definition 5 (Completability of Process) Given a default theory D, W and a sequence of defaults Π, check whether there exists a successful a complete process Π · Π ′ .
The following theorem characterizes the complexity of completability of processes for Reiter's semantics.
Theorem 8 The problem of completability of processes is Σ^p_2-complete in Reiter's default logic even for theories that have extensions.
Proof. Membership: guess a sequence of defaults Π ′ and check whether Π·Π ′ is a successful and closed process.
Hardness: we reduce the problem of existence of extensions to this one. Given a theory ⟨D, W⟩ we build the theory ⟨D′, W⟩, where D′ = {d_n, d_p} ∪ D″ and d_n, d_p, and D″ are defined as follows.
This theory, as required, has one extension: the one generated by the process [d_n]. The other processes, if any, are made of d_p followed by the defaults that correspond to the successful and closed processes of ⟨D, W⟩. As a result, the consistent process [d_p] can be extended to form a consistent and closed process if and only if ⟨D, W⟩ has extensions.
The problem of completability of processes is relatively easy for all fail-safe semantics, as it amounts to checking whether Π is a successful process. By definition, indeed, any successful process is either closed or can be extended to form a successful process. Moreover, if Π is not a process, or it is not successful, then it cannot be extended to generate a successful process, thanks to the anti-monotonicity of successfulness.
As a result, completability of processes is equivalent to verifying whether a sequence of defaults is a successful process, which is a problem in Δ^p_2. For the fail-safe semantics defined in the literature, the problem is even simpler, as it is in D^p. We prove that it is hard for the same class for justified default logic, for the sake of completeness.
Theorem 9 Checking whether Π is a successful process is in Δ^p_2 for every regular default semantics, and is D^p-complete for justified default logic.
Proof. The conditions of Π being a process and being successful can be computed in polynomial time once a number of consistency tests have been performed. The problem is therefore in Δ^p_2. For the case of justified default logic, these consistency tests are independent of each other, that is, the formulae to check do not depend on the results of the other tests. As a result, the problem is in D^p.
The hardness result is an obvious consequence of the fact that the applicability of a single default is hard: the problem sat-unsat, i.e., checking whether a pair of formulae α, β is composed of a satisfiable formula α and an unsatisfiable formula β, is D^p-hard. This problem can indeed be reduced to the problem of checking whether [d] is a successful and closed process of the theory below: the sequence of defaults [d] is indeed a successful process if and only if ¬β is valid (that is, β is inconsistent) and α is consistent. As a result, the problem is D^p-hard.
Suppose that there exists a process-preserving translation from Reiter's default logic to any fail-safe default logic. We can then solve the problem of completability of a process Π in Reiter's default logic by simply translating both the theory and the process, and then solving the problem in the fail-safe default semantics. This would imply that Σ^p_2 = Δ^p_2. This result can be strengthened to polysize translations. We can indeed prove that the problem of completability of processes does not simplify thanks to a preprocessing phase. This is proved by showing that the problem of completability of processes is ;Σ^p_2-hard for Reiter's default logic, and cannot therefore be "compiled to" Δ^p_2. The class ;Σ^p_2 has been introduced by Cadoli et al. [CDLS02,Lib01] to characterize the complexity of problems when preprocessing of the problem is allowed. We omit the details here, and refer the reader to the papers by Cadoli et al. [CDLS02,Lib01].
Theorem 10 The problem of completability of processes is ;Σ^p_2-complete in Reiter's default logic, where the default theory is the fixed part of the instance.
Proof. We adapt the reduction by Gottlob, as follows: given a formula ∃X∀Y.¬φ, where |X| = |Y| = n and φ contains only clauses of three literals, let A = {γ_1, ..., γ_m} be the set of all clauses of three literals over the alphabet X ∪ Y; we build a default theory and a successful process Π of it as follows. The sequence Π is always a successful process. Moreover, W ∧ cons(Π) implies all variables e_i and either c_i or ¬c_i. Namely, c_i is entailed if p_i is in Π, and ¬c_i is entailed otherwise. We can therefore replace each c_i such that γ_i ∈ φ with true and each c_i such that γ_i ∉ φ with false. The default f therefore simplifies accordingly. As shown by Gottlob, the resulting theory has extensions if and only if ∃X∀Y.¬φ. This is therefore a polynomial translation from ∃∀QBF into the problem of completability of a process. Moreover, the default theory only depends on the number of variables of the QBF, while the process is the only part that depends on the specific φ. As a result, this is a ;-reduction, and proves that the problem of completability of processes is ;Σ^p_2-hard. Since the problem is in Σ^p_2, it is in ;Σ^p_2 as well. As a result, the problem is ;Σ^p_2-complete. This theorem proves that no polysize process-preserving translation from Reiter's semantics into any fail-safe semantics exists. Indeed, if this were the case, we could translate any theory from Reiter's default logic into a theory that has the same successful and closed processes under a fail-safe semantics. This would prove that the problem is in ;Δ^p_2, which implies that ;Π^p_2 = ;Σ^p_2, and therefore the polynomial hierarchy collapses [CDLS02].
Conclusions
The possibility for a default logic semantics to make a sequence of applicable defaults fail can be seen as a semantical drawback or as a computational feature. In this paper, we have studied how much a semantics gains from this ability to generate failures. In particular, we have restricted our attention to the case of theories having extensions, and have shown that translations from fail-prone to fail-safe semantics are possible or not depending on the constraints that are imposed on the translation. In particular, a translation that preserves the extensions or the skeptical consequences and produces a polynomial-sized result exists, while a polynomial-time translation does not. We have also considered more liberal (almost-consequence-preserving) and more restrictive (process-preserving) constraints on the translations. The main results of this paper imply that the ability of failing can only give an advantage in terms of translation time (e.g., not all Reiter's theories can be faithfully translated into normal or justified theories in polynomial time), but not in terms of expressibility (e.g., for every Reiter's theory there exists an equivalent normal or justified theory of polynomial size). This distinction is important, because it shows that fail-prone semantics are better than fail-safe ones in solving problems by translating them into default logic, but not in terms of which domains can be encoded in polynomial space. In short, the possible failure of processes is a computational advantage, but not an expressiveness advantage.
These results hold only under some assumptions: the reductions are constrained to be polynomial (either in time or space) but can introduce new variables. Moreover, we only consider theories that have extensions, and prove the existence of translations only for normal default logic, which we consider a prototypical fail-safe semantics. These assumptions make the results of this paper incomparable to what is proved in two similar works: 1. Delgrande and Schaub [DS03] have shown reductions from other variants of default logic into Reiter's; they take into account theories having no extensions, and limit themselves to the specific cases of justified, constrained, and rational default logic; 2. Janhunen [Jan03] has shown that Reiter's default logic can be translated into semi-normal default logic; translations are assumed not only faithful but also modular, and theories having no extensions are taken into account.
Both of these works consider polynomial faithful translations with new variables. The results presented in our paper are more general than the ones above in the sense that we proved the existence of translations from an arbitrary regular default theory into a fail-safe one and the non-existence of a translation from Reiter's semantics into an arbitrary fail-safe one.
Some apparent contradictions between the results proved in this paper and those by Delgrande and Schaub and by Janhunen are due to the fact that we only consider default theories having extensions. This is why, for example, some results about the impossibility of translations by Delgrande and Schaub [DS03, Theorem 6] and by Janhunen [Jan03, Theorem 5], which rely on the possible non-existence of extensions in Reiter's semantics, are not in contradiction with our results on the existence of such reductions. These apparent contradictions show that some translations are only impossible because of the possible lack of extensions, and become possible as soon as theories having no extensions are excluded from consideration.
An interesting question left open by the present work is whether the comparison between fail-safe and fail-prone semantics can be extended to logics that are not based on defaults. Clearly, a suitable definition of failure is needed; however, it seems somehow natural to consider propositional circumscription [Lif94] as a fail-safe non-monotonic logic (we add negative literals to a theory as far as possible, but never retract an added literal) and autoepistemic logic [Moo85] as a fail-prone one (we can "generate" a conclusion x by means of a formula like □x → x but then retract the conclusion, if a condition F is met, by means of a formula like F → ¬x). Finally, we note that frameworks for comparing propositional knowledge representation formalisms have been given by Cadoli et al. [CDLS00] and by Penna [Pen00]. The translations considered in these frameworks are allowed to translate queries (or models), while the only translation of queries admitted in this paper is the addition of a literal to queries, i.e., q is translated into a ∨ q.
"year": 2007,
"sha1": "fefe8bf2ad9a98b5a1d38093c56e473ff0bfb43c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cs/0403032",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b3e5951850aff36584c2975311586339bdf1e9b7",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
96443059 | pes2o/s2orc | v3-fos-license | Multi-parameter estimation along quantum trajectories with Sequential Monte Carlo methods
This paper proposes an efficient method for the simultaneous estimation of the state of a quantum system and the classical parameters that govern its evolution. This hybrid approach benefits from efficient numerical methods for the integration of stochastic master equations for the quantum system, and efficient parameter estimation methods from classical signal processing. The classical techniques use Sequential Monte Carlo (SMC) methods, which aim to optimize the selection of points within the parameter space, conditioned by the measurement data obtained. We illustrate these methods using a specific example, an SMC sampler applied to a nonlinear system, the Duffing oscillator, where the evolution of the quantum state of the oscillator and three Hamiltonian parameters are estimated simultaneously.
I. INTRODUCTION
Stochastic master equations provide a model for the evolution of open quantum systems subject to continuous measurements [1][2][3]. The trajectories that the stochastic master equations generate represent the evolution of the state of an individual quantum system, conditioned on a particular measurement record. In theoretical studies, the measurement record is a simulated sequence corresponding to a particular realization of the evolution. However, recent experiments that implement continuous quantum measurements have demonstrated that the evolution of individual quantum systems can be reconstructed from experimental data [4][5][6][7]. The generation of such trajectories in real time during the measurement process will be an important step towards state-dependent feedback control of individual quantum systems [1][2][3]. Feedback control has been demonstrated in quantum systems using the output measurement record as an input signal to the control system in optical [8,9], opto-mechanical [10][11][12], and mesoscopic superconducting systems [13,14]. In most of these examples, while the direct use of the measurement record in the control system demonstrates the utility of quantum feedback control, it is limited by the fact that the evolution of the underlying state of the system is not included in the generation of the controls. State-dependent control is more flexible and can include quantities that are estimated from, but are not directly measured in, experiments.
Given a measurement record, a Stochastic Master Equation (SME) provides an estimate of the quantum state at each point in time, and -in the case of mixed states -an indication of the uncertainty associated with this state in terms of an estimate of its purity. The SME is derived by taking a single quantum system and coupling it weakly to environmental degrees of freedom that mediate a continuous measurement process. The continuous measurement of the coupled system is realized by continuously measuring the state of the environment, which can be modeled as a sequence of projective measurements on successive environmental degrees of freedom. The simplest example of this process is the measurement of the electromagnetic radiation emitted by the system, in which case the environment is the electromagnetic field [2,3]. In the most common form of the SME, a Markovian condition is applied, meaning that the environment carries information away from the system but does not by itself feed this information back to affect the system at a later time. The resulting evolution of the system is continuous and stochastic, with the stochastic term arising from the effect of the sequence of measurements on the combined system.
In simulations, an SME is used to analyze the properties of an open system under the action of a continuous sequence of measurement operators, using a realization for the noise process and calculating the evolution of the system conditioned on this realization. When interpreting experiments, the SME is used to reconstruct the estimate of the quantum state as a function of time from the given measurement record provided by the experiment. In this regard, the SME is very similar to classical state estimation techniques (often referred to as 'target tracking' or 'object tracking' [32][33][34]), which are used to interpret sequences of classical noisy sensor measurements to form a coherent picture of the world. These classical techniques have been developed to interpret sensor data (radar or sonar signals, and sequences of images) where objects are moving against noisy backgrounds. The motion of the objects may be unpredictable or uncooperative, their identity may not be known from the measurements, they may be occluded for periods of time, and individual objects may not be fully resolved by the sensor. Classical state estimation techniques provide methods to solve all of these problems and ambiguities.
With a continuous quantum measurement, the measurement record contains information about the evolution of the particular quantum state (a quantum trajectory) but the properties of this trajectory also contain information about the classical parameters that govern the dynamics of the system: the classical parameters in the Hamiltonian and the strength of the coupling to the environment. In this paper, we demonstrate how the stochastic master equation can be augmented with techniques drawn from classical state estimation, Sequential Monte Carlo (SMC) methods, to estimate several Hamiltonian parameters efficiently alongside the quantum trajectories.
We begin our presentation by first reviewing other approaches to Hamiltonian parameter estimation, and the development of a set of Hybrid SMEs for the quantum evolution and the Kushner-Stratonovich equation for the classical parameters [15] in sections II and III, respectively. In section IV, we then discuss how efficient classical parameter estimation techniques [16] can be applied to the solution of the classical aspects of the Hybrid SME. Section V introduces an example system, the Duffing oscillator, which contains a number of relevant experimental parameters, and Section VI presents results for the simultaneous estimation of the quantum trajectories for the oscillator and up to three Hamiltonian parameters. Section VII discusses how such methods may be useful in practical systems and draws conclusions from the results presented.
II. HAMILTONIAN PARAMETER ESTIMATION
The problem of estimating the dynamical parameters of a quantum system has been studied previously by a number of authors, including those who have adopted a continuous measurement approach. This estimation process is often referred to as Hamiltonian parameter estimation. This is because the basic description of quantum dynamics is encapsulated by the Hamiltonian operator, which determines the equations of motion, and values of the classical parameters in the Hamiltonian determine the specifics of the evolution.
A standard method for determining the dynamics of a quantum system is to prepare it many times in a range of different initial states, allow it to evolve, and then measure it before re-preparation. The results of the measurements can then be combined in a tomographic-like process to obtain the equation of motion for linear Schrödinger evolution [17]. An alternative approach, and the one in which we are interested here, is to prepare the system only once and to continually monitor its subsequent evolution to build a picture of its dynamics. A full description of the problem involves starting with a prior probability density for the parameters one wishes to determine and then using Bayes' theorem to continually update this probability density from the stream of measurement results as they are obtained. A number of authors have considered this problem [15,[18][19][20][21][22][23][24][25][26][27][28]. This is of particular interest when the parameters of a system change slowly with time, and one wishes to be able to track the variations in the parameters. It is also relevant to the problem of using quantum systems as probes to measure time-varying classical fields (such as gravity waves [29] and magnetic fields [30]), as these fields appear as parameters in the Hamiltonian.
As discussed in the introduction, a dynamical equation referred to as the stochastic master equation (SME) can be used to track the evolution of a quantum system from the results of a continuous measurement so long as the dynamical parameters of the system are known. If they are not known then the full estimation problem involves both the SME and a Kushner-Stratonovich equation that evolves the probability density for the parameters of the system. The combined set of dynamical equations has been referred to as a Hybrid SME [15]. The first papers on the subject of Hamiltonian parameter estimation via continuous measurements were concerned mainly with deriving the Hybrid SME and applying it to the estimation of a single parameter [18,19]. Subsequently, Tsang and collaborators considered the more general problem of smoothing in which a time-varying parameter (a signal or wave-form) is estimated from all the measurement results obtained, and determined the ultimate limits to this procedure [21][22][23][24][26]. An alternative and interesting approach to the problem was proposed recently by Bassa et al. [27]. While most of the related work on parameter estimation employs continuous measurements, this approach considered a sequence of instantaneous measurements, and employed a discrete version of the Hybrid SME where several classical parameter values were encoded in an expanded quantum state.
A major problem with the Hybrid SME is that it is highly demanding from a computational point of view; in order to evolve the Kushner-Stratonovich equation for the probability density describing the observer's knowledge of the parameters, the SME must be evolved for every value of the parameters for which this density is appreciable. The grid of points for which the SME must be evolved becomes large very quickly as the number of parameters increases. Two previous papers have put forward methods aimed at addressing this difficulty. Ralph et al. [15] and Cortez et al. [28] considered the estimation of a single frequency parameter, and presented methods to bypass the Kushner-Stratonovich equation. These papers estimate the natural oscillation frequency of a qubit directly from the measurement record. This approach has many benefits in terms of computational efficiency, but it has the disadvantage of not providing a simultaneous estimate of the quantum state of the system -which would be provided by the full solution of the Hybrid SME. Here, we will explore the use of a potentially more powerful technique in which the probability density is replaced with a finite set of samples of the parameters that are evolved instead. The examples given below typically use 50-100 quantum states and the equivalent of thousands to millions of classical parameter values. The purpose is again to reduce the number of copies of the quantum state that must be evolved in parallel using the SME, but we will apply this method to the challenging problem of estimating multiple parameters simultaneously.
III. HYBRID STOCHASTIC MASTER EQUATIONS
The simultaneous estimation of the quantum state of a system and the classical Hamiltonian parameters that govern its evolution was considered in Ref. [15], where an approach was presented based on a set of parallel SMEs, each using a different set of parameter values contained in a vector λ, which have an associated probability. The final mixed state is then constructed by averaging over the probabilities for the classical parameters. The probabilities associated with the different parameter vectors evolve via a Kushner-Stratonovich equation and are conditioned on the continuous measurement record [15]. For a quantum system subject to a continuous measurement, with a known set of Hamiltonian parameters, the evolution of the quantum state, ρ_c(t), conditioned on the measurement record, y(t), is given by the stochastic master equation [1][2][3]. In general, the interaction with the environment can be represented by a set of system operators which are coupled to environmental degrees of freedom, some of which are not measured, V̂_j (j = 1 ... m) ('unprobed' operators), and some of which are measured and generate the continuous weak measurement, L̂_r (r = 1 ... m′). In an ideal case, the measurement record is 100% efficient, with all of the available information being reflected in the measurement record. Unfortunately, real measurements are rarely ideal and the continuous measurement record is often corrupted with extraneous (classical) noise sources. These extraneous degrees of freedom can be characterized by an efficiency parameter for the measurement operators: L̂_r has an efficiency η_r. Specifically, η_r is the fraction of the total noise power due to the quantum measurement as opposed to power contained in the other extraneous noise sources.
For unprobed operators V̂_r and measurement operators L̂_r, the general form for the SME is given by, where Ĥ is the Hamiltonian of the system, dt is an infinitesimal time increment, and the measurement record for each of the measurement operators L̂_r during a time step t → t + dt is given by, We will take dW_r to be a real Wiener increment such that ⟨dW_r⟩ = 0 and ⟨dW_r dW_{r′}⟩ = δ_{rr′} dt for simplicity, but this is not strictly necessary. More general forms of complex increments may also be used [35].
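Numerically, real Wiener increments with these statistics are simply zero-mean Gaussian draws of variance dt. A minimal illustration (the array sizes are arbitrary choices, not values taken from the paper):

    import numpy as np

    dt = 2.0 * np.pi / 500          # time step of the size used later in the paper
    n_steps, n_records = 1000, 2    # illustrative numbers of steps and measurement channels
    rng = np.random.default_rng(1)
    # dW[k, r] has mean zero, variance dt, and is independent across k and r.
    dW = np.sqrt(dt) * rng.standard_normal((n_steps, n_records))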
Where the evolution of a quantum system is governed by a set of Hamiltonian parameters that are not known exactly, we can describe the parameters in terms of a classical probability density, P(λ), where λ = (λ_1, λ_2, ...). The system is then described by a set of SMEs, one for each set of possible parameter values. The evolution of the probability density P(λ) is governed by a Kushner-Stratonovich stochastic differential equation, derived in [15] for a single efficient measurement operator and in the absence of additional unprobed environmental operators. Any unprobed environmental operators affect the evolution of the individual SMEs but they do not play a role in the evolution of P(λ). However, the equation given in [15] generalizes naturally to include measurement inefficiencies and is given by, where dP_r(λ) is the update to the probability density due to a measurement increment dy_r(t) corresponding to the measured operator L̂_r, such that (4) and the full conditional density matrix ρ_c is given by,
IV. SEQUENTIAL MONTE CARLO METHODS
Sequential Monte Carlo methods originate in the field of multi-target tracking [31], but have been adopted and generalized to form a set of very efficient methods for parameter and state estimation in classical signal processing and nonlinear filtering. SMC methods are sometimes referred to as particle filters, but particle filters are a special case of the general approach. An SMC method relies on the approximation of a continuous probability distribution by a finite set of points (or particles) which sample the parameter space. The importance of each sample point changes in response to (is conditioned by) the measurements associated with the parameters being estimated, and the sample points can be periodically resampled to concentrate sampling towards regions of higher relative probability. In particle filters, the sample points are allowed to evolve according to some dynamical process, generating a time-dependent history or a track within the parameter space. In the example presented in this paper, the parameters are selected to be constant and another SMC method is more suitable. We adopt an approach used recently for parameter estimation in classical differential equations [16], which is an example of a Sequential Monte Carlo sampler. This approach is particularly well-suited to the estimation of fixed parameters; however, the SMC sampler used here still embodies all of the key features of a general SMC method: sampling, conditioning/updating, and resampling. A number of very approachable tutorials and introductions to particle filters and general SMC methods have been published. For example, a comprehensive guide to SMC methods and their applications is available in [36], a mathematical introduction is given in [37], and a widely cited tutorial to particle filters and SMC methods is contained in [38].
Formally, an SMC method approximates a (classical) expectation for a function h(x) over a probability distribution p(x) defined on some parameter space Λ, x ∈ Λ, given by h̄ = ∫ h(x) p(x) dx, using a finite sum over a set of points x^(i) (i = 1 ... N) drawn from p(x), which is known as the target distribution. The expectation value for an arbitrary function can then be approximated by h̄ ≈ (1/N) Σ_{i=1}^{N} h(x^(i)). The larger the number of sample points, the better the approximation - in fact, under reasonable assumptions, the variance of the error in h̄ can be shown to scale as 1/N in any number of dimensions [39]. The problem is that, in most practical cases, the probability distribution is unknown. It needs to be estimated from a sequence of measurements. To do this, another distribution, the proposal distribution q(x), is used to draw the sample points, and the expectation is approximated by the weighted sum h̄ ≈ Σ_i w^(i) h(x^(i)) / Σ_i w^(i), where w(x) = p(x)/q(x) and the w^(i)'s are (unnormalized) weights associated with each sample point. In our case, each sample point is associated with a parameter value or a vector of values for each of the parameters being estimated. Initially, the sample points are randomly selected from a prior distribution, which covers the entire range of possible parameter values, and are given a uniform weight. As new measurements are added, the accuracy of the estimated quantity h̄ is improved by updating the weights associated with the particles to reflect the new information that the measurement contains. Some weights are increased and some weights are reduced when the measurement supports or contradicts the corresponding sample point, respectively. The values w^(i) are referred to as 'weights' rather than probabilities because, although they are related to probabilities, they are not necessarily normalized after each time step and do not necessarily sum to one. In practice, it is convenient to normalize the weights after each time step. Here, we denote the normalized weights by w̃^(i) [16]. When sample points have very low weight, and hence very low probability, they can be removed and replaced with alternative particles, but this resampling process must be done carefully so as to ensure that the statistical quantities remain unbiased and will converge efficiently to the desired values.
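The weighted approximation described above is self-normalized importance sampling. A small stand-alone illustration follows; the target density, proposal density and function h are arbitrary stand-ins, not quantities from the Hybrid SME problem:

    import numpy as np

    rng = np.random.default_rng(0)

    def h(x):                                   # function whose expectation is sought
        return x**2

    def p(x):                                   # target density (standard normal)
        return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

    def q(x):                                   # proposal density (wider normal)
        return np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2.0 * np.pi))

    x = rng.normal(0.0, 2.0, 5000)              # sample points drawn from the proposal
    w = p(x) / q(x)                             # unnormalized weights w(x) = p(x)/q(x)
    w_tilde = w / w.sum()                       # normalized weights
    h_bar = np.sum(w_tilde * h(x))              # weighted estimate of the expectation
    n_eff = 1.0 / np.sum(w_tilde**2)            # effective sample size, used further below

The same weighting and normalization steps carry over directly when the weights are instead conditioned on a measurement record.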
The proposal distribution should be simple to calculate, and different choices of q(x) are used in different variants of the SMC approach [36][37][38]. The secret to working with SMC methods is to pick a suitable proposal distribution to solve the problem in a robust manner using limited computational resources. In particular, a good choice of proposal distribution allows classical parameters to be estimated significantly more efficiently than when using an enumerative or grid-based method [38], as was used for a one-parameter Hybrid SME problem in [15,27].
For our Hybrid SME problem, we start by selecting an initial set of sample points in the parameter space using a prior distribution and initialize an SME (2) for each of the sample points. For the examples shown below, the quantum state of the system is initialized to be a thermal mixed state, and the prior distribution is chosen to be uniform over some finite range within which the true parameter values are known to lie. An accurate initial prior distribution can significantly reduce the number of particles required by the SMC sampler, but in many situations the prior is not well defined. Once the points have been selected, the weights are initialized to uniform values, w̃^(i)_0 = 1/N. For each time step, the individual SMEs are integrated using the increment (2), found using the parameter value λ^(i) associated with the particle. A corresponding measurement probability is found from (4) and used to update the (unnormalized) particle weight w^(i) for the k'th measurement from measurement operator L̂_r at time t_k. All particles are updated after the measurement increment and then the weights are normalized.
Resampling to generate new particles only occurs when the distribution of weights amongst the particles is such that the effective sample size (or the effective number of particles) N_eff = 1/(Σ_i (w^(i))²) falls below some threshold value - indicating that the weight is being concentrated in a relatively small number of particles and a significant number of particles have low weight and do not contribute to the estimates; a problem known in the SMC literature as sample impoverishment or weight degeneracy [37]. It is known that the variance of the weight distributions across different realizations of the SMC sampler is guaranteed to grow with each time step [38]. However, since a given realization (i.e. one run of the algorithm) does not have access to the ensemble of all possible realizations, it is convenient to monitor something that can be computed from a single realization. The effective sample size is well established as such a quantity [40] and it can be considered to be a noisy measurement of the (inverse of the) variance. Between resampling events, the variance of the weight distributions will (on average) increase and the effective sample size will decrease. While the precise threshold value used is a somewhat arbitrary choice for the algorithm designer, it is common (across the vast range of applications of SMC samplers and particle filters) to consider threshold values between N/10 and N/2. In the cases shown below, the threshold value for N_eff was set to be N/2 [16].
When the particles are resampled, the new candidate values λ̃ are sampled from the distribution formed from the current particle weights. The particles with the highest weights are more likely to be selected, although the particles with relatively low weights still have a chance of being selected. The new particle parameter values are then selected using the distribution q(λ̃|λ^(i)) = N(λ̃; λ^(i), Σ), where N(x; µ_x, S) is a normal distribution with mean µ_x and covariance S, and Σ is related to the covariance matrix associated with the current particle weights, Σ_k. The role of q(λ̃|λ^(i)) is to select new points, λ̃^(i), around the current particles with large weights, but not at exactly the same point. In this paper, we use a defensive strategy [16,41], where 90% of resampled points use a covariance which is 10% of the current covariance, Σ = 0.1Σ_k, and 10% of the resampled points use the full covariance matrix Σ = Σ_k. This allows for small perturbations in parameter space around the high weight sample points, including the correlations between different parameters seen in the covariance matrix, and a small number of large excursions, to explore more of the parameter space than is currently being covered by the sample points with large weight. There are two specific design considerations relevant to the choice of the distribution, q. The first is to ensure that having more samples will give rise to more accurate estimates of quantities of interest, which is manifest empirically as robustness. Put simply, this demands that samples are proposed in a way that explores possible but potentially low probability states. The second is to ensure that the SMC method is computationally efficient, i.e. that it gets as accurate an estimate as is possible with a given number of samples. This demands that samples are placed in high probability areas. A defensive proposal is an advanced, but relatively standard technique, used in particle filters and SMC samplers, that combines robustness with efficiency by having two elements to the proposal, one that is designed to ensure the sampler is robust and the other that is designed to ensure that it is efficient.
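One possible realization of this defensive move, written for a generic array of particles, is sketched below; the function name, argument names and array layout are illustrative assumptions rather than the authors' code:

    import numpy as np

    def defensive_resample(particles, weights, rng, shrink_prob=0.9, shrink=0.1):
        """Resample parameter vectors using the 90%/10% defensive proposal."""
        n, d = particles.shape
        sigma_k = np.atleast_2d(np.cov(particles.T, aweights=weights))  # weighted covariance
        parents = particles[rng.choice(n, size=n, p=weights)]           # weight-proportional selection
        use_small = rng.random(n) < shrink_prob
        new = np.empty_like(parents)
        for i in range(n):
            cov = shrink * sigma_k if use_small[i] else sigma_k
            new[i] = rng.multivariate_normal(parents[i], cov)
        return new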
When the new sample points have been selected, they are initially assigned the weight w̃^(i)_0 = 1/N and then the unnormalized weight for the new candidate points is recalculated by reusing the entire record of measurement increments, using (8). Once all of the new weights have been recalculated they can be renormalized, and the integration of the Hybrid SME can continue as normal. Where the new sample weights are still degenerate and the effective number of particles is still below the threshold value, the resampling (and recalculation) needs to be performed again.
To summarize, the SMC method used here can be described in the following five steps: (a) Initialize the individual density matrices ρ_{c,λ^(i)} using thermal mixed states, select a set of classical parameter values (sample points or particles) λ^(i) using a uniform distribution covering the full range of possible parameter values, and assign uniform weights to each of these. (b) Integrate the individual SMEs, updating their weights using (7). Continue this evolution until N_eff drops below the threshold value.
(c) If N_eff is below the threshold value, the classical parameter values are resampled using a cumulative probability distribution calculated from the particle weights. This resampling creates a new set of particles/sample points, where the classical parameters are selected around the 'parent' values. The defensive strategy introduces small perturbations around the parent values and the occasional large perturbation to explore a wider parameter space - the new weights associated with each of the new parameter values/sample points are uniformly distributed at this point.
(d) Once the new values have been selected, the complete evolution of each quantum state is recalculated using new initial thermal states and the individual SMEs (using the same measurement record), and the uniform weights from step (c) are recalculated using (8).
(e) Return to step (b), with the evolution of the quantum state and weights determined by the individual SMEs and the weight update (7), until N_eff drops below the threshold value again, at which point the resampling step (c) and the re-weighting step (d) are again required (a schematic code outline of this loop is given below).
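The five steps can be arranged into the following schematic loop. The SME integration step, the measurement likelihood and the resampling move are left as user-supplied callables because the corresponding equations (2), (4), (7) and (8) are not reproduced here; every name is an illustrative assumption, and the sketch omits refinements such as log-weights and repeated resampling.

    import numpy as np

    def run_smc_sampler(sample_prior, init_state, sme_step, likelihood, resample,
                        records, n_particles=101, threshold_frac=0.5, seed=0):
        rng = np.random.default_rng(seed)
        lam = sample_prior(n_particles, rng)              # (a) parameter samples from the prior
        rho = [init_state() for _ in range(n_particles)]  # (a) thermal mixed initial states
        w = np.full(n_particles, 1.0 / n_particles)       # (a) uniform weights
        for k, y_k in enumerate(records):
            for i in range(n_particles):                  # (b) integrate each SME one step
                rho[i] = sme_step(rho[i], lam[i], y_k)
                w[i] *= likelihood(rho[i], lam[i], y_k)   # (b) weight update
            w /= w.sum()
            if 1.0 / np.sum(w**2) < threshold_frac * n_particles:   # N_eff below threshold
                lam = resample(lam, w, rng)               # (c) defensive resampling of parameters
                w = np.full(n_particles, 1.0 / n_particles)
                for i in range(n_particles):              # (d) replay the full measurement record
                    rho[i] = init_state()
                    for y in records[:k + 1]:
                        rho[i] = sme_step(rho[i], lam[i], y)
                        w[i] *= likelihood(rho[i], lam[i], y)
                w /= w.sum()                              # (e) continue the main loop as normal
        return lam, w, rho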
A schematic example of the estimation process for a one dimensional parameter example is shown in Figure 1. In this example, it is possible to see that the initial uniform weighting of the particles evolves so that the relatively large number of particles below a parameter value of 1.0 carry very little weight, and the distribution of particles immediately after resampling is concentrated more towards the values above 1.0. The re-weighted parameter values shown in (d) represent a better approximation to the underlying probability distribution than those shown in (b), which contains significant gaps towards the peak of the distribution. For a more detailed description of the implementation, a full description of the SMC method is given as pseudo-code in [16]. The recalculation over the entire measurement history is an unfortunate, but necessary, computational cost in the SMC sampler. Recalculating the weights for the entire history of measurement increments will often take a significant amount of time. However, the need to regularly resample the entire set of particles reduces as the distribution of the particles improves to reflect the information contained in the measurements [16]. This means that the computational load introduced is biased towards the start of the calculation of a quantum trajectory. In addition, for resampled points very close to the parent particles, some approximations are possible based on the fact that the ratio between the products in (8) is very close to one. It is not possible to remove the recalculation entirely however without constraining the resampled parameter values and therefore not exploring the full parameter space.
V. EXAMPLE SYSTEM -DUFFING OSCILLATOR
The properties of the quantum trajectories generated by the Duffing oscillator have been studied extensively in terms of the appearance of chaotic behavior from quantum systems in the classical limit [42][43][44][45][46][47][48][49][50][51][52], but it is also a model used for a number of other practical systems where quantum effects in classical nonlinear systems are of interest. For example, it has been used to describe the motion of a levitated particle in an electromagnetic trap [53,54], and is the basis for the analysis of the properties of vibrating beam accelerometers [55][56][57]. The Hamiltonian for the Duffing oscillator can be written in the general form, using dimensionless position and momentum operators q̂ and p̂, Ĥ = ... + µq̂⁴ + g cos(t)q̂ + (Γ/2)(q̂p̂ + p̂q̂) (9), where the vector λ = (ω, µ, g) contains the three Hamiltonian parameters of interest: the natural (linear) oscillation frequency ω, the nonlinear coefficient µ, and the strength of the external driving term g. The measurement is applied via a Lindblad operator L̂ = √(2Γ) â, where â is the harmonic oscillator lowering operator, so that q̂ = (â† + â)/√2 and p̂ = i(â† − â)/√2, with [â, â†] = 1 and ℏ = 1. We fix the measurement strength so that Γ = 0.125 for all of the results presented here. The final term in the Hamiltonian is included because, in combination with the dissipative measurement process, it generates linear damping in momentum. This is a useful numerical addition because it keeps the phase space contained, thereby restricting the numbers of states required in the simulation, without affecting the underlying physics.
The numerical integration of the individual SMEs uses a method developed by Rouchon and colleagues [58,59] specifically for stochastic master equations. This method has been demonstrated to provide significant benefits in terms of accuracy versus computational resources when compared to standard methods, such as Milstein's method [60], for both systems involving small numbers of basis states [59] and large numbers of basis states [61]. We also employ a moving basis method used by Schack, Brun and Percival [42,43] to shift basis states to be centred on the current expectation value of the state. Although not strictly necessary [42,43], we shift the basis after each time step. This comes at a computational cost but it also ensures that the number of basis states employed is minimized. Once the evolution of the individual SMEs has been calculated, using the appropriate set of parameters, the combined density operator is calculated by averaging over all of the individual states, weighted appropriately by the particle weights.
VI. RESULTS
The Duffing Hamiltonian (9) has four classical parameters but we will fix the measurement strength so that Γ = 0.125 and we will concentrate on the estimation of the other three parameters: the linear oscillator frequency ω, the coefficient of the nonlinear term µ, and the magnitude of the drive term g. The estimated values for these three parameters are denoted by ω̂, μ̂, and ĝ, respectively. For all of the examples shown below, the actual parameter values were set to be ω = 1.2, µ = 0.15, and g = 3.0. The numerical integration of the SMEs was performed using time steps ∆t = 2π/500 so that there were 500 steps per period of the drive term. The individual SMEs for each particle/sample point used a moving basis with 15 harmonic oscillator states, and the composite state was calculated by combining the density matrices from the individual SMEs using (5), using a moving basis with 60 harmonic oscillator states. Figure 2 shows two examples for the estimation of the linear oscillator frequency ω̂. The examples correspond to the same stochastic record (i.e. the same realization) but with different measurement efficiencies. The blue lines correspond to the case where the measurement is 100% efficient (with η = 1). This shows a rapid convergence to the actual value, ω = 1.2, within about 50-100 periods/cycles of the drive term. The 3 sigma errors predicted for the estimate are also shown, together with the resampling events as blue circles. The convergence is fairly rapid and the estimate is relatively stable once converged. The red line on the same figure shows an example where the measurement is inefficient, corresponding to a measurement efficiency of 40% or η = 0.4 (chosen to match the estimated efficiency reported in [5]). In this case, the convergence is much slower, indicating that the measurement record contains less information upon which a parameter estimate can be constructed. In this case, the estimated parameter value only stabilizes after around 150-200 cycles of the drive term, and the larger estimated errors indicate this increased uncertainty. In both cases shown, there are slight variations in the estimated values (seen around 200-250 cycles) but these are relatively small and are well within the estimated errors. In addition, where the estimation process takes longer, the number of resampling events (red circles) tends to increase and they often occur later in the process than the corresponding resampling events for efficient measurements, leading to increased computational demands to recalculate the weights after resampling. In addition to the estimates, Figure 2 also shows the purity of the full estimated quantum state for both cases as an inset. For efficient measurements, the conditioned quantum state purifies very rapidly (1-2 periods of the drive term) and remains pure throughout the estimation process. For inefficient measurements, the conditioned quantum state purifies somewhat but then the purity fluctuates between 0.8 and 0.9. The state remains mixed because information about the quantum state is being corrupted by extraneous noise. This is a characteristic of inefficient measurements in quantum systems, and it is not affected by, and does not itself affect, the classical parameter estimation process. The resampling events are marked on Figure 2 as large dots, but they are also seen in Figure 3 as large jumps in N_eff after the resampling. The data in this figure is useful when optimizing the resampling parameters.
It provides information regarding the average number of particles being used. An efficient SMC process would be expected to show rapid fluctuations in N_eff in the initial phases of the estimation process, with frequent resampling, which would become more gradual drops in N_eff as the estimates improve. As time increases, and more measurements are added, the resampling events become less frequent, as is shown in Figures 2 and 3.
The estimation of the frequency of the linear oscillator term is relatively straightforward, and this is also found to be the case for the magnitude of the drive term g. Estimating the coefficient of the nonlinear term µ is more challenging, however. When the external drive is very small, the Duffing oscillator will appear to be approximately linear and estimating the degree of nonlinearity is problematic. As the amplitude of the drive is increased, the system will explore more of the nonlinear potential and µ will become easier to estimate. This fact is reflected in the results obtained. For the parameter values selected, the drive term is sufficiently strong to explore the nonlinearity of the potential, but not sufficiently strong so as to require very large numbers of basis states or to make the estimation process easy compared to the other two parameters. An example of the estimation of the nonlinear coefficient is shown in Figure 4, where the convergence to a stable value takes much longer than either example shown in Figure 2, requiring over 500 periods of the drive term to stabilize the estimated value (note the different x-axis compared to Figure 2).
Each of the examples shown in Figures 2 and 4 shows the estimation of one parameter; the other parameters are assumed to be known. The estimation of one parameter is relatively straightforward and a value can be found using a grid-based method (as was the case in [15] and [27]). The number of particles required for the estimation of ω̂ and μ̂ is around 101 sample points in each of the SMC examples shown above. The number of resampling events is around 4-6 in the cases shown in Figures 2 and 4, and the maximum number of quantum trajectories that would need to be calculated is approximately equivalent to 200-300 trajectories on a fixed grid Hybrid SME. The expected errors for a fixed grid approach are related to the grid spacing, which is related to the range over which the grid points are initially distributed. For the cases considered here, the initial distribution of points for a parameter value λ covers the range 0.5λ ≤ λ^(i) ≤ 1.5λ. Assuming that the actual values of λ are uniformly distributed across each interval, the expected error for a fixed grid approach with N_grid points would be limited by σ_{λ,grid} > (λ/N_grid)/√12 ≈ 0.1%-0.15% λ. This value is achievable only in the long time limit and the actual error is likely to be significantly larger than this.
In the examples given above, the SMC sampler produces parameter estimates with errors approaching this limit within a few hundred cycles. There is therefore a small but potentially significant benefit in using the SMC sampler method for one parameter estimation.
Moving from single to multiple parameter estimation presents a serious problem for grid-based methods. The number of points required scales exponentially in the number of dimensions to achieve the same accuracy. The error from a grid-based approach results from approximating an integral of functions in D dimensions, where the error is O(N_grid^(-1/D)). The error for an SMC sampler comes from approximating the integral directly (using Monte-Carlo integration) and therefore is O(1/N_grid) whatever value D takes [39]. (See reference [40] for proofs of the convergence of SMC and particle-filter-based methods.) So, for D = 1, the two approaches offer similar scaling of error with N_grid; in higher dimensions, an SMC sampler will asymptotically outperform a grid-based method as N_grid tends to infinity. Of course, differences in constants of proportionality mean that a computational benefit from using the SMC sampler in a small number of dimensions (number of parameters) is not guaranteed. Estimating all three parameters in our example, at a level of accuracy equivalent to the one-parameter examples above, would require around ten million grid points ((200-300)³ ≈ 10⁷). With SMC methods, this number is dramatically reduced. Figure 5 shows an example of the simultaneous estimation of all three parameters using 1001 sample points. The values for ω̂ and ĝ still converge rapidly whilst μ̂ takes longer to establish a stable estimate. When comparing this with a grid-based method, we note that the number of trajectories is larger than for the single parameter case and the number of resampling events is also increased, approximately 20 in the case shown in Figure 5. This is equivalent to a run-time for approximately 10,000 trajectories on a fixed grid. Using the same assumptions as before, this would give errors limited by σ_{λ,grid} > (λ/N_grid^(1/3))/√12 ≈ 1.5% λ. The errors found using the SMC sampler described above are nearly an order of magnitude smaller than this limit for one of the three parameters (ω) and comparable for the remaining two parameters (µ and g). There is an additional benefit, in that the sample points not only provide estimates of the parameter values, they also provide information regarding the correlations between the different parameters. For the example shown in Figure 5, the mean vector and the estimated covariance matrix (S) were also computed from the weighted sample points. Note also that in Figure 5, the standard deviation of the linear parameter (ω) is larger than in Figure 2 and the convergence is slower than for the single parameter case. This is partly due to the larger uncertainty generally in the three unknown parameters, and in part due to the slower convergence of the nonlinear parameter (μ̂). The coupling between the parameters, indicated by the non-negligible correlations in the covariance matrix, means that uncertainty in the nonlinear parameter increases the standard deviation of the other two parameters. The use of an SMC sampler to estimate the Hamiltonian parameter values directly from the quantum trajectories is more efficient than an equivalent grid-based method but it still presents a computational challenge. Solving a single SME can be simplified using a stochastic integration method designed specifically for SMEs, like Rouchon's method [58,59], and using efficient numerical tools, like moving basis states [42,43]. However, solving many simultaneous SMEs to determine the evolution of the particle weights still requires significant computational resources.
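For reference, the two quoted error limits follow directly from the stated numbers; the arithmetic below is our own check of those figures.

\sigma_{\lambda,\mathrm{grid}} \gtrsim \frac{\lambda/N_{\mathrm{grid}}}{\sqrt{12}}
  = \frac{\lambda}{300\sqrt{12}} \approx 1.0\times10^{-3}\,\lambda
  \quad (\text{one parameter},\ N_{\mathrm{grid}} \approx 300),
\qquad
\sigma_{\lambda,\mathrm{grid}} \gtrsim \frac{\lambda/N_{\mathrm{grid}}^{1/3}}{\sqrt{12}}
  = \frac{\lambda}{(10^{4})^{1/3}\sqrt{12}} \approx 1.3\times10^{-2}\,\lambda
  \quad (\text{three parameters},\ N_{\mathrm{grid}} \approx 10^{4}).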
The number of combinations of parameter values explored using the SMC sampler is significantly less than that required by a conventional grid-based method, but each sample point explored requires the full trajectory to be calculated, or recalculated after resampling. The number of SMEs required to be calculated can be said to be relatively small but it is still not a trivial exercise. In their favor, SMC methods are amenable to parallelization [16], since the evolution of each SME and the recalculation of each trajectory after resampling are largely independent processes and can be distributed simply across a number of processors. However, at present, it is more likely that this type of technique will be used for post-processing experimental data than as part of an on-line closed-loop control system.
VII. CONCLUSIONS
Continuous quantum measurements, and their associated stochastic master equations (SMEs), provide a means to monitor the dynamical evolution of a quantum system and to obtain an estimate of the underlying quantum state. In addition, the quantum trajectories resulting from the integration of stochastic master equations contain useful information about the parameters that govern the evolution of the system. Hybrid stochastic master equations provide a means to extract the information regarding these classical parameters. Hybrid SMEs involve running many parallel SMEs, each one having a different value for the parameter (or parameters). The classical probabilities attached to the individual SMEs and the associated parameter values can then be found by integrating a Kushner-Stratonovich equation. This classical estimation process is numerically costly, and is even more so when estimates are required for multiple parameters. This paper has demonstrated how such estimates can be found using a technique taken from classical state estimation and nonlinear filtering, a Sequential Monte Carlo (SMC) sampler. The SMC sampler used in this paper has been demonstrated to allow the simultaneous estimation of three Hamiltonian parameters, together with their statistical correlation and the associated quantum trajectories, in a computationally tractable form, with a relatively small number of candidate parameter values and parallel SMEs.
Even with such methods, the computational task in solving the Hybrid SME is formidable, and is currently beyond the point where it could be used as part of a closed-loop quantum control system. At present, the strength of such techniques lies in the ability to post-process experimental measurement data, not only to verify the quantum states used in an experiment but also to provide an independent, in-situ means to check the parameters that govern their evolution.
"year": 2017,
"sha1": "585ac6aed1ddda2c28325b7300950a292ddcb380",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.96.052306",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "585ac6aed1ddda2c28325b7300950a292ddcb380",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
246338453 | pes2o/s2orc | v3-fos-license | Cytogenetic and developmental toxicity of bisphenol A and bisphenol S in Arbacia lixula sea urchin embryos
Bisphenol S (BP-S) is one of the most important substitutes of bisphenol A (BP-A), and its environmental occurrence is predicted to intensify in the future. Both BP-A and BP-S were tested for adverse effects on early life stages of Arbacia lixula sea urchins at 0.1 up to 100 µM test concentrations, by evaluating cytogenetic and developmental toxicity endpoints. Embryonic malformations and/or mortality were scored to determine embryotoxicity (72 h post-fertilization). It has been reported in academic dataset that bisphenols concentration reached μg/L in aquatic environment of heavily polluted areas. We have chosen concentrations ranging from 0.1–100 μM in order to highlight, in particular, BP-S effects. Attention should be paid to this range of concentrations in the context of the evaluation of the toxicity and the ecological risk of BP-S as emerging pollutant. Cytogenetic toxicity was measured, using mitotic activity and chromosome aberrations score in embryos (6 h post-fertilization). Both BP-A and BP-S exposures induced embryotoxic effects from 2.5 to 100 µM test concentrations as compared to controls. Malformed embryo percentages following BP-A exposure were significantly higher than in BP-S-exposed embryos from 0.25 to 100 µM (with a ~5-fold difference). BP-A, not BP-S exhibited cytogenetic toxicity at 25 and 100 µM. Our results indicate an embryotoxic potential of bisphenols during critical periods of development with a potent rank order to BP-A vs. BP-S. Thus, we show that BP-A alternative induce similar toxic effects to BP-A with lower severity.
Introduction
Bisphenol-A (BP-A) is an industrial chemical that has been used extensively to produce certain plastics and resins (Corrales et al. 2015). Current literature has raised concern about BP-A's implications in several human chronic diseases (Rezg et al. 2014) and/or ecotoxicological complications (Corrales et al. 2015). These toxicologic impacts prompted different authorities to interdict this plasticizer from different industrial applications. Several countries have substituted the parental analog with bisphenol S (BP-S) under the "BP-A-free" label to indicate the safety of new products and reassure the consumer. However, the recent literature raised some doubts about the safety of "BPA-free" plastic products and has raised concern about their possible physiological disruptor properties and/or ecotoxicological effects 2022;Qiu et al. 2019;Rezg et al. 2018;Wu et al. 2018;Wan et al. 2018;Zhou et al. 2019). BP-S is used in consumer products present in daily life such as food containers, canned foods, personal care products, paper products, manufactured plastics, and in many other industrial applications (Liao et al. 2012;Liao and Kannan 2014). Although the impact of microplastics and BP-A on marine wildlife is reported (Shahul Hamid et al. 2018;Xu et al. 2020), the adverse effects of BP-A alternatives as emergent pollutants are less well understood.
Bisphenols enter aquatic environments through effluents discharged from wastewater treatment (when they are not completely removed before discharge), as well as directly from manufacturing industries, leachate discharges, and degradation of plastic litter (Corrales et al. 2015; Ying et al. 2009). Recently, BP-A and BP-S were detected as the predominant molecules in effluents of wastewater treatment plants in the US (Xue and Kannan 2019). Furthermore, BP-S has been detected in aquatic organisms and surface water samples from major rivers in many countries, reaching, e.g., 7.2 μg/L in the Adyar River, India (Yamazaki et al. 2015). As the usage of BP-A is predicted to decline further, environmental emissions of BP-S are likely to intensify in the future (Liu et al. 2021; Yu et al. 2015).
Sea urchins are an ecologically relevant animal group and a valuable model frequently used for toxicity bioassays (Goldstone et al. 2006; Oral et al. 2017; Pagano et al. 2017). To the best of our knowledge, no data in the literature describe the toxicity of BP-S in sea urchin embryos. Thus, the aim of this study was to evaluate embryotoxicity and cytogenetic toxicity of both BP-A and BP-S in sea urchin embryos.
Sea urchins
A. lixula, which is distributed in shallow rocky reefs all along the Mediterranean coasts and is an important grazer in sublittoral benthic communities, was used as the test organism (Guidetti and Mori 2005). Specimens were collected by hand from the coast at Seferihisar, Izmir, Turkey (38.152331, 26.823245). Twenty liters of seawater were bottled from the sea urchin habitat. Specimens and water samples were transferred to the laboratory in an icebox, and the water samples were then filtered with a 0.45 µm filter. Cytogenetic and developmental toxicity assays were carried out as described previously (Pagano et al. 2017). Cytogenetic toxicity tests were carried out in polystyrene test beakers with 3 replicates, whereas embryotoxicity tests were carried out in 6 replicates.
The choice of test concentrations was made according to Bošnjak et al. (2014) and based on the prediction that environmental emissions of BP-S are likely to intensify in the future (Liu et al. 2021;Yu et al. 2015). For this purpose, we selected concentrations ranging from 0.1 to 100 μM. Thus, the test concentrations of both chemicals were 0.1, 0.25, 1, 2.5, 10, 25, and 100 µM for both developmental and cytogenetic toxicity experiments.
Developmental and cytogenetic toxicity control groups consisted of untreated, healthy embryos (30 embryos/ml) in 10 ml of filtered seawater. Test chemicals were dissolved in dimethyl sulfoxide (DMSO); therefore, a DMSO (0.1% v:v) control group for each test was applied as well.
Embryological analysis
For embryotoxicity tests, BP-A or BP-S were placed at the bottom of each culture plate well [Falcon™ Tissue Culture Plates (6 wells, 10 ml/well)] and then suspended in 9 ml FSW. Thereafter, 1 ml of zygotes (10 min post-fertilization, p-f) was added to BP-A or BP-S and incubated at 18°C in the dark for 72 h. After the 72-h incubation, 10⁻⁴ M chromium sulfate was added to the culture wells and the larvae were scored on an inverted microscope (100×). Embryonic/larval developmental defects were scored blind by trained readers in 100 random embryos of each test group to determine the embryotoxic effects of the test chemicals, as classified in Fig. 1: N: normally developed plutei; P1: malformed pluteus (skeletal and/or gastrointestinal malformations); P2: developmental arrest at an abnormal blastula/gastrula stage (prepluteus-stage blockage). The percentage of developmental defects was calculated as %DD = P1 + P2. Another scored endpoint consists of the observation of dead plutei and dead pre-larval (or prehatching) embryos (D: early embryonic death). Thus, developmental defects plus mortality were determined as the sum P1 + P2 + D.
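To make the scoring arithmetic concrete, the following is a minimal sketch (not taken from the study) that computes %DD = P1 + P2 and the combined defect-plus-mortality score P1 + P2 + D from hypothetical counts; in the study, 100 random embryos were scored per test group.

```python
# Minimal sketch of the developmental-toxicity scoring described above.
# The counts are hypothetical placeholders, not data from the study.

def score_replicate(n_normal, n_p1, n_p2, n_dead):
    """Return %DD (= P1 + P2) and % defects-plus-mortality (= P1 + P2 + D)."""
    total = n_normal + n_p1 + n_p2 + n_dead
    pct = lambda count: 100.0 * count / total
    dd = pct(n_p1 + n_p2)                    # developmental defects only
    dd_plus_d = pct(n_p1 + n_p2 + n_dead)    # defects plus early embryonic death
    return dd, dd_plus_d

# Example with hypothetical counts out of 100 scored embryos:
dd, dd_plus_d = score_replicate(n_normal=62, n_p1=25, n_p2=8, n_dead=5)
print(f"%DD = {dd:.1f}, %DD+mortality = {dd_plus_d:.1f}")
```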
Cytogenetic analysis
Cytogenetic tests were carried out 6 h p-f and the embryos were fixed in Carnoy's solution (ethanol, chloroform, acetic acid; 6:3:1 V:V:V). The fixative was replaced with absolute ethanol right after fixation. Twenty-four hours after fixation, the absolute ethanol was renewed and the samples were ready to be observed under a light microscope (1000×) with oil immersion. Mitotic activity (numbers of metaphases and anaphases) and chromosome aberrations (chromosome bridges, lagging chromosomes, multipolar spindles, free chromosome sets, fragmented chromosomes), as shown in Fig. 2, were scored in each embryo, thus allowing assessment of both quantitative endpoints and mitotic anomalies.
Statistical analysis
All datasets gathered from the bioassays were statistically analyzed in IBM SPSS v20. Results of bioassays are given as mean ± standard error in the charts. Homogeneity of variances was checked by Levene's test. Differences between each concentration group and the controls were determined by two-tailed Independent Samples t-test. A normality test was performed and the significance of the difference among the groups was evaluated by One-way Analysis of Variance (ANOVA) with Tukey's HSD and Tamhane's T2 post-hoc tests. Kruskal-Wallis and Mann-Whitney U Tests were applied where ANOVA assumptions were not fulfilled. Differences were considered significant when p < 0.05.
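For illustration only, the same decision logic (Levene's test for homogeneity, the parametric route when its assumption holds, the non-parametric route otherwise) can be sketched in Python; the analysis in the study itself was performed in SPSS, and the arrays below are synthetic placeholders rather than experimental data.

```python
# Hedged sketch of the statistical workflow described above (the study used IBM SPSS v20).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {name: rng.normal(50 - 3 * i, 5, size=6)     # placeholder % defect data, 6 replicates
          for i, name in enumerate(["control", "0.1uM", "2.5uM", "25uM", "100uM"])}
samples = list(groups.values())

levene_p = stats.levene(*samples).pvalue               # homogeneity of variances

if levene_p > 0.05:
    overall_p = stats.f_oneway(*samples).pvalue        # one-way ANOVA
    pairwise = {name: stats.ttest_ind(groups["control"], data).pvalue
                for name, data in groups.items() if name != "control"}
else:
    overall_p = stats.kruskal(*samples).pvalue          # Kruskal-Wallis
    pairwise = {name: stats.mannwhitneyu(groups["control"], data).pvalue
                for name, data in groups.items() if name != "control"}

print(levene_p, overall_p, pairwise)                    # differences significant when p < 0.05
```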
Cytogenetic toxicity
The cytogenetic results for the BP-A plasticizer and its substitute BP-S are shown in Fig. 4. Mitotic activity in the embryos exposed to BP-A was inhibited at the 25 µM (p < 0.05, Student's t) and 50 µM (p < 0.01, Student's t) concentrations. At the concentrations of 25 and 50 µM, mitotic activity differed significantly between BP-A and BP-S (p < 0.05, Student's t) (Fig. 4a). The data in Fig. 4b also show that the number of embryos lacking mitotic figures (% Interphase Embryos, IE) differed at 25 to 50 µM BP-A vs. control and was significantly above the corresponding IE values induced by BP-S (p < 0.05, Student's t). As shown in Fig. 4d, a significant difference was observed in average total mitotic aberrations in embryos exposed to 25 to 50 µM BP-A compared to controls (p < 0.05, Mann-Whitney U test) and compared to embryos exposed to BP-S (p < 0.05, Mann-Whitney U test).
Discussion
Several studies have reported pleiotropic toxic effects of BP-A in aquatic vertebrates and invertebrates at environmental doses (Canesi and Fabbri 2015; Crain et al. 2007; Kang et al. 2007). BP-A-induced embryotoxicity was noted previously in sea urchins (Cakal Arslan and Parlak 2008), in zebrafish (Tse et al. 2013), in Xenopus (Gibert et al. 2011), and in rodents (Chen et al. 2013). It has been reported that BP-A can alter echinoderm physiology, reproduction, and development at environmental concentrations (Bošnjak et al. 2014; Roepke et al. 2005), which can reach 17.2 μg/L (Crain et al. 2007). BP-A can induce aberrant karyokinesis (division of the cell nucleus), leading to defective embryo development through the first cell division and retardation, along with general errors in cytoskeletal functioning in mitosis (Bošnjak et al. 2014).
The present report confirms BP-A-induced developmental and cytogenetic toxicity, whereas the replacement chemical (BP-S) altered A. lixula early life stages to a much lesser extent. BP-A was more potent than BP-S, in particular at 10, 25, and 100 μM (~5-fold), indicating the sensitivity of A. lixula embryos to these specific bisphenols during a critical developmental period. Analogous effects were also noted in Daphnia magna and in zebrafish embryos and larvae (Liu et al. 2021). Thus, we suggest that BP-S raises fewer, if any, environmental problems with its growing use in replacing BP-A.
The differing toxicity ranking of the bisphenols suggests that they may operate via distinct mechanisms.
Bisphenol concentrations in the aquatic environment of heavily polluted areas have been reported to reach the μg/L range (Liu et al. 2021). For example, levels of BP-S detected in surface waters of the Adyar River and Buckingham Canal in India have been found to range from non-detectable to 7.20 μg/L and from 0.058 to 2.1 μg/L, respectively. For BP-A, levels can reach 17.2 μg/L (Crain et al. 2007). Environmentally relevant bisphenol concentrations have also been cited as ranging from 0.1 to 1000 μg/L (Qiu et al. 2018). Before 2013, BP-S had been detected in freshwater and sewage sludge, but was rarely found in marine surface sediment. However, recent literature has shown that BP-S concentrations in aquatic environments have started to increase progressively. This observation may indicate that BP-S compounds are beginning to be used extensively all over the world, to different degrees across countries (Liu et al. 2021).
In addition, attention should be paid to the range of concentrations from 0.1 to 100 µM when developing environmental predictions and risk management, because the usage of BP-A is predicted to decline further while environmental emissions of BP-S are likely to increase in the future (Liu et al. 2021; Yu et al. 2015). Besides, BP-S is less biodegradable than BP-A in aquatic environments, which may lead to its accumulation in the biota (Danzl et al. 2009; Herrero et al. 2018). Thus, in this experimental protocol, we chose concentrations ranging from 0.1 to 100 μM in order to highlight BP-S effects in particular. This range is relevant in the context of evaluating the toxicity and ecological risk of BP-S as an emerging pollutant.
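To relate the molar test concentrations used here to the mass-based environmental levels quoted above, a simple unit conversion suffices; the sketch below assumes approximate molar masses of about 228.3 g/mol for BP-A and about 250.3 g/mol for BP-S, which are background values rather than figures from this study.

```python
# Hedged sketch: convert test concentrations (µM) to µg/L for comparison with field data.
MOLAR_MASS_G_PER_MOL = {"BP-A": 228.3, "BP-S": 250.3}   # approximate values (assumption)

def micromolar_to_ug_per_liter(conc_um, compound):
    # 1 µM = 1 µmol/L, and µmol/L multiplied by g/mol gives µg/L
    return conc_um * MOLAR_MASS_G_PER_MOL[compound]

for conc in (0.1, 0.25, 1, 2.5, 10, 25, 100):
    print(f"{conc:>6} µM  ->  BP-A: {micromolar_to_ug_per_liter(conc, 'BP-A'):8.1f} µg/L, "
          f"BP-S: {micromolar_to_ug_per_liter(conc, 'BP-S'):8.1f} µg/L")
```

Under these molar masses, even the lowest test concentration (0.1 µM, roughly 23-25 µg/L) already exceeds the highest surface-water levels cited above, so the tested range probes effects at and beyond currently reported environmental concentrations.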
Our data indicate that BP-S did not exert cytogenetic toxicity at any test concentration as compared to controls, whereas BP-A induced cytogenetic anomalies in particular at the high concentrations of 25 and 50 μM. In accordance with our data, several studies have reported that BP-A can induce DNA damage as well as structural and numerical chromosomal aberrations in vitro (Santovito et al. 2018; Xin et al. 2015) and in vivo (Izzotti et al. 2009). A recent study describes no cytogenetic effects for either BP-A or BP-S in human HepG2 cells (Hercog et al. 2020). It has also been reported that BP-S, compared to BP-A, has lower acute toxicity, similar or lower endocrine disruption, similar neurotoxicity and immunotoxicity, and lower reproductive and developmental toxicity (Qiu et al. 2018). On the other hand, to date there is a lack of information on the effects of BP-S on
Potential mechanisms for toxicity during larval development
A relationship has been found between species relatedness and the estrogen-agonist mode of action in BP-A-induced developmental alterations. Thus, a cross-species mode of action via estrogen signaling has been shown to lead to physiological changes in vertebrates (fish and mammals) and invertebrates (U.S. EPA 2005). Although research on endocrine disruptors and echinoderms has not been abundant, species-specific sensitivity of urchin species to BP-A and several other endocrine-disrupting compounds during larval-stage development was reported (Roepke et al. 2005). The authors concluded that EDCs could act with a different mode of action (other than estrogen signaling), leading to differential response and sensitivity in embryos of each species of sea urchin. Thus, the molecular mechanisms or modes of action underlying bisphenol-induced developmental and cytogenetic toxicity are still poorly understood in invertebrates owing to their pleiotropic effects. It is nevertheless instructive to offer some plausible mechanistic hypotheses. Endocrine disruption: while current knowledge of echinoderm endocrinology is still limited and not well understood, early evidence has indicated that echinoderm physiology acts via vertebrate-like hormones (such as steroids) (Sugni et al. 2007), and it has been reported that thyroid hormones are implicated in the echinoderm metamorphosis process (Heyland et al. 2005). Also, a genomic analysis of the sea urchin nervous system has revealed at least 37 putative G-protein-coupled peptide receptors and peptide hormones (Burke et al. 2006). Thus, in sea urchin embryos, hormones may act on specific targets during larval development, and any disruption could have a negative impact. Changes in the expression of a whole host of genes/gene networks may affect successful early developmental organisation and growth of larvae (Bošnjak et al. 2014). Lipid peroxidation and oxidative stress to DNA can result in developmental impacts and toxic effects of both BP-A and BP-S, as shown with a transcriptome approach in the zebrafish model. Epigenetic changes such as alterations in DNA methylation are a further possibility (Qin et al. 2021).
[Fig. 4 caption] Cytogenetic toxicity after BP-A or BP-S exposure in A. lixula sea urchin embryos. (a) Mean number of mitoses per embryo (*p < 0.05; **p < 0.01; ***p < 0.001 vs. control, Tukey's). (b) Percentages of interphase embryos (*p < 0.05; **p < 0.01; ***p < 0.001 vs. control, Student's t and Mann-Whitney U tests). (c) Metaphase/anaphase ratio (*p < 0.05; **p < 0.01; ***p < 0.001 vs. control, Student's t). (d) Percentage of affected embryos (percent embryos having ≥1 mitotic aberration) (*p < 0.05; **p < 0.01; ***p < 0.001 vs. control, Tukey's).
Conclusions
This study evaluated the effects of BP-A and BP-S on sea urchin embryos, providing data in support of assessing their potential ecological risks. Taken together, our results indicate an embryotoxic potential of both BP-A and its substitute BP-S during critical periods of sea urchin development, with a potency rank order of BP-A > BP-S. We thus show that the BP-A alternative, BP-S, induces toxic effects of lower magnitude than BP-A and with significantly lower severity, though this still suggests possible concerns regarding the use of this BP-A alternative. Ultimately, several studies have shed light on the embryotoxic potential of BP-A in humans, vertebrates, and invertebrates and raise concern about the safety of BP-A substitutes. Since the use of BP-A alternative compounds is increasing, further monitoring of the water environment and chronic toxicity testing in various aquatic organisms appear to be necessary.
Funding This work was supported by Faculty of Fisheries, Ege University and The Tunisian Ministry of Higher Education and Scientific Research. Open access funding provided by Università degli Studi di Napoli Federico II within the CRUI-CARE Agreement.
Compliance with ethical standards
Conflict of interest The authors declare no competing interests.
Consent for publication All authors consent for publication.
Ethics approval Not applicable.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. | 2022-01-28T16:57:15.939Z | 2022-01-24T00:00:00.000 | {
"year": 2022,
"sha1": "1826cbe3ce98d87d3bdec659c8e1ead387daeced",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10646-022-02568-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "94d2cedb66646fc72cce111ec15f9c60cf015bcc",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
6935659 | pes2o/s2orc | v3-fos-license | Pricing and Remanufacturing Decisions of a Decentralized Fuzzy Supply Chain
The optimal pricing and remanufacturing decisions problem of a fuzzy closed-loop supply chain is considered in this paper. Particularly, there is one manufacturer who has incorporated a remanufacturing process for used products into her original production system, so that she can manufacture a new product directly from raw materials or from collected used products. The manufacturer then sells the new product to two different competitive retailers, respectively, and the two competitive retailers are in charge of deciding the rates of the remanufactured products in their consumers’ demand quantity. The fuzziness is associated with the customer’s demands, the remanufacturing and manufacturing costs, and the collecting scaling parameters of the two retailers. The purpose of this paper is to explore how the manufacturer and the two retailers make their own decisions about wholesale price, retail prices, and the remanufacturing rates in the expected value model. Using game theory and fuzzy theory, we examine each firm’s strategy and explore the role of the manufacturer and the two retailers over three different game scenarios. We get some insights into the economic behavior of firms, which can serve as the basis for empirical study in the future.
Introduction
In recent years, the management of closed-loop supply chains has gained growing attention from both business and academic research because of environmental consciousness, environmental concerns, and stringent environmental laws, for example, the legislation on producer responsibility, requiring companies to take back products from customers and to organize for proper recovery and disposal. This legislation is partially due to increased awareness of environmental issues. However, smart companies have also understood that used products often contain lots of value to be recovered. They manage closed-loop supply chains simply because it is a profitable business proposition. It is said that the costs derived from reverse-logistics activities in the USA exceed $35 billion per year; remanufacturing is a $53 billion industry in the USA [1].
Without a doubt, closed-loop supply chains have become a matter of strategic importance: an element that companies must consider in decision-making processes concerning the design and development of their supply chains [2]. A specific type of closed-loop supply chain is the product manufacturing and remanufacturing supply chain. Product remanufacturing is the process that restores used products or product parts to an "as good as new" condition, after which they can be resold on the market of new products. The industrial operations involved with remanufacturing are of a very uncertain nature due to the uncertainty in timing, quantity, and quality of collected products. So one of the important management issues in product manufacturing and remanufacturing closed-loop supply chains is to effectively match demand and supply by dealing with the uncertainty of the quality and quantities of the collected products and of the market demand.
In fact, in order to achieve effective closed-loop supply chain management, the uncertainties that occur in the real world cannot be ignored. Those uncertainties are usually associated with the product supply, used-product collecting, the customer demand, and so on. Traditional probabilistic concepts have been used to model the various parameters in many of today's published studies on reverse logistics [3][4][5]. However, probability-based approaches may not be sufficient to reflect all uncertainties that may arise in real-world manufacturing and remanufacturing closed-loop supply chains. Modelers may face some difficulties while trying to build a valid model of a manufacturing and remanufacturing closed-loop supply chain in which the related costs cannot be determined precisely. For example, costs may depend on some foreign monetary unit, the current interest rate, a stock keeping unit's market price, and the quality of the collected product, which may not be known precisely. Since some uncertainty within manufacturing and remanufacturing closed-loop supply chains cannot be considered appropriately using concepts of probability theory, quantitative demand forecasts based on managers' judgements, intuitions, and experience seem to be more appropriate, and fuzzy theory rather than probability theory should be applied to model this kind of uncertainty [6]. Fuzzy theory provides a reasonable way to deal with possibility and linguistic expressions. Zadeh [7] initialized the concept of a fuzzy set via the membership function. From then on, many researchers such as Nahmias [8] and Kaufmann and Gupta [9] made great contributions to this field. Recently, Liu [10] and B. Liu and Y. K. Liu [11] laid a new foundation for optimization problems in the fuzzy environment, in which the expected value was proposed to deal with optimization problems.
In recent supply chain studies, some researchers have already adopted fuzzy theory to depict uncertainties in supply chain models [12][13][14][15][16]. Li et al. [17] obtained the optimal order quantity for the fuzzy newsboy models through fuzzy ordering of fuzzy numbers with respect to their total integral values. Mukhopadhyay and Ma [18] addressed the issue of a hybrid system where both used and new parts can serve as inputs in the production process to satisfy an uncertain market demand. Kao and Hsu [19] proposed a newsboy model for cases of fuzzy demand. They obtained the optimal policy to minimize the total cost by adopting a method for ranking fuzzy numbers.
Although some research on the forward supply chain has considered the supply chain's fuzzy uncertainties, to our knowledge little research on the reverse supply chain considering fuzzy uncertainties has been reported. So, in this paper, we consider a fuzzy manufacturing and remanufacturing closed-loop supply chain with one manufacturer and two competitive retailers; the fuzziness is associated with the consumer demand, the manufacturing and remanufacturing costs of the new product, and the collecting cost of the used product. In the forward supply chain, the manufacturer has incorporated a remanufacturing process for used products into her original production system, so that she can manufacture a new product directly from raw materials, or remanufacture part or all of a collected unit, and wholesales the new products to the two competitive retailers who then sell them to the end consumers. In the reverse supply chain, the two competitive retailers are in charge of collecting the used products from the consumers, respectively. Using game theory and fuzzy theory, the optimal decisions for each supply chain participant are explored in the expected value model. Some management insights are given in this paper.
The rest of the paper is organized as follows. Section 2 gives the problem description and notations, and Section 3 details our key analytical results. Numerical studies are given in Section 4. Concluding remarks are presented in Section 5.
Problem Description
Consider a closed-loop supply chain in a fuzzy environment with one manufacturer and two competitive retailers, labeled retailer 1 and retailer 2. In the following discussion, "she" represents the manufacturer and "he" represents one of the two retailers. In the forward supply chain, similar to Savaskan et al. [20], assume that the manufacturer has incorporated a remanufacturing process for used products into her original production system, so she can manufacture a new product directly from raw materials with unit manufacturing cost c_m, or from collected products with unit remanufacturing cost c_r; c_m and c_r are both fuzzy variables. (For the preliminaries of fuzzy theory used in this paper, see the preliminaries in [16].) The manufacturer wholesales the new product to the two competitive retailers with unit wholesale price w, and the two competitive retailers then sell it to the consumers with unit retail price p_i, which is a decision variable of retailer i. We assume that the two retailers are equally powerful and compete in one common market, and all activities occur within a single period. The two competitive retailers face fuzzy linear consumer demands that are influenced by the retail prices of the new product charged by the two retailers, respectively. The manufacturer and the two competitive retailers must make their pricing strategies in order to achieve optimal expected profits, and behave as if they have perfect information about the demands and the cost structures of the other channel members. In the reverse supply chain, the two competitive retailers are in charge of deciding the collecting rates of the remanufactured products in the consumers' demand quantity, denoted as τ_i, and taking back the used products from the end consumers with taking-back cost C(τ_i) (i = 1, 2); according to our survey results, assume that C(τ_i) = k_i τ_i², where k_i is a scaling parameter, which is a fuzzy variable. The manufacturer will take back all the used products collected by the two competitive retailers with unit transfer cost c_t, which is a fuzzy variable.
We define retailer i's price-dependent demand as a linear function (1) of the two retail prices, decreasing in his own retail price and increasing in the competitor's price, where ã and β̃ are nonnegative fuzzy variables: ã denotes the primary demand of retailer i's product, and β̃ denotes the measure of the responsiveness of each retailer's market demand to its competitor's price. We assume that the fuzzy linear demand (1) is symmetrical. This represents a situation in which the two retailers have equal competing power in a duopolistic marketplace. We assume that E[β̃] < 1, which ensures that the response functions are negatively sloped and, in turn, ensures the existence of the equilibrium solutions. This seems reasonable since sales are relatively more sensitive to price at a retailer's own outlet(s) than at the competing retailer's outlets. Similar demand functions have been used widely in the marketing research literature (see [21][22][23][24]) and in some of the economics literature (see [25][26][27]). Moreover, in this paper, we assume that the fuzzy variables c_m, c_r, ã, β̃, c_t, k_1, and k_2 are all independent and nonnegative, which is reasonable in the real world.
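Numerically, the expected values used throughout the expected value model can be computed from triangular fuzzy parameters. The sketch below assumes Liu's credibility-based operator E[(l, m, r)] = (l + 2m + r)/4 for a triangular fuzzy variable and a symmetric linear demand of the form D_i = ã − p_i + β̃ p_{3−i}; this demand specification is consistent with the verbal description above but is an illustrative assumption, not a formula quoted from the paper.

```python
# Hedged sketch: expected values of triangular fuzzy parameters and the resulting expected demand.
# Assumed demand form: D_i = a~ - p_i + b~ * p_j (illustrative only).

def expected_triangular(l, m, r):
    # Credibility-based expected value of a triangular fuzzy variable (l, m, r)
    return (l + 2.0 * m + r) / 4.0

def expected_demand(p_own, p_other, a_tri, beta_tri):
    return expected_triangular(*a_tri) - p_own + expected_triangular(*beta_tri) * p_other

a_tri = (380, 400, 420)       # placeholder "large" market base
beta_tri = (0.4, 0.5, 0.6)    # placeholder "sensitive" price elasticity
assert expected_triangular(*beta_tri) < 1     # the condition E[beta~] < 1 assumed in the text
print(expected_demand(p_own=60.0, p_other=60.0, a_tri=a_tri, beta_tri=beta_tri))
```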
In our models, the manufacturer can influence the demand by setting the new product's wholesale price; the two competitive retailers can independently decide the retail price of the new product and the collecting rate of the used product. We do not assume any collusion or cooperation among firms; this assumption is typical in analytical models, although it overstates the information climate of the real world. The logistics cost components of the manufacturer and the two retailers (e.g., carrying cost, inventory cost, etc.) are not considered, for analytical convenience.
Assume each channel member has the same goal: to maximize his/her own expected profit. From the above descriptions, the two competitive retailers' objectives are to maximize their own expected profits, denoted as E[π_1] and E[π_2]. The manufacturer's objective is to maximize her own expected profit, denoted as E[π_m]. Note that so far we have not made any assumptions regarding the bargaining power possessed by each channel member. The assumption regarding the bargaining power possessed by each firm can influence how the pricing game is solved in our model. Variation in bargaining power in a particular supply chain can create one of the following three scenarios: (1) Manufacturer Stackelberg: the manufacturer has more bargaining power than the two competitive retailers and thus is the Stackelberg leader. (2) Retailer Stackelberg: the two competitive retailers have more bargaining power than the manufacturer and are the Stackelberg leaders. (3) Vertical Nash: every firm in the system has equal bargaining power.
Model Analysis
To analyze our model, we follow a game theory approach. The leader in each scenario makes his/her decision to maximize his/her own expected profit, conditioned on the follower's response. The problem can be solved backwards. We begin by first solving for the decision of the follower of the game, given that he/she has observed the leader's decision. For example, in the Manufacturer Stackelberg game, the two competitive retailers' decisions are derived first, given that the two competitive retailers have observed the decision made by the manufacturer (on the wholesale price). Then, the manufacturer solves her problem given that she knows how the two competitive retailers would react to her decision.
Manufacturer Stackelberg
3.1.1. Retailers' Decisions. In the Manufacturer Stackelberg game case, the manufacturer first announces her wholesale price for the new product. The two competitive retailers observe the wholesale price and then simultaneously decide the retail prices they are going to charge for their own product and the collecting rates of the used products. Note that the two competitive retailers move simultaneously. Therefore, we need to calculate the Nash decisions between them first. Proposition 1. The two competitive retailers' optimal retail prices and optimal collecting rates of used products, given the earlier decision made by the manufacturer, are given by (6)-(9). Proof. Using (3), the expected value of π_1 can be written explicitly as (11). From (11), the first-order partial derivatives of E[π_1] with respect to p_1 and τ_1, and of E[π_2] with respect to p_2 and τ_2, can be computed; setting them to zero yields the first-order conditions (13). Solving (13) simultaneously, we easily obtain (6)-(9), so Proposition 1 is proven.
Manufacturer's Decision.
The manufacturer in this game is the Stackelberg leader. She announces her new product's wholesale price w. Using the retailers' decisions, we can derive the manufacturer's optimal wholesale price. This is carried out by maximizing the manufacturer's expected profit E[π_m], given the two competitive retailers' decisions, which are given as in Proposition 1. The manufacturer chooses the wholesale price w to maximize her own expected profit E[π_m], where p_1*, p_2*, τ_1*, τ_2* are defined as in (6)-(9), respectively.
Proof. With some manipulations, the expected value E[π_m] of π_m, defined in (5), can be rewritten as (16). With (6)-(9) and (16), the first-order derivative of E[π_m] with respect to w can be obtained as (17). Therefore, by setting (17) to zero, we easily obtain (15).
Proof. By Propositions 1 and 2, we can easily see that Proposition 3 holds.
Retailer Stackelberg.
The Retailer Stackelberg scenario arises in markets where the two competitive retailers are large compared to their manufacturer. Because of their sizes, the two competitive retailers can maintain their margin on sales while squeezing profit from their suppliers. A game-theoretic framework similar to that applied in the Manufacturer Stackelberg case is implemented to solve this problem. First, the manufacturer's problem is solved to derive her decision conditional on the retail prices and collecting rates chosen by the two competitive retailers. The two competitive retailers' problems are then solved given that the two competitive retailers know how the manufacturer would react to their retail prices and collecting rates. Without loss of generality, let m_i be the margin enjoyed by retailer i from selling the new product, namely m_i = p_i − w, where m_i > 0.
Manufacturer's Decision.
Since the two competitive retailers move first in this game, we need to calculate the manufacturer's decision. The manufacturer is trying to maximize her own expected profit E[π_m], which is defined as in (16).
Proposition 4. In the Retailer Stackelberg game case, the manufacturer's optimal decision, given the retail prices p_1 and p_2 and the collecting rates τ_1 and τ_2, is given by (20). Proof. Using (16), we obtain the first-order derivative of E[π_m] with respect to w as (22). We can easily see that Proposition 4 holds by setting (22) to zero and solving it.
Retailers' Decisions.
Having the information about the decision of the manufacturer, each retailer would then use it to maximize his own expected profit E[π_i], which is defined as in (11).
Note that the two competitive retailers move simultaneously. Therefore, we need to calculate the Nash decisions between them first. Proposition 5. In the Retailer Stackelberg game case, the optimal retail price and collecting rate of retailer 1 (denoted as p_1* and τ_1*, respectively) and the optimal retail price and collecting rate of retailer 2 (denoted as p_2* and τ_2*, respectively) are given by (23)-(26). Proof. By (11) and (20), we obtain the first-order partial derivatives of E[π_1] with respect to p_1 and τ_1 and of E[π_2] with respect to p_2 and τ_2, where w* is defined in (20). Setting them to zero gives the first-order conditions (31). Solving (31) simultaneously, we can easily see that Proposition 5 holds. Proposition 6. In the Retailer Stackelberg game case, the manufacturer's optimal decision w* is obtained by substituting p_1*, p_2*, τ_1*, τ_2*, defined as in (23)-(26), respectively, into (20) (see (34)). Proof. By Propositions 4 and 5, we can easily see that Proposition 6 holds.
Vertical Nash.
In the Vertical Nash model, every firm has equal bargaining power and thus they make their decisions simultaneously. This scenario arises in a market in which there are relatively small- to medium-sized manufacturers and retailers. Since the manufacturer cannot dominate the market over the two competitive retailers, her price decision is conditioned on how the two competitive retailers price the new product. On the other hand, the two competitive retailers must also condition their own retail price and collecting rate decisions on the wholesale price. Note that the conditional decisions of the two competitive retailers and the manufacturer have already been derived in the Manufacturer Stackelberg and Retailer Stackelberg game cases, respectively. From the Manufacturer Stackelberg game, the two competitive retailers' decisions for a given wholesale price are given in (6)-(9). From the Retailer Stackelberg game, the manufacturer's decision for given retail prices p_1 and p_2 and collecting rates τ_1 and τ_2 is given in (20).
Solving (6)-(9) and (20) simultaneously yields the Nash decision solution. The optimal Nash decisions can thus be derived and are given in Proposition 7.
Proposition 7. In the Vertical Nash case, the optimal retail prices (denoted as p_1* and p_2*) chosen by retailer 1 and retailer 2, respectively, the optimal collecting rates (denoted as τ_1* and τ_2*) chosen by retailer 1 and retailer 2, respectively, and the optimal wholesale price (denoted as w*) chosen by the manufacturer are obtained from this system. Proof. Solving (6)-(9) and (20) simultaneously, we can see that Proposition 7 holds.
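As a numerical illustration of the Vertical Nash computation, the first-order conditions of all three firms can be solved simultaneously once crisp expected values replace the fuzzy parameters. The demand form, the quadratic collection cost k_i τ_i², the transfer payment c_t per collected unit, and the per-unit remanufacturing saving Δ = c_m − c_r − c_t used below are illustrative assumptions in the spirit of the model, not the paper's exact expressions; the parameter values echo the "about" values of Section 4.

```python
# Hedged sketch: solve the Vertical Nash first-order conditions numerically.
# Assumed (illustrative) profit forms:
#   retailer i:    (p_i - w) * D_i + c_t * tau_i * D_i - k_i * tau_i**2
#   manufacturer:  (w - c_m) * (D_1 + D_2) + delta * (tau_1 * D_1 + tau_2 * D_2)
# with D_i = a - p_i + beta * p_j and delta = c_m - c_r - c_t.
import numpy as np
from scipy.optimize import fsolve

a, beta, c_m, c_r, c_t = 400.0, 0.5, 29.0, 16.0, 4.0
k1, k2 = 800.0, 850.0
delta = c_m - c_r - c_t

def demands(p1, p2):
    return a - p1 + beta * p2, a - p2 + beta * p1

def foc(x):
    p1, p2, t1, t2, w = x
    d1, d2 = demands(p1, p2)
    return [
        d1 - (p1 - w + c_t * t1),                          # d(retailer 1 profit)/dp1 = 0
        d2 - (p2 - w + c_t * t2),                          # d(retailer 2 profit)/dp2 = 0
        c_t * d1 - 2.0 * k1 * t1,                          # d(retailer 1 profit)/dtau1 = 0
        c_t * d2 - 2.0 * k2 * t2,                          # d(retailer 2 profit)/dtau2 = 0
        (d1 + d2) - (1.0 - beta) * (2.0 * (w - c_m)        # d(manufacturer profit)/dw = 0,
                                    + delta * (t1 + t2)),  # holding margins p_i - w and tau_i fixed
    ]

solution = fsolve(foc, x0=[250.0, 250.0, 0.1, 0.1, 150.0])
print(dict(zip(["p1", "p2", "tau1", "tau2", "w"], np.round(solution, 3))))
```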
Numerical Studies
In this section, we compare the results obtained from the above three different decision scenarios using a numerical approach and study the behavior of firms facing a changing environment. From the results obtained above, we can easily see the expressions of the optimal wholesale price, retail prices, collecting rates, and optimal expected profits under the different decision scenarios. Here, assume that the relationship between linguistic expressions and triangular fuzzy variables for the manufacturing cost, remanufacturing cost, market base, scaling parameters, collecting transfer cost, and price elasticity is determined by experts' experience, as shown in Table 1.
Consider the case in which the remanufacturing and manufacturing costs are high (about 16 and about 29, respectively), the market base ã is large (about 400), the price elasticity β̃ is sensitive (about 0.5), the taking-back transfer cost is medium (about 4), and the scaling parameters k_1 and k_2 are medium (k_1 is about 800, k_2 is about 850). Using Table 1, the cost of about 16 corresponds to the triangular fuzzy variable (14, 16, 19), and the other parameters are assigned analogously; the resulting optimal decisions and expected profits are reported in Tables 2 and 3.
Observation
From Tables 2 and 3, we have the following results.
(1) For the three decentralized decision cases, the firm who is the leader in the supply chain has the advantage of obtaining a higher profit; for example, the manufacturer's profit under the Manufacturer Stackelberg game scenario is higher than that under the Retailer Stackelberg game scenario and the Vertical Nash game case.
The two competitive retailers obtain their minimal expected profits under the Manufacturer Stackelberg game scenario.
(2) The new product's optimal retail prices charged by the two competitive retailers under the Vertical Nash decision case are lower than those under the Manufacturer Stackelberg and Retailer Stackelberg decision cases, and the optimal retail prices reach their highest values under the Manufacturer Stackelberg game scenario.
(3) The new product achieves the highest wholesale price in the Manufacturer Stackelberg game, followed by the Vertical Nash game and then the Retailer Stackelberg game case.
(4) The optimal collecting rates of the used products chosen by the two competitive retailers are highest in the Retailer Stackelberg game, followed by the Vertical Nash game and then the Manufacturer Stackelberg game case.
Conclusions
Different from conventional studies, this paper explores the roles of the two competitive retailers and the manufacturer and their bargaining powers by examining the supply chain in a fuzzy environment over three different game scenarios. We derive the expressions for the optimal retail prices, wholesale price, and collecting rates with the expected value model. By analyzing a numerical example, we further examine the analytical solutions and give some managerial analysis. Compared to the traditional approach used in the study of closed-loop supply chains, the proposed approach requires less data to model the fuzziness associated with the consumer demand, the manufacturing and remanufacturing costs of the new product, and the collecting cost of the used product, and it can make use of subjective estimation based on the decision maker's judgment, experience, and intuition. It is appropriate when the situation is ambiguous and historical data are lacking.
However, we have made some assumptions that may be relaxed to improve the model in future research. One assumption is that the demand function is linear; further work is desirable to test whether our conclusions extend to other forms of demand function. The other assumptions are that the closed-loop supply chain covers only one period and that competition exists only in the retail process. Thus, supply chains with competitive manufacturers and/or competitive retailers, and models over multiple periods, can be considered in the future.
Table 1:
Relation between linguistic expression and triangular fuzzy variable.
Table 3:
Optimal decisions of retail prices, wholesale price, and collecting rates. | 2017-07-27T15:29:57.806Z | 2013-02-28T00:00:00.000 | {
"year": 2013,
"sha1": "dae632be498694c8fce9ceaba64b63c37c3b78fe",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ddns/2013/986704.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dae632be498694c8fce9ceaba64b63c37c3b78fe",
"s2fieldsofstudy": [
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
15901588 | pes2o/s2orc | v3-fos-license | Possible Production of High-Energy Gamma Rays from Proton Acceleration in the Extragalactic Radio Source Markarian 501
The active galaxy Markarian 501 was discovered with air-Cerenkov telescopes at photon energies of 10 tera-electron volts. Such high energies may indicate that the gamma rays from Markarian 501 are due to the acceleration of protons rather than electrons. Furthermore, the observed absence of gamma ray attenuation due to electron-positron pair production in collisions with cosmic infrared photons implies a limit of 2 to 4 nanowatts per square meter per steradian for the energy flux of an extragalactic infrared radiation background at a wavelength of 25 micrometers. This limit provides important clues about the epoch of galaxy formation.
Gamma rays (γ rays) from cosmic sources impinging on Earth's atmosphere initiate electromagnetic showers in which the energy of the primary γ ray is imparted among secondary electron-positron pairs. The blue Cerenkov light emitted by the pairs in the atmosphere can be detected from the ground with optical telescopes triggering on the short (∼ 1 ns) optical pulses. The technique has advanced considerably in recent years (1), and some surprising discoveries have been made. Among them is the detection of the blazar Markarian 501 (Mrk 501) at energies above 10 TeV (1 TeV = 10^12 eV) (2).
Blazars are remote but very powerful sources characterized by their variable polarized synchrotron emission. They are associated with radio jets (bipolar outflows) emerging from giant elliptical galaxies seen at small angles with the line of sight. Mrk 501 is ∼ 3 × 10^8 light years from Earth but nevertheless produces a tera-electron volt γ ray flux during outbursts that is many times stronger than that of the Crab nebula, a supernova remnant inside our Milky Way at a distance of only 6 × 10^3 light years. The radiation mechanism responsible for the γ rays could be either inverse-Compton scattering of low-energy photons by accelerated electrons (3) or pion production by accelerated protons. In the latter case, the sources could be among the long-sought sources of cosmic rays, that is, the isotropic flux of relativistic particles with differential number density (N) spectrum dN/dE ∝ E^{-2.7} (for energies E < 10^3 TeV), mainly consisting of protons and ions (4).
Particle acceleration in astrophysics is typically observed to be associated with (collisionless) shock waves when a supersonic flow of magnetized material hits a surrounding medium. Examples of shock waves are shell-type supernova remnants (explosion of a massive star), plerions (pulsar wind), γ ray bursts (relativistic ejecta from the collapse of a compact stellar object), or the jets ejected from active galactic nuclei (collimated relativistic wind from the accretion disk around a supermassive black hole).
In the theoretical picture of shock acceleration, relativistic particles (protons, ions, electrons) scatter elastically off turbulent fluctuations in the magnetic field on both sides of the shock and thereby gain energy because of the convergence of the scattering centers (approaching walls). The acceleration time scale for the process can be written as t_acc = ξ r_g c/u², where u denotes the velocity of the shock wave (c is the speed of light) and r_g ∝ E/B denotes the radius of gyration of a particle with energy E in a magnetic field of strength B. The effects of shock obliquity, turbulence spectrum, and other unknowns are conveniently hidden in an empirical factor ξ ≥ 1. The most rapid (gyrotime scale) particle acceleration for relativistic shocks corresponds to ξ = 1 (5). Balancing the acceleration time scale with the energy loss time scale due to synchrotron radiation, t_syn ∝ B^{-2} E^{-1}, one obtains the maximum energy of the electrons E_max = 10 (ξ/10)^{-0.5} (B/3 µG)^{-0.5} (u/10^8 cm s^{-1}) TeV (6). The observed 10 TeV γ rays from the Crab nebula (7) and the observed synchrotron x-rays in shell-type supernova remnants (8) (corresponding to 10 TeV electrons) require ξ ∼ 1 to 10. Because protons lose less energy, they can reach larger E_max's than electrons and give rise to γ ray emission even above ∼ 10 TeV via pion production and subsequent pion decay. Although shock acceleration theory predicts that most of the cosmic rays are accelerated in supernova remnants (4), no definitive γ ray signature has yet been discovered.
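To make the scaling explicit, the quoted estimate E_max ≈ 10 (ξ/10)^{-0.5} (B/3 µG)^{-0.5} (u/10^8 cm s^{-1}) TeV can be evaluated for a few representative shock parameters; the sketch below is an illustration only, and the specific parameter combinations are placeholders.

```python
# Hedged sketch: synchrotron-limited maximum electron energy for diffusive shock acceleration,
# using the scaling quoted above.

def e_max_tev(xi, b_microgauss, u_cm_per_s):
    return 10.0 * (xi / 10.0) ** -0.5 * (b_microgauss / 3.0) ** -0.5 * (u_cm_per_s / 1e8)

# Supernova-remnant-like conditions (microgauss fields, shock speeds of a few 1000 km/s):
print(e_max_tev(xi=10, b_microgauss=3.0, u_cm_per_s=1e8))   # reproduces the ~10 TeV benchmark
print(e_max_tev(xi=1,  b_microgauss=3.0, u_cm_per_s=1e8))   # most rapid acceleration, xi = 1
```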
It is commonly argued that the assumption of electron acceleration also suffices to explain the γ rays from blazar jets such as Mrk 501 (9). Estimates of the magnetic field strength in the γ ray emitting part of the jet in Mrk 501 then yield values in the range B ∼ 0.04 to 0.7 G. This magnetic field is much stronger than the one in supernova remnants, and the associated stronger cooling of the relativistic electrons due to synchrotron energy losses reduces E_max accordingly. The effect is almost compensated by the high shock wave velocities in extragalactic radio sources, which speed up the acceleration rate. Using radio interferometry, shock wave velocities close to the speed of light have been inferred, corresponding to typical bulk Lorentz factors in the range Γ_jet = (1 − β²)^{-0.5} ∼ 2 to 10 (β = u/c), with a few cases of still higher values (10). Due to the alignment of the jet axis and the line of sight in Mrk 501, superluminal motion has not been observed. With u = c, ξ = 10, and taking into account equal synchrotron and inverse-Compton losses, one obtains the electron maximum energy from the balance between acceleration gains and energy losses. The additional factor Γ_jet takes into account the boost in energy due to the relativistic bulk motion. Therefore, electron maximum energies of ∼ 10 TeV as required for Mrk 501 (at least 5-8 TeV are required for the similar blazar Mrk 421 (11)) are formally allowed, but one certainly has to push the theory to its limits, and this raises a number of concerns: (i) the multi-TeV spectrum should show considerable curvature due to the (so-called Klein-Nishina) decrease of the scattering cross section when the energy of the scattered photon approaches E_max and due to the onset of electron-positron pair production (the observed multi-TeV spectrum is consistent with a smooth power law); (ii) the ratio between the γ ray and synchrotron (simultaneous) luminosities depends sensitively on the jet Lorentz factor Γ_jet and therefore requires fine-tuning (both nearest bright blazars, Mrk 421 and Mrk 501, show a similar γ-to-x-ray luminosity ratio); (iii) the magnetic field pressure turns out to be much lower than the relativistic electron pressure in the electron acceleration models, which seems inconsistent with the shock acceleration mechanism (the turbulent magnetic field is responsible for pushing the electrons back and forth across the shock); larger values of B are also expected from the adiabatic expansion of a magnetically collimated jet (the observed B-field at the tips of the jet in Cygnus A is consistent with adiabatic expansion (12)); and (iv) larger values of Γ_jet could ameliorate the problem that E_max ∼ 10 TeV; however, one would run into a serious problem with unification models of active galaxies if Γ_jet > 10 were the rule rather than the exception (13) (the number of required host galaxies would exceed the number of known radio galaxies).
A natural solution to the problem is to assume that the 10 TeV γ rays are due to pion production from accelerated protons. The balance equation between energy gains and losses for protons yields maximum energies of ∼ 10^6 TeV and short variability time scales t_var ≥ 10^5 (ξ/Γ_jet)(B/G)^{-1} s in Mrk 501 (14). The relativistic proton energy loss is dominated by photoproduction of pions in collisions with low-energy synchrotron photons (originating from accelerated electrons). Collisions of the accelerated protons with matter are negligible due to the low matter density in relativistic jets (unless a high-density target moves across the jet (15)). The γ rays from the decay of the neutral pion (far above the observed range of energies) are subject to pair creation in further collisions with the low-energy synchrotron photons (γ + γ → e⁺ + e⁻). This initiates an electromagnetic cascade shifting the average photon energy to the TeV range and below. A model based on the combined acceleration of protons (γ rays) and electrons (radio-to-x-rays) — coined the proton blazar model (16) — was fitted to published data of Mrk 501 in order to obtain a prediction of its multi-TeV spectrum (17). Data from 1995 and earlier were available for the analysis and covered the radio-to-x-ray wavelength range, including a flux limit above 100 MeV and an integral flux above 300 GeV (for details see references in (17)). The published flux values showed considerable variability in the optical-to-x-ray range and the model spectrum was therefore fitted to match the time-averaged spectrum (from the fit one obtains B = 37 G and Γ_jet = 10). The predicted multi-TeV spectrum is shown in Fig. 1 and compared with the data obtained from recent (1996, 1997) air-Cerenkov observations. The fairly robust spectral slope of the model spectrum fits the observations very well, whereas the absolute flux normalization is somewhat too low. Considering that the sub-TeV (350 GeV) flux has been reported to increase from 1995 to 1997 (9), the agreement is actually rather impressive if one scales up the model spectrum accordingly. Contemporaneous multi-wavelength observations of blazars such as Mrk 501 will be important to discriminate between electron-based and proton-based models for the γ ray emission from them. Generally, electron-based models require larger values of Γ_jet and lower values of B than proton-based models to obtain high-energy γ rays.
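For orientation, the variability-time bound t_var ≥ 10^5 (ξ/Γ_jet)(B/G)^{-1} s can be evaluated with the fitted values B = 37 G and Γ_jet = 10 quoted above; the sketch treats ξ as a free parameter, as in the text.

```python
# Hedged sketch: lower bound on the variability time scale in the proton-based model.

def t_var_seconds(xi, gamma_jet, b_gauss):
    return 1e5 * (xi / gamma_jet) / b_gauss

for xi in (1, 10):
    t = t_var_seconds(xi=xi, gamma_jet=10.0, b_gauss=37.0)
    print(f"xi = {xi:>2}: t_var >= {t:7.0f} s (~{t / 60:.0f} min)")
```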
There is a further hint that proton acceleration might be important. Unresolved blazars are the most probable source population to produce the observed extragalactic γ ray background between 10 MeV and 10 GeV (18).
The energy density of this γ ray background is ≃ 4 × 10^{-6} eV cm^{-3}. A similar value is found for an extragalactic flux of protons (with dN/dE ∝ E^{-2} differential spectrum between 10^9 eV and 10^20 eV) providing all of the observed cosmic rays with energies above 10^{18.5} eV (the so-called "ankle", above which the very high energy differential cosmic ray spectrum ∝ E^{-3} flattens) (19). On the assumption that the γ rays are from proton (p) acceleration, the comparable energy densities result from simple decay kinematics: photoproduction of neutral pions (π⁰) (p + γ → π⁰ + p) and their subsequent decay gives rise to γ rays (subject to electromagnetic cascading) which carry ∼ 1/5 of the proton energy. Charged pions (p + γ → π⁺ + n) are produced with approximately the same rate and give rise to neutrons (n) which carry ∼ 4/5 of the accelerated proton energy. Since the neutrons do not scatter off the magnetic field fluctuations responsible for the acceleration and storage of the charged particles in the blazar jet, they must escape the accelerator (energy losses are small and modify the neutron spectrum only at the very highest energies). Time dilation allows the most energetic neutrons to leave the host galaxy freely before β-decay occurs (for protons there would be adiabatic losses due to the magnetic field in the host galaxy). Hence the neutron (and, after β-decay, proton) luminosity is equal to the γ ray luminosity within factors of order unity. An extragalactic origin of the highest energy cosmic rays is indeed suggested by the absence of an enhancement of the cosmic ray flux toward the Galactic disk (20) and by the change in chemical composition from heavy (protons and ions) to light (protons) above 10^{18.5} eV (21). The ultimate challenge to the hypothesis is the measurement of the high-energy neutrino (ν) flux associated with the charged pion decay (π± → e± + 3ν).
The energy density in these multi-TeV neutrinos would be of the same order of magnitude as that in extragalactic cosmic rays and γ rays. Their measurement therefore constitutes an experimentum crucis within reach for the planned cubic kilometer underwater(ice) detectors (22).
Although γ rays are known from laboratory experiments for their penetrating power, propagation over intergalactic distances is not without hurdles.
A diffuse isotropic infrared background (DIRB) was produced when the first galaxies formed. Massive stars in early galaxies produced large amounts of dust in their winds, reprocessing the visual and ultraviolet light from the stars into infrared light. By colliding with these ample infrared photons, γ ray photons can disappear and turn into electron-positron pairs (23)(24)(25). The most numerous infrared photons above threshold for pair production with 10 TeV γ rays have wavelengths ∼ 25 µm. The mean free path (λ_γγ) for pair creation at multi-TeV energies is of the order of the distance (d) of Mrk 501. The exact value depends on the DIRB, which is difficult to measure directly owing to the presence of zodiacal light and galactic cirrus clouds.
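For orientation, the pair-production threshold on 25 µm photons and the attenuation factor used in the next paragraph can be estimated in a few lines; the head-on threshold condition E_γ ε ≥ (m_e c²)² and the survival probability exp(−τ_γγ) are standard relations, while the 50% fractional deviation chosen below is simply an illustrative value that reproduces the quoted τ_γγ < 0.7.

```python
# Hedged sketch: threshold for gamma-gamma pair production on infrared photons and the
# maximum optical depth compatible with a given fractional deviation from a power law.
import math

M_E_C2_EV = 0.511e6      # electron rest energy in eV
HC_EV_UM = 1.2398        # h*c in eV * micrometers

def ir_photon_energy_ev(wavelength_um):
    return HC_EV_UM / wavelength_um

def threshold_gamma_energy_tev(wavelength_um):
    # head-on threshold: E_gamma * eps >= (m_e c^2)^2
    return M_E_C2_EV ** 2 / ir_photon_energy_ev(wavelength_um) / 1e12

def max_optical_depth(max_fractional_deviation):
    # allowed attenuation 1 - exp(-tau) <= deviation  =>  tau <= -ln(1 - deviation)
    return -math.log(1.0 - max_fractional_deviation)

print(threshold_gamma_energy_tev(25.0))   # ~5 TeV: 25 um photons are above threshold for 10 TeV gammas
print(max_optical_depth(0.5))             # ~0.7, matching the bound quoted below
```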
One can use the observed power law spectrum (2) to put a limit on the maximum allowed pair attenuation assuming that the observed power law is the unattenuated spectrum emitted by the source (consistent with the proton-based model). In general, only very contrived intrinsic spectra would look like a smooth power law after the quasi-exponential attenuation. The maximum allowed deviation from the power law (1 − exp[−d/λ_γγ]) is taken to be the size of the statistical error bar at 10 TeV, yielding an optical depth τ_γγ = d/λ_γγ < 0.7. This limit can be relaxed by a factor not larger than ∼ 2, allowing for weakly absorbed spectra which still approximate a power law (see the dashed line in Fig. 1). There is some dependence of the attenuation on the shape of the DIRB spectrum. Useful models for the spectral shape can be found in (23)(24)(25) and yield a similar limit for the 25 µm DIRB normalization νI_ν(25 µm) < 2 to 4 nW m^{-2} sr^{-1}. The absence of γ ray attenuation in Mrk 501 is consistent with no contribution to the DIRB other than from the optically selected galaxies, for which one expects ∼ 10% of their optical emission to be reprocessed by warm dust, yielding νI_ν(25 µm) ∼ 1 nW m^{-2} sr^{-1} (26), but would also allow a DIRB stronger by a factor 2 to 4. A DIRB of at least ∼ 3 nW m^{-2} sr^{-1} is suggested by faint infrared galaxy counts and indicates contributions from dust-enshrouded galaxies at redshifts of z ∼ 3-4 (24). Electron-based models for the γ ray emission from Mrk 501 (9) predict deviations from a power law in the multi-TeV range even without external attenuation and therefore impose an upper limit on the DIRB below the lower limit from faint infrared galaxy counts. If both methods to estimate the DIRB (deviations from a power law spectrum in the multi-TeV range, faint infrared galaxy counts) use correct assumptions, a cutoff in the γ ray spectrum of Mrk 501 must be present in the energy range 10 to 30 TeV. | 2014-10-01T00:00:00.000Z | 1998-01-30T00:00:00.000 | {
"year": 1998,
"sha1": "59bedca4d1fc7c412bae0952db743f396e47b4d1",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "59bedca4d1fc7c412bae0952db743f396e47b4d1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
244494437 | pes2o/s2orc | v3-fos-license | A Prospective Randomised Case Control Study on the Role of Probiotics in Controlling Chronic Kidney Disease Progression
Aim: To study the role of probiotics in controlling chronic kidney disease progression. Sample: To correlate renal parameters such as creatinine, urea, uric acid, and PCR in patients with chronic kidney disease. Study Design: A prospective case control study. Place and Duration of the Study: Department of Nephrology, Santhiram Medical College and General Hospital, between December 2020 and May 2021. Methodology: We included 150 patients with chronic kidney disease from the in- and out-patient departments. In this study patients were divided into two groups, case and control. The control group was treated with normal conventional therapy, whereas the case group was treated with conventional therapy along with probiotics. The lab parameters creatinine, PCR, urea, and uric acid were analyzed before and after the therapy in both groups. Results: The lab parameters were analyzed by paired Student's t-test; in the control group creatinine changed from 4.42±2.84 to 3.54±2.73, and in the case/interventional group creatinine changed from 5.13±2.43 to 2.29±1.57, with p < 0.001. This shows significant improvement in these parameters in both the control and case groups. CKD stages were analyzed by Chi-square test; the p value for CKD stages in the case group was found to be <0.0001 and in the control group 0.03. Conclusion: Significant improvement is found in both the interventional (case) and non-interventional (control) groups, but greater improvement is observed in the case group than in the control group. Hence probiotics can be used as a natural bio-treatment to control the progression of CKD and improve the quality of life.
INTRODUCTION
CKD is becoming a major worldwide health issue. The present burden of disease might be due to changes in the underlying pathogenicity of CKD.
Chronic kidney disease has been subdivided into 5 stages.
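For reference, the staging is conventionally based on the estimated glomerular filtration rate (eGFR, in mL/min/1.73 m²); the sketch below encodes the widely used KDOQI-style thresholds, which are general background values rather than numbers taken from this study.

```python
# Hedged sketch: conventional five-stage CKD classification by eGFR (mL/min/1.73 m^2).

def ckd_stage(egfr):
    if egfr >= 90:
        return "Stage 1 (kidney damage with normal or increased GFR)"
    if egfr >= 60:
        return "Stage 2 (mildly decreased GFR)"
    if egfr >= 30:
        return "Stage 3 (moderately decreased GFR)"
    if egfr >= 15:
        return "Stage 4 (severely decreased GFR)"
    return "Stage 5 (kidney failure)"

for value in (95, 72, 44, 20, 9):
    print(value, "->", ckd_stage(value))
```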
Lifestyle modifications such as weight reduction, exercise, and dietary manipulations can be effective. Approaches to control hypertension by means of dietary salt restriction and diets rich in fruit and vegetables and low in saturated fat have been recommended [4]. Hence the application of probiotics to slow the progression of CKD is an emerging area of medicine and a new hope for CKD patients [5]. The term "probiotics" was introduced in 1965 by Lilly and Stillwell. Probiotics were defined as microbially derived factors that stimulate the growth of other organisms. In 1989, Roy Fuller emphasized the requirement of viability for probiotics and introduced the idea that they have a beneficial effect on the host. Species of Lactobacillus and Bifidobacterium are most commonly used as probiotics, but the yeast Saccharomyces cerevisiae and some E. coli and Bacillus species are also used as probiotics [6].
The dose needed for probiotics varies greatly depending on the strain and product. Although many over-the-counter products deliver in the range of 1 to 10 billion CFU per dose, some products have been shown to be efficacious at lower levels, while others require substantially higher levels [7]. The mechanism of action of probiotics includes several steps, such as inhibition of adhesion, immunomodulation, production of antimicrobial substances, modification of toxins and toxin receptors, competition for nutrients, reduction in bacterial translocation and anti-inflammatory signalling with the epithelium [8]. Probiotics have been proven to enhance both the innate and adaptive arms of the host immune system [9]. Probiotic strains have the ability to promote the differentiation of B cells and increase the production of secretory IgA. Polymeric IgA sticks to the mucus layer overlying the gut epithelium and binds to pathogenic microorganisms, thereby reducing their ability to gain access to endothelial cells. Other probiotic strains stimulate the innate immune system by signalling to dendritic cells, which then travel to mesenteric lymph nodes where they induce regulatory T cells and the production of anti-inflammatory cytokines (interleukin-10 and transforming growth factor β) [6]. The first aim of administering probiotics during CKD is URS removal [10]. Therefore, as the production of URS, mainly generated by protein degradation, cannot be completely blocked by a low protein diet, reducing the conversion of amino acids into trimethylamine N-oxide, p-cresyl sulfate or indoxyl sulphate by modulating the intestinal microbiota could be considered an additional beneficial intervention [11]. Because of the potential beneficial effects of probiotics (reducing inflammation and uremic toxins), it is possible that renal function improves during treatment. However, studies performed in CKD have used markers such as serum urea, uric acid, PCR and creatinine. Probiotic supplementation is generally preferred over food sources because of the high potassium, phosphorus, sodium and sugar content of foods containing probiotics. The hypothesis being tested is that a specially formulated probiotic dietary supplement product comprised of defined and tested microbial strains may afford renoprotection and possibly alleviate the symptoms of uremic syndrome. Health promotion is the leading theme of the probiotic concept, which implies that there should be no adverse effects linked to the intake of probiotics [12].
METHODOLOGY
This was a prospective randomized case control study involving two groups, case and control. The control group was treated with normal conventional therapy, whereas the case group was treated with conventional therapy along with probiotics. The study was conducted in the nephrology department of a tertiary care hospital for a period of six months and included patients of all ages. Data were collected from case sheets, reports of patients who attended the nephrology department with CKD symptoms, interviews with reporting persons or clinicians and with patients or patient guardians, and past medication history, generally obtained from past prescriptions. The study was conducted at Santhiram Medical College and General Hospital, Nandyal, and was initiated after approval by the Institutional Human Ethics Committee at Santhiram Medical College and General Hospital, Nandyal. During the six-month study period the total sample size was 150 patients; patients attending the nephrology inpatient and outpatient departments were included in the study, while patients who had undergone kidney transplantation and pregnant and lactating women were excluded. The necessary information was collected by interviewing the patients using the patient data profile form.
The results were analyzed and tabulated statistically using SPSS (Statistical Package for the Social Sciences). The Chi-square test was used to compare CKD stages; the p value of the case group was <0.001 and the p value of the control group was 0.03, both taken as statistically significant. The paired Student's t-test was used to analyze the laboratory parameters. The p values of the laboratory parameters creatinine, urea, uric acid and PCR were found to be <0.001 in the case group and <0.001 in the control group; both are statistically significant, but the improvement in the probiotic group was greater.
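To make the analysis pipeline concrete, the following is a minimal sketch (not the authors' SPSS workflow) of how the paired comparison of a laboratory parameter and the Chi-square comparison of CKD-stage distributions could be reproduced in Python; all numbers, array names and group sizes below are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder data: serum creatinine (mg/dL) before and after therapy for a
# handful of hypothetical patients in the probiotic (case) group.
creatinine_before = np.array([5.1, 4.8, 6.2, 3.9, 5.5])
creatinine_after = np.array([2.3, 2.1, 3.0, 1.8, 2.6])

# Paired Student's t-test on before/after values of one lab parameter.
t_stat, p_paired = stats.ttest_rel(creatinine_before, creatinine_after)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")

# Chi-square test on the distribution of CKD stages (1-5): each row is a time
# point (before/after therapy), each column a count of patients in that stage.
stage_counts = np.array([
    [2, 5, 12, 20, 11],   # before therapy (hypothetical counts)
    [6, 10, 15, 13, 6],   # after therapy (hypothetical counts)
])
chi2, p_chi2, dof, _ = stats.chi2_contingency(stage_counts)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.4f}")
```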
RESULTS AND DISCUSSION
The prospective study was conducted over a period of six months, from December 2020 to May 2021, in all outpatient and inpatient departments of a tertiary care teaching hospital, Nandyal. Among the CKD patients, 72% were males and the remaining 28% were females, which suggests that males are more likely to suffer from kidney failure sooner than women.
More than 60% of the CKD patients were in the 40-50 and 50-60 year age groups. Hence, people older than 40 years are more prone to CKD.
Among the 150 patients suffering from CKD, about 68% were both hypertensive and diabetic. Hence, hypertension combined with diabetes appears to predispose to CKD more than hypertension or diabetes alone.
Among the 150 CKD patients, 41.3% had haemoglobin (Hb) levels of 6-8 g/dL and 37.3% had Hb levels of 8-10 g/dL. During CKD, the potential use of therapies modulating the gut microbiota, such as probiotics, has emerged as an attractive strategy to reduce URS and improve CVD. Probiotics, when administered in adequate amounts, confer a health benefit on the host. The modification of the intestinal microbiota in CKD strongly increases the transformation of amino acids into URS, e.g., indoxyl sulfate (IS), p-cresyl sulfate (PCS) and trimethylamine N-oxide (TMAO), among others. Increased intestinal concentration of uremic toxins may lead to microbial dysbiosis and pathobiont overgrowth. Pathobionts trigger a proinflammatory response of the intestinal immune system by preferentially activating Th17 cells and increasing the production of lipopolysaccharides (LPSs), a major component of the outer membrane of Gram-negative bacteria. Thus, dysbiosis also contributes to an increase in intestinal permeability by disrupting the colonic epithelial tight junctions, which may subsequently lead to translocation of LPS and bacteria into the host's internal environment. It is therefore possible that the modification of the intestinal microbiota in CKD might be involved in insulin resistance and dyslipidemia through increased LPS production, modified carbohydrate fermentation or altered bile acid level and composition. Gut-derived uremic toxins, inflammation and insulin resistance thus all contribute to the progression of CKD.
CONCLUSION
CKD is a chronic health issue with a high economic burden on patients and health care and is associated with decreased quality of life. Hence probiotics are used to improve the condition.
In this study, the patients were divided into two groups: a case (interventional) group and a control (non-interventional) group. The control group was treated with normal conventional therapy, whereas the case group was treated with conventional therapy along with probiotics.
With the use of probiotics, renal parameters such as creatinine, urea, uric acid and PCR improved.
In the case group, approximately 5-10% of patients had normal levels of these parameters before treatment with probiotics, and approximately 50-60% reached normal levels after treatment. In the control group, approximately 5% of patients had normal levels before treatment and approximately 10-15% reached normal levels after treatment.
The laboratory parameters were analyzed by the paired Student's t-test, and the p-values of these parameters in both groups were found to be <0.001, showing significant improvement in both the case and the control group. CKD stages were analyzed by the Chi-square test; the p-value for CKD stages was <0.0001 in the case group and 0.03 in the control group. This shows significant improvement in both groups, with greater improvement observed in the case group than in the control group.
The application of probiotics in kidney health is an emerging area of medicine; probiotics are cost-effective and easily available, imposing a low cost burden on patients. Hence, probiotics can be used as a natural bio-treatment, along with dietary probiotic foods, to control the progression of CKD and improve quality of life.
CONSENT
As per international standard or university standard, patients' written consent has been collected and preserved by the author(s).
ETHICAL APPROVAL
As per international standard or university standard written ethical approval has been collected and preserved by the author(s).
COMPETING INTERESTS
Authors have declared that no competing interests exist. | 2021-11-24T16:34:27.728Z | 2021-11-11T00:00:00.000 | {
"year": 2021,
"sha1": "9bf262486c63547d0fae58bd9e20824bf6add427",
"oa_license": "CCBY",
"oa_url": "https://www.journaljpri.com/index.php/JPRI/article/download/33329/62762",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4f820b44926572f4b15de3d9ceff35e6c32ec6b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
114716944 | pes2o/s2orc | v3-fos-license | Review of the Book A Rift in the Everyday: A Dialogue that Lasted for 300 Cups of Coffee and Three Cartons of Cigarettes by A. Sergeev and B. Sokolov
A. Sergeev, B. Sokolov. Razryv povsednevnosti: dialog dlinoyu v trista chashek kofe i tri bloka sigaret [a Rift in the everyday: a dialogue that lasted for 300 Cups of Coffee and Three Cartons of Cigarettes]. 2015. Sankt-Peterburg: izdatel’stvo Aleteya.
A. Sergeev and B. Sokolov's book A Rift in the Everyday: a Dialogue that Lasted for 300 Cups of Coffee and Three Cartons of Cigarettes is in many ways an unusual phenomenon. In the reviewer's opinion, its unusual character is best revealed in two of its principal aspects.
Firstly, the very form of co-authorship is worth mentioning. There can be no question that co-authoring a major text is, as such, no longer something rare in the field of the humanities and social sciences. In fact, this is one of the reasons why the philosophical text can no longer be regarded as a result of a particular thinker's individual enterprise. Typically, however, the co-authors of a philosophical work aim at producing a unified text, a kind of "monolith" the writing of which is to a certain extent divided between its authors. More often than not, it is simply a matter of dividing the text between them on a trivial thematic basis, while maintaining a certain unity of its conceptual plan. A less common form of co-authorship can be described as a kind of intellectual struggle. A good case in point is P. Feyerabend's famous book Against Method which, in his own words, was initially meant to be the first part of a polemic "duology", the second part of which was to have been written by I. Lakatos. In this respect, the readers of A. Sergeev and B. Sokolov's book will find themselves face to face with an even more complex "product" resulting from a synthesis of three "modules": the initial text by one author, the other author's intellectual reaction to it, and, to some extent, the answers to the comments made. The mutual reactions become apparent in the commentaries, in the acts of intellectual struggle and, finally, in the way one of the authors takes up and clarifies the thoughts of the other.
The result is a highly peculiar "archeomodern" product. On the one hand, the book exemplifies a thoroughly modern phenomenon, that of a hypertext, which has taken the form of a philosophical monograph; on the other hand, it can be seen as a philosophical conversation written down as a text. This dialogical character is something that refers us not to the modern, or post-modern, period, but rather to the tradition of ancient thought. But, unlike most of Plato's dialogues, where one can identify something like a "chorus" (a weak position) and a "soloist" (a strong, or leading, position), this conversation is characterized by the equality of the symbolic and mental statuses involved. As a result, the form of this book is capable of capturing the reader's interest and, at the same time, requires intensive intellectual work (especially when reading the first part).
Secondly, the specific character of the book is connected with the peculiarities of the authors' theoretical position. The monograph is written from an existential-anthropological perspective, and largely focuses on the problems of finding and acquiring "selfhood", "oneself" and what is "one's own". The very practice of philosophizing is seen by the authors as a way to this acquisition which is, at the same time, salvation. One might object to this by saying that the idea of what may be broadly described as "therapeutic" philosophy is not a new one in modern philosophical discourse (it would be enough to mention E. Fromm, M. Foucault, P. Hadot, K. Jaspers and some others). A strict, though superficial, critic might also claim that, methodologically, the nature of this salvation is determined by M. Heidegger's idea (quite familiar to 20th-century philosophers) that it is possible for philosophy to appropriate the everyday. In fact, it is something that the book's intriguing title itself "alludes" to, suggesting that, for the authors, philosophical writing and dialogue are exactly a way out of the field of what is inauthentic and not our "own". However, the specific possibilities of "trans-coding" and segmenting the initial texts render the problems raised significantly more profound. This exit - break out - breakthrough - into the field of "one's own", too, comes into the focus of the analysis (in terms of the book's content, its principal part is devoted to the theme of "one's own"). In this context, it becomes apparent that matters are not all that simple when it comes to the "existentiales" which are normally seen as "lifts" one may take on the way to oneself, such as the horizon of risk, the profundity of language, mood, reflection, the symbol of death, mental states, ecstaticity and eccentricity. On the other hand, it turns out that everything is not clear with the opposite mode of existence, either, with that of "falling" (das Verfallen) into, and staying in, the everyday (in its contemporary version). Another fundamental theoretical peculiarity of the book is that its authors are entirely free of monism which is typical of so much philosophical writing. This latter tendency may be illustrated by the critical remarks made, and the sharp divisions drawn by such revered figures of 20th-century philosophy as M. Heidegger and M. Mamardashvili who found them a legitimate way of reasserting their methodological positions. On the contrary, in the existential-anthropological analytic of the philosophical hypertext found in A Rift …, none of the three "major forms of Being" (as the authors describe them) is given a special place, a "royal seat". These three forms are consciousness, language and life. All the three phenomena are considered as equally important foundations of being human. At the same time, the authors pay equal attention to a fourth component - culture.
Another important point is worth mentioning which cannot be missed by anyone who has even the smallest experience of writing their own texts. The authors must have had to bring a certain courage to their enterprise in that they made their texts (or parts of them) available to each other for "dissecting", "breaking" and "trans-coding". Most writers would be familiar with a feeling which is the exact opposite of this -a feeling that their work is their inalienable property that must be jealously guarded (it would be enough to mention in this context Cyrano de Bergerac's notorious refusal to change even a single comma in his poem). It is to be hoped that this book will enable its readers to experience a rift in their everyday, too. The experience of thinking contained in its text, with its modes of conversation and symbolic trans-coding, may generate both criticism and commentary, and, most importantly, co-thinking. There is every reason to expect that this book will make a contribution to the overall hypertext of the philosophical tradition, or, in other words, that it will serve as a remark in the global philosophical conversation. | 2019-04-15T13:11:12.304Z | 2017-03-27T00:00:00.000 | {
"year": 2017,
"sha1": "43a37e2cf265691ca8326c2fa656e3346ea43ba0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3846/cpc.2017.259",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f1caff8c71726b0cc9ce69a37b9c3d68fab37e84",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Engineering"
]
} |
215406631 | pes2o/s2orc | v3-fos-license | Trend analysis of tuberculosis case notifications with scale-up of antiretroviral therapy and roll-out of isoniazid preventive therapy in Zimbabwe, 2000-2018.
OBJECTIVES
Antiretroviral therapy (ART) and isoniazid preventive therapy (IPT) are known to have a tuberculosis (TB) protective effect at the individual level among people living with HIV (PLHIV). In Zimbabwe where TB is driven by HIV infection, we have assessed whether there is a population-level association between IPT and ART scale-up and annual TB case notification rates (CNRs) from 2000 to 2018.
DESIGN
Ecological study using aggregate national data.
SETTING
Annual aggregate national data on TB case notification rates (stratified by TB category and type of disease), numbers (and proportions) of PLHIV in ART care and of these, numbers (and proportions) ever commenced on IPT.
RESULTS
ART coverage in the public sector increased from <1% (8400 PLHIV) in 2004 to ~88% (>1.1 million PLHIV) by December 2018, while IPT coverage among PLHIV in ART care increased from <1% (98 PLHIV) in 2012 to ~33% (373 917 PLHIV) by December 2018. These HIV-related interventions were associated with significant declines in TB CNRs: between the highest CNR prior to national roll-out of ART (in 2004) and the lowest recorded CNR after national IPT roll-out from 2012, these were (1) for all TB cases (510 to 173 cases/100 000 population; 66% decline, p<0.001); (2) for those with new TB (501 to 159 cases/100 000 population; 68% decline, p<0.001) and (3) for those with new clinically diagnosed PTB (284 to 63 cases/100 000 population; 77.8% decline, p<0.001).
CONCLUSIONS
This study shows the population-level impact of the continued scale-up of ART among PLHIV and the national roll-out of IPT among those in ART care in reducing TB, particularly clinically diagnosed TB which is largely associated with HIV. There are further opportunities for continued mitigation of TB with increasing coverage of ART and in particular IPT which still has a low coverage.
Strengths and limitations of this study
► Use of national aggregate data gives a comprehensive picture of the country.
► The ecological design of the study may result in 'ecological fallacy'.
► Despite use of aggregate data, findings are similar to studies using individual data.
► Use of tuberculosis (TB) case notifications does not account for undiagnosed TB cases.
► All factors contributing to declining TB case notifications are not accounted for.
Background
Tuberculosis (TB), one of the oldest bacterial infections known to mankind, and HIV infection together form a deadly combination. While the lifetime risk of developing active TB disease among people with latent TB infection (LTBI) is 5%-15%, 1 those with HIV have a 5%-15% annual risk of acquiring active TB disease. 2 The TB epidemic in sub-Saharan Africa is largely driven by the HIV epidemic which disproportionately affects this region. About 70% of the 36.9 million people living with HIV (PLHIV) globally live in sub-Saharan Africa, of whom 76% are found in the eastern and southern regions. This is despite the fact that sub-Saharan Africa contributes only 14% of the world's population. 3 Of the estimated 10 million people who developed TB disease in 2017, 9% were PLHIV of whom 72% lived in the Africa region. 4 Among those with TB and HIV coinfection, the majority (84%) were from the Africa region. 4 Despite this large prevalence, there has been a 44% decline in the number of HIV/TB associated deaths since 2000. This is largely attributed to an increased uptake of antiretroviral therapy (ART) among those with HIV-associated TB. Coverage of ART in Africa has more than doubled from 35% in 2005 to 84% in 2017. 4 5 ART is associated with a 67% reduced risk of developing active TB disease among PLHIV 6 and an even further reduction in recurrent TB. 7 ART has also been shown to reduce the population incidence of TB by between 27% and 80%. 8 9 However, despite ART having a TB protective effect at the individual level, which has further been shown to positively correlate with higher updated CD4 cell counts, 10 11 the incidence of TB still remains higher in the HIV-positive population compared with the HIV-negative population. 11 To further mitigate TB in this high-risk group, isoniazid preventive therapy (IPT) has also been found to be effective in reducing the incidence of TB and death from TB among PLHIV, with further reductions in TB risk of up to 90% among those receiving both IPT and ART. [12][13][14] This has led the WHO to recommend initiation of IPT among PLHIV enrolled in ART care in whom TB has been excluded using a clinical TB screening algorithm. 15 Zimbabwe is one of the sub-Saharan African countries with a largely HIV-driven TB epidemic and is among the 14 countries globally with a triple burden of TB, TB/HIV and multidrug resistant TB. 4 A previous publication from Zimbabwe evaluating a 14-year period from 2000 to 2013 showed that with increasing ART coverage since its inception in the public sector in 2004, annual TB case notification rates more than halved for all forms of TB. Furthermore, the largest declines were among those with recurrent TB (53%), new smear-negative pulmonary TB (58%) and extrapulmonary TB (58%). 16 Zimbabwe also adopted WHO guidance on provision of IPT among PLHIV in 2011. There was an initial pilot phase in 10 health facilities where the delivery of IPT was found to be feasible, and this was followed by nationwide roll-out to most of the public health facilities in the country. 17 18 The country has also adopted the 'HIV treat all' approach from mid-2016, whereby those who test HIV positive are immediately eligible for ART initiation regardless of their CD4 cell count or WHO clinical staging.
We hypothesise that the nationwide scale-up of IPT, the continued increase in ART coverage and the earlier initiation of ART have all had a population-level impact on reducing TB incidence in Zimbabwe, translating into further declines in TB case notification rates. Currently, there are no reports or published papers comparing the national scale-up of IPT and ART with national TB case notification rates. We therefore describe in this paper the association between IPT and ART scale-up and annual TB case notification rates in Zimbabwe from 2000 to 2018.
Study design
This was an ecological study design using aggregate programme data to analyse for trends and associations.
Setting
Zimbabwe is a developing country in Southern Africa with a population of 13 million 19 according to the latest census data. The gross national income per capita is US$860 20 and 72.3% of the population live below the national poverty line. 20 The country has a high HIV prevalence rate of 15%, and the total number of adults and children living with HIV was estimated at 1.3 million in 2018 according to the Joint United Nations Programme on HIV/AIDS (UNAIDS). 21
The National ART Programme in Zimbabwe
Antiretroviral drugs were first offered in public health facilities in 2004 under the National ART Programme at five central level hospitals. As of December 2018, ART was available at 1604 (93%) out of 1722 health facilities nationwide. 22 Since the inception of the ART programme in 2004, the criteria for initiating ART among adults have shifted from a CD4 cell count threshold of <200 cells/mL until 2010, to <350 cells/mL from 2011 to 2013 and to <500 cells/mL from 2014 in accordance with WHO guidelines. [23][24][25] Those with advanced HIV disease have always been eligible for ART regardless of CD4 cell count.
Towards the end of 2013, there was a shift towards universal eligibility for lifelong ART in Zimbabwe. This started under option B+ when all HIV-infected pregnant and breastfeeding women were initiated on ART irrespective of their CD4 counts or WHO staging. Over this same period, ART regimens became simpler, less toxic and with a reduced pill burden given the transition to one pill per day fixed-dose combination regimens. From mid-2016 onwards, the country started implementing the 'HIV treat all' approach whereby all patients who were diagnosed HIV positive are immediately eligible for ART initiation regardless of CD4 cell count or WHO clinical staging.
Intensive TB case finding, IPT initiation and enrolment in ART care
All confirmed PLHIV in Zimbabwe are enrolled at ART clinics to receive HIV treatment and care services. According to Zimbabwe's national IPT guidelines, all PLHIV should be screened for active TB at every clinic visit or every encounter with a health worker using the WHO recommended four-symptom TB screening checklist. 26 If PLHIV present with any of the four symptoms (cough of any duration, night sweats, weight loss and fever), they are considered presumptive TB cases and have their sputum specimens collected for investigation by smear microscopy or Xpert MTB/Rif assay (Cepheid, Sunnyvale, California, USA). Those who are negative on smear microscopy or Xpert MTB/Rif assay or both and who have no other symptoms or signs of extrapulmonary or smear-negative PTB, including on chest radiography if necessary, are not diagnosed with active TB and are potentially eligible for IPT.
IPT eligibility has changed over time. At the start of the programme in 2011, all patients enrolled in HIV care were eligible for IPT regardless of ART status, provided active TB was excluded. Since the national ART programme moved towards an 'HIV treat all' approach, all PLHIV in pre-ART care were recalled to health facilities for ART initiation and were prioritised for ART care first and then initiated on IPT. All those eligible for IPT are started on a 6-month daily oral dose of isoniazid (5 mg/kg body weight for adults or 10 mg/kg body weight for children) plus a daily low dose of pyridoxine (25 mg/day). To prevent unnecessary visits to health facilities, resupplies of IPT and pyridoxine are synchronised with ARV resupplies until completion of TB preventive treatment. During clinic visits, patients are assessed through self-reporting and pill counts for adherence, and they are also screened for TB. Patients found to have developed active TB disease are discontinued from IPT and are subsequently treated for TB according to national TB guidelines. 26 By the end of June 2019, there were 1280 (74%) out of 1722 public health facilities offering IPT in addition to ART across the country's 10 provinces (National ART Programme data).
The National TB Programme in Zimbabwe
Zimbabwe, like other countries with a high prevalence of TB, has a well-established, WHO recommended directly observed treatment short course (DOTS) 27 national TB programme (NTP), whereby TB treatment services are integrated with general health services at all health facilities countrywide. This DOTS model consists of a centralised and prioritised system of TB monitoring, recording and training in TB case management, and there is also a standardised recording and reporting system that allows assessment of treatment results. National TB treatment outcomes are recorded and reported in line with WHO guidelines and are classified as treatment success (cured plus treatment completed), loss to follow-up (LTFU), died, transferred out or not evaluated, and failed treatment.
Patients with TB are classified as either new or previously treated TB cases, whereby new cases are those who have never received anti-TB treatment or have previously received anti-TB drugs for <30 days, while previously treated TB cases are those patients who have previously received anti-TB drugs for >1 month. Prior to 2013, new TB cases were divided into smear-positive pulmonary (PTB), smear-negative PTB or extrapulmonary TB (EPTB) cases, while retreatment patients with TB were categorised as follows: (1) relapse cases, (2) treatment after failure, (3) treatment after LTFU or (4) 'retreatment other' based on smear microscopy. However from 2013 onwards, both new and recurrent TB cases have been classified as new/recurrent bacteriologically confirmed, clinically diagnosed and EPTB cases based on Xpert MTB/Rif results, the preferred first-line TB diagnostic test. The NTP also ensures these drug-susceptible patients with TB are prescribed a standardised treatment regimen of 6 months for new TB cases or 8 months for recurrent TB cases: doses are observed by a healthcare worker or community health worker for at least the first 2 months of therapy.
Study population
The study population included: all PLHIV reported nationally in ART care at the end of each year; all PLHIV in ART care who were initiated on IPT annually and all patients annually diagnosed and notified with TB in Zimbabwe between 2000 and 2018.
Patient and public involvement
Patients or the public were not involved in the design, or conduct, or reporting, or dissemination of our research study.
Data variables and sources of data
National aggregate data on annual estimated numbers of PLHIV and annual numbers receiving ART were obtained from UNAIDS and WHO. 21 28 Data on PLHIV estimates are generated annually using the Estimation and Projection Package and Spectrum software 29 from UNAIDS based on primary collected data from census reports, antenatal clinic surveillance, population-based surveys and programme data. National population figures were obtained from the 2002 and 2012 national census reports with intercensus projections for the periods in between these reports and after 2012. 19 30 31 Data on annual numbers started on IPT were obtained from the Zimbabwe Demographic Health Information System version 2 (DHIS2). Finally, aggregate data on numbers notified each year with TB, further stratified by type and category of TB, were obtained from the WHO website. 32
Analysis and statistics
Data were analysed by plotting bar and line graphs of the various indicators over time from 2000 to 2018. ART coverage was calculated by dividing the annual total number of PLHIV receiving ART by the annual estimated number of PLHIV, instead of relying on ART coverages reported in annual UNAIDS or WHO reports, since these were dependent on changing ART eligibility criteria over the years. IPT coverage rates were calculated as cumulative annual numbers of PLHIV in ART care ever started on IPT divided by annual numbers of PLHIV receiving ART.
Annual TB case notification rates were calculated by dividing annual TB case notifications by annual national population figures to obtain case notifications per 100 000 population. Changes in TB notification rates between 2000 and 2018 were determined by comparing the highest TB case notification rates prior to national ART scale-up and the lowest annual TB case notification rates after roll-out of both ART (in 2004) and IPT (in 2012) using tests for proportions in Stata V.15.0 (StataCorp, College Station, Texas, USA). Levels of significance were set at 5%.
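As an illustration of these calculations, the following is a minimal Python sketch (not the Stata code used in the study) of how a case notification rate and the comparison of two rates with a test for proportions could be computed; the case counts, population figures and function name are illustrative assumptions, not the actual national data.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def notification_rate(cases: int, population: int) -> float:
    """TB case notification rate per 100 000 population."""
    return cases / population * 100_000

# Illustrative figures only: notified TB cases and population in the peak year
# before ART roll-out and in the lowest year after IPT roll-out.
cases_2004, pop_2004 = 59_000, 11_600_000
cases_2018, pop_2018 = 25_000, 14_400_000

cnr_2004 = notification_rate(cases_2004, pop_2004)
cnr_2018 = notification_rate(cases_2018, pop_2018)
decline_pct = (cnr_2004 - cnr_2018) / cnr_2004 * 100

# Two-sample test for proportions (analogous to a test for proportions in Stata).
z_stat, p_value = proportions_ztest(
    count=np.array([cases_2004, cases_2018]),
    nobs=np.array([pop_2004, pop_2018]),
)

print(f"CNR 2004: {cnr_2004:.0f} per 100 000, CNR 2018: {cnr_2018:.0f} per 100 000")
print(f"decline: {decline_pct:.1f}%, z = {z_stat:.1f}, p = {p_value:.3g}")
```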
Results
Since the inception of ART in the public sector in 2004, there was a substantial increase in the number and proportion of PLHIV initiated on ART, from 8400 (<1% ART coverage) in 2004 to over 1 million (~88% ART coverage) by December 2018. In figure 5, there was an observed decline in TB case notifications over the years with increasing ART and IPT coverage since their national roll-out in 2004 and 2012, respectively. In table 1, there were overall significant declines in all TB case notification rates between the highest recorded case notification rate prior to ART and IPT national roll-out and the lowest recorded case notification rates after national IPT roll-out.
Discussion
This study is a follow-on of a previous study showing the association between declines in TB case notifications from 2000 to 2013 and scale-up of ART in the public sector since its inception in 2004. 16 In this current study, we assessed TB case notification rates for a further 5-year period up to 2018 after ART coverage had nearly doubled from nearly half of all estimated PLHIV to nearly 9 in 10 PLHIV and after the National AIDS Programme had attained an IPT coverage of approximately one-third of patients receiving life-saving ART. Our findings showed that these HIV-related interventions (ART and IPT) were associated with significant declines in TB case notification rates between the highest and lowest reported case notification rates prior to and after the inception of ART and IPT in the public sector, respectively. These declines were observed for all TB cases combined and among those with new TB, in particular those with new clinically diagnosed PTB. In comparison to the previous study, 16 there was a further percentage decline of TB case notification rates between the lowest recorded TB case notification rates reported then and those currently presented of 34%, 33%, 27%, 52% and 59% for all forms of TB, new TB, new clinically diagnosed PTB, new EPTB and previously-treated TB cases, respectively.
These findings reaffirm the population impact of the pooled TB-protective effect of ART and IPT in a setting where TB is mainly HIV driven. This may also be coupled with implementation of intensified TB case finding among PLHIV to identify those eligible for IPT. Simultaneously, those needing TB treatment can be identified and treated, thus resulting in improved infection control in ART care settings where newly diagnosed TB cases are most prevalent. 33 34 The further national gains in significantly increasing ART coverage towards the UNAIDS 90% ART coverage target by adopting an 'HIV treat all' approach in recent years have resulted in 64% of all estimated PLHIV in Zimbabwe being virally suppressed according to the last 2015-2016 Zimbabwe Population HIV Impact Assessment population-based survey. 35 This has had a significant impact on the near threefold reduction in the estimated TB incidence in the population from 584 to 221 cases/100 000 population between 2000 and 2017. 4 36 In line with our findings, a modelling exercise involving countries from sub-Saharan Africa also showed that the most notable decline in incidence of HIV-associated TB is seen when those diagnosed HIV positive are immediately initiated on ART compared with delaying ART until 5 years after HIV seroconversion. 37 Interestingly, the decline in TB case notifications, which was higher in those with clinically diagnosed PTB, has been sustained even though TB diagnosis has shifted from smear microscopy to Xpert MTB/Rif. The Xpert MTB/Rif assay is more sensitive in diagnosing TB in patients whose sputum smears are negative for acid-fast bacilli (which is more common among PLHIV), 38 and hence one might have anticipated an increase in numbers of TB cases detected rather than the observed decrease.
A recent modelling study showed that treatment of latent TB infection can result in a 14-fold decline in global TB incidence between 2013 and 2050 in comparison to no treatment of latent TB infection. 39 Therefore, despite the observed and encouraging decline in TB case notifications, there are still opportunities for further declines given that IPT coverage is only approximately one-third of PLHIV on ART. Uptake of IPT has slowed down because of resistance of some health workers who fear the development of isoniazid resistance coupled with fears among patients given a few anecdotal reports of serious side effects among some patients taking this prophylactic treatment. Although IPT completion rates in Zimbabwe are higher than reported elsewhere, completion rates are still suboptimal probably due to the increased pill burden of IPT over a 6-9 month duration. 17 18 Improved completion rates may lead to a further decline in TB case notification rates.
Zimbabwe's planned national roll-out of a shorter regimen of TB preventive therapy (namely, a 12-week course of weekly rifapentine plus isoniazid-3HP) for children and adult PLHIV, following the feasibility phase roll-out in 10 pilot sites beginning in early 2020, can potentially lead to increased TB preventive therapy coverage. This may lead to further declines in TB incidence among PLHIV in ART care. In comparison to the standard 6-9 months isoniazid monotherapy course, the 3HP prophylactic treatment course has been shown to have higher completion rates and a lower risk of adverse events, particularly hepatotoxicity. 40 The major strength of this study is that national aggregate data that is routinely tracked as part of HIV/TB global response indicators were used, hence giving a comprehensive picture of what is prevailing in the country over time. The major limitation comes from the ecological design of the study which results in 'the ecological fallacy', whereby other factors that are attributed to the observed decline in TB case notifications over time aside from ART and IPT are not accounted for. 41 However, our findings are largely reliable as they are coherent with findings from wellconducted studies at the individual level using robust study methodologies. [12][13][14] There are other limitations. TB case notifications may underestimate the true TB cases in Zimbabwe over time especially among PLHIV. Despite the use of GeneXpert, the test has been shown to have low sensitivity and specificity in settings were HIV prevalence is >30%. 38 There may also have been missed TB cases through suboptimal TB screening among patients attending health facilities and leakages along the care cascade to sputum collection, submission for laboratory diagnosis, return of results and commencement of TB treatment. [42][43][44] Nevertheless, based on TB treatment coverages for Zimbabwe that were reported by WHO, these were relatively constant around 70%, 32 hence the observed annual TB case notification rates are probably standardised for undiagnosed TB cases.
In conclusion, this study shows continued declining trends of TB case notification rates that support the continued scale-up of ART and TB preventive therapy among PLHIV as key strategies for mitigating the dual burden of HIV/AIDS and TB in Zimbabwe and other regions with HIV-driven TB epidemics.
Author affiliations
1 | 2020-04-08T19:11:15.045Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "bed15a60dd63b37a79c71882825132ed486d4618",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/4/e034721.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb574ff91159d4fd33e58487923c53096b44fbb3",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233783074 | pes2o/s2orc | v3-fos-license | Research on Power Quality Evaluation Method for High Energy-consuming Enterprises
With the increase of voltage levels in the power system, the demand for electric energy from high energy-consuming enterprises continues to grow, which places higher requirements on power quality. As power quality problems become more prominent, how to manage power quality scientifically and reasonably has attracted increasing attention from power quality practitioners. At the same time, users' power quality requirements have risen alongside the continuous development of the electricity market. To achieve comprehensive power quality management, a scientific, accurate and standardized assessment of power quality is needed. It is therefore necessary to understand power quality in depth, to attach great importance to the harm and impact of power quality degradation on the operation of the power supply system, and to pursue comprehensive power quality management as a matter of urgency.
Introduction
With the deepening of the informatization and intelligence of the power grid, power quality monitoring has evolved from single-point monitoring in the past to system monitoring now, and a complete power quality monitoring system framework has been gradually formed. Due to the rapid development of monitoring systems, the scale of data is increasing day by day, and exploring fast and intelligent power quality data analysis and processing methods has become an important issue that needs to be solved urgently. Correctly assessing the economic cost of power quality and proposing an economical and reasonable governance plan can not only improve the production management of users, power companies, equipment manufacturers and other participants, increase economic benefits, but also have important significance for energy conservation and alleviation of energy crisis. By strengthening power quality management, the power loss and economic loss on the demand side can be reduced, and the goal of power demand side management can be fully realized.
In recent years, artificial intelligence represented by machine learning, such as deep learning and reinforcement learning, has continued to develop. The application of new methods to traditional power quality disturbance recognition continues to expand, and with the growth of power quality monitoring data, more exploration based on artificial intelligence algorithms has been carried out by mining power quality disturbance monitoring data, with good results. Literature [1] combines power quality research issues with the development of artificial intelligence algorithms, and sorts out some ideas about artificial intelligence algorithms in solving power quality problems. Literature [2] pointed out that power quality problems will bring huge economic losses to users, leading to reduced management efficiency and energy waste, and proposed a basic framework for power quality economic analysis. Literature [3][4][5][6] gives the relevant standards for various evaluation indexes of power quality, and establishes a power quality evaluation system suitable for nonlinear loads. Literature [7] uses the power information integrated management system to provide 24-hour monitoring and control of five power quality indicators across the power grid, and realizes online real-time monitoring and management of harmonic loads. Literature [8] proposed a method of using electricity consumption information collection data to analyze power quality; according to the severity of power quality at different times, a power quality management plan was formulated to refine power quality management work.
Power quality indicators
Any kind of product has its quality index, and electric energy also has quality indices. Only when the electric energy delivered to the user meets the quality indices can it deliver its full benefit. However, because people approach it from different perspectives, there is still no single accurate definition of the technical meaning of power quality. From a practical point of view, this article analyzes power quality in detail and breaks it down into the following categories (a short numerical sketch of several of these indices follows the list below):
(1) Voltage deviation
In the power system, the voltage deviation is defined as the difference between the actual voltage and the rated voltage, expressed as a percentage of the rated voltage. In the actual operation of the power system, an out-of-limit voltage deviation has many negative effects on industrial and agricultural production and people's lives. If the voltage is not effectively controlled, it may even cause the system voltage to collapse, causing a large-scale blackout and huge economic losses. The calculation formula is as follows:
ΔU = (U_re − U_N) / U_N × 100%
In the formula, U_re represents the actual voltage value and U_N represents the rated voltage value.
(2) Voltage sag
Generally, the ratio of the root mean square (RMS) value of the voltage during the sag to the RMS value of the nominal voltage is the magnitude of the voltage sag. Voltage sag is currently the most serious and most closely watched power quality indicator.
(3) Three-phase unbalance rate
The three-phase unbalance rate can be expressed as the ratio of the maximum deviation of the three-phase voltages from their average value to the average value of the three-phase voltages. It can also be defined via the symmetrical component method, as the percentage ratio of the RMS value of the negative sequence voltage component to the RMS value of the positive sequence voltage component.
(4) Voltage fluctuation
Voltage fluctuation is manifested as a rapid change in voltage value or a continuously varying voltage deviation, with a variation period longer than the power frequency period. The ratio of the difference between the maximum and minimum of the varying RMS voltage to the rated voltage is the mathematical description of voltage fluctuation:
d = (U_max − U_min) / U_N × 100%
In the formula, U_max represents the maximum of the RMS value of the fluctuating voltage, U_min represents the minimum of the RMS value of the fluctuating voltage, and U_N represents the rated voltage value.
(6) Voltage harmonics
Harmonics are sine components whose frequency is an integer multiple of the fundamental frequency, and sine components whose frequency is not an integral multiple of the fundamental frequency are called fractional harmonics or interharmonics.
(7) Frequency deviation
Frequency is the reciprocal of the period. The difference between the actual frequency value and the power frequency value of the power system is the frequency deviation, which is expressed mathematically as follows:
Δf = f_re − f_N
In the formula, f_re represents the actual frequency value and f_N represents the power frequency value.
(8) Power supply reliability
Power supply reliability reflects the power system's ability to provide, without interruption, electric power and energy that meet the power quality standards and the amount required by power users. It reflects the degree to which the electricity supply satisfies the needs of the national economy and its importance for economic development.
(9) Evaluation of power supply serviceability
Electricity supply is a service, and improving service quality is an important measure to optimize the industrial structure and improve people's satisfaction. The service index is a quantitative evaluation obtained from power users' assessments of the power supply service.
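As a small illustration of how several of the indices defined above can be computed from measured data, the following is a minimal Python sketch; the sample voltages, frequency values and variable names are our own illustrative assumptions, not values from the paper.

```python
import numpy as np

U_N = 230.0   # rated (nominal) voltage, V
f_N = 50.0    # power (nominal) frequency, Hz

# Hypothetical measured quantities
u_actual = 236.5                                        # actual RMS voltage, V
f_actual = 49.93                                        # actual frequency, Hz
u_rms_series = np.array([231.2, 228.4, 233.9, 226.7])   # RMS voltage over time, V
u_phases = np.array([229.0, 233.5, 226.8])              # three phase voltages, V

# (1) Voltage deviation: (actual - rated) / rated, in percent
voltage_deviation = (u_actual - U_N) / U_N * 100

# (3) Three-phase unbalance rate: max deviation from the mean over the mean, in percent
mean_phase = u_phases.mean()
unbalance = np.abs(u_phases - mean_phase).max() / mean_phase * 100

# (4) Voltage fluctuation: (max - min) of the varying RMS voltage over rated, in percent
fluctuation = (u_rms_series.max() - u_rms_series.min()) / U_N * 100

# (7) Frequency deviation: actual frequency minus nominal frequency, Hz
freq_deviation = f_actual - f_N

print(f"deviation {voltage_deviation:.2f}%, unbalance {unbalance:.2f}%, "
      f"fluctuation {fluctuation:.2f}%, frequency deviation {freq_deviation:+.2f} Hz")
```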
Determine the weight of quality evaluation indicators
This paper proposes a method, based on minimising the error over samples generated from the power quality evaluation standard, to determine the weight of each power quality indicator; it is called the minimum standard error weighting method. The specific steps are as follows: 1) Based on the power quality evaluation standard indicators, use the unifrnd function to randomly generate n uniformly distributed power quality data points within each indicator interval of each power quality level, to form a sufficiently complete and representative power quality sample sequence set.
2) To account for the influence of the boundary values of each power quality index, each boundary value is sampled m times; the corresponding power quality level is taken as the arithmetic average of the two adjacent evaluation levels.
3) Construct the following objective function:
min Σ_i (y_i′ − y_i)²   s.t.   Σ_j w_j = 1,   0 ≤ w_j ≤ 1
In the formula, w_j represents the weight of the corresponding power quality index, y_i represents the corresponding power quality evaluation level of sample i, and y_i′ is the evaluation value of sample i obtained with the candidate weights by the chosen evaluation method; m (the number of repetitions of each boundary value) is taken as 20, and n is taken as 100, meaning 100 uniformly distributed power quality data points.
The above objective function is a quadratic programming model, which can be solved with a quadratic programming routine (e.g., the quadprog function in MATLAB) to determine the weight value of each power quality indicator.
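The following is a minimal Python sketch of this weighting step (not the paper's MATLAB implementation); the generated index values, the linear form of the evaluation value and the "true" weights used to build the placeholder data are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder sample set: each row holds normalised scores of four power quality
# indices for one generated sample; y holds the corresponding evaluation level.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 4))                   # n samples x 4 indices
y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0.0, 0.01, size=100)

def objective(w):
    # Squared error between the weighted evaluation value and the standard level.
    return np.sum((X @ w - y) ** 2)

n_idx = X.shape[1]
w0 = np.full(n_idx, 1.0 / n_idx)                            # start from equal weights
constraints = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}   # weights sum to 1
bounds = [(0.0, 1.0)] * n_idx                                # each weight in [0, 1]

res = minimize(objective, w0, method="SLSQP", bounds=bounds, constraints=constraints)
print("estimated index weights:", np.round(res.x, 3))
```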
Power quality evaluation system
The evaluation of power quality is essentially a comprehensive evaluation of the operating level of the power system and its power supply capacity. It is a standard for managing, and a reminder to, both power companies and power users to protect the power quality environment of the power system; it is also the reference basis for implementing power quality governance and improvement, and the means to verify the effectiveness of such governance and improvement. Therefore, formulating a feasible, scientific, comprehensive and reasonable power quality evaluation system is the key to power quality research.
As the scale of the power system grows larger, users have put forward higher requirements for power quality. When evaluating power quality, it is necessary not only to proceed from a system perspective and to evaluate power quality against power quality standards, but also to consider the power quality requirements of power users and their electrical equipment. From a development perspective, the initiative of both users and power companies should be harnessed to improve overall economic and social benefits, rather than emphasizing the partial benefits of one side.
Different electrical equipment in the power system has different load characteristics. They have different requirements for power supply reliability, and also have different impacts and losses on power quality. This article mainly focuses on the discussion of the power quality evaluation of pollution source users, especially the power quality evaluation for high energy consumption users.
The prerequisite for correct power quality prediction and evaluation for high-energy users is to establish a reasonable, high-quality, and implementable power quality evaluation system. As the State Grid Corporation of China pays more and more attention to the power quality of users, it has provided scientific and detailed regulations for the power quality evaluation of high energy-consuming users. The evaluation method divides the user-based power quality prediction evaluation into a three-level evaluation principle. Considering the different impacts of different users on power quality, different evaluation methods should be adopted for users with different characteristics. According to the requirements of the assessed object on the power quality indicators and the degree of influence, the predictive assessment can be divided into three levels, and the division principle is shown in the figure:
Power quality evaluation process
First of all, power users need to provide the type, model and electrical parameters of the equipment they use, so that the working principle, working characteristics and various parameters of the equipment can be understood and the power quality evaluation indices, the evaluation level and the corresponding power quality evaluation method can be determined. The simple flowchart is shown in the figure. Relevant power laws in China stipulate that, during the use of electrical energy, power users are prohibited from endangering the safety of power supply and use and from interfering with the order of power supply and use. If a power user violates the regulations, the power supply company has the authority to curb the violation. Clear provisions are also given in other rules: if a user's non-linear equipment, impact equipment, fluctuating equipment or three-phase unbalanced equipment affects or degrades power quality, or causes disturbance or damage to the safe operation of the power system, then the user must take reasonable and effective measures to eliminate the problem. If a treatment plan is not adopted as required, or the measures taken do not meet the national standards, the power supply company has the right to stop power supply to the user. Non-linear equipment of a power user that has already been put into operation needs to be tested at the power supply point of the user's equipment. Based on the test results, the various power quality indicators are integrated and compared with the corresponding national standard values. For indicators that exceed national standards, effective mitigation measures must be taken. For users who are applying for electricity, a predictive evaluation should be carried out in advance; only when the evaluation results meet the standards set by the state can they be connected to the grid.
Conclusion
This article first discusses the meaning of power quality in depth to develop a thorough understanding of the concept. On this basis, power quality indicators such as voltage deviation, voltage sag, three-phase unbalance, voltage fluctuation, voltage flicker, voltage harmonics, frequency deviation, power supply reliability and power supply service evaluation are presented, together with the specific meaning of each.
"year": 2021,
"sha1": "5648e84ec09f3c0c07817b0bb594e2cf02abf696",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1820/1/012005",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d46ca859ef011730e810f6cd5dccd63c88508761",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
236367560 | pes2o/s2orc | v3-fos-license | Socioeconomic Effects of Ambitious Climate Mitigation Policies in Germany
The EU Commission has introduced the instrument of National Energy and Climate Plans (NECP) to better achieve energy and climate policy targets. In Germany, a comprehensive study was commissioned for this purpose. Its methods and main results are presented here. It starts with a set of energy system models that map the necessary changes in the energy system, together with corresponding measures, bottom-up. The results then enter the PANTA RHEI macroeconometric top-down model as scenario inputs to determine the socioeconomic effects. According to the bottom-up models, achieving the target of 55% GHG reduction by 2030 will not be easy. The macroeconomic effects are mostly positive. Driven by additional investment, GDP and the number of jobs will be higher than in the baseline. The construction and service sectors can benefit from energy and climate policy measures. The share of final consumer expenditures on energy in GDP declines by 2030 compared to today. However, the direction and magnitude of the effects are not undisputed in the literature. The results show that ambitious climate policies are possible in Germany, which can also improve the achievement of economic and social goals.
Introduction
The EU Commission has introduced the instrument of National Energy and Climate Plans (NECP) to improve the planning and monitoring of the development of the energy transition within and between the EU member states. This is based on the EU Regulation on the Governance of the Energy Union and Climate Action, which focuses on the five dimensions of the Energy Union: (I) energy security, (II) internal energy market, (III) energy efficiency, (IV) decarbonisation, and (V) research, innovation, and competitiveness [1]. The aim of the NECP is to implement the Energy Union strategy in all dimensions. For this, all member states have to report their national efforts to achieve the common EU target in the NECPs. For the first time, NECPs had to be submitted for the period from 2021 to 2030, followed by biannual progress reports [2].
For Germany, the NECP was passed by the Federal Cabinet on 10 June 2020 [2,3]. It builds on the targets and measures of the Energy Concept [4], the Energy Efficiency Strategy 2050 [5], and the Climate Action Program 2030 [6] adopted in 2019. The latter complies with Germany's commitments to climate protection at the international level (Paris Climate Agreement), as well as at the EU level, thus implementing the Climate Action Plan 2050 [7] at the national level.
The German government has commissioned a study to develop target scenarios for the NECP process [8]. Detailed bottom-up models were used, as well as socioeconomic and environmental impact assessments. The following summarizes the emission pathways and their sectoral allocation in bottom-up models and details the results of the socioeconomic impact assessment (IA). With the scenarios, the current national GHG reduction targets (55% against 1990 for 2030, 80-95% in 2050) are (almost) achieved. The IA examines the effects of the scenarios on macroeconomic and sectoral variables, employment, and final consumer expenditures on energy.
Materials and Methods
To assess ex-ante the impacts of such energy development and climate change mitigation plans on economy and society, scenario analysis can be applied. For this purpose, various scenarios have been developed, each of them representing a possible future. A comparison of scenarios with a baseline development then allows an assessment of the effect of the modified assumptions, e.g., regarding different energy consumption or prices.
Coupling of Sectoral Energy System Models and Macroeconomic Models
For the development of scenarios and the evaluation of macroeconomic effects, there are several types of models. First, technology-oriented bottom-up models must be distinguished from economic top-down models. From a technical point of view, the bottom-up models depict options for the development of energy consumption in individual sectors and processes. Top-down models describe the macroeconomic effects resulting from the implementation of corresponding technology pathways.
Top-down macroeconomic models can be grouped into two main categories [26]: optimisation models, which focus on the supply side and for which computable general equilibrium (CGE) models are well-known representatives, and simulation models, which emphasise the demand side, and for which macroeconometric models (also referred to as macroeconometric input-output models) are examples, such as PANTA RHEI used here. The data requirements are similar for all top-down model types. They are based on the national accounts of the official statistics, which annually record the activities of the government, enterprises, private households, and the rest of the world, and their linkages at the national level. In addition, the interdependencies of different economic sectors are reported in input-output tables. These are a necessary component of all macroeconomic models that capture effects of measures and instruments that have an impact beyond the directly affected sector or impact channel. Furthermore, energy data are needed to address the energy transition.
According to [26,27], the two types of models can be differentiated from each other as follows: CGE models are based on neoclassical theory, whereby households and firms are represented by an agent with rational expectations, maximizing the utility or profit. Prices adjust so that supply and demand are in equilibrium and resources are fully used, such that markets are generally cleared. Higher demand (e.g., for the energy transition) leads to higher prices and an (optimal) reallocation of resources. Macroeconometric models follow a post-Keynesian theory and, although they focus more on the demand side, both sides of the market play an important role, in contrast to simple input-output approaches. Behavioural parameters are not determined by optimisation, but by econometric estimation of time series data, so empirical evidence is highly important. Markets are usually not cleared. Imbalances between supply and demand are compensated by quantity effects rather than price effects.
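To make this distinction concrete, the following is a minimal, purely illustrative sketch (not the PANTA RHEI implementation) of the macroeconometric approach described above: a behavioural parameter, here a price elasticity of energy demand, is estimated by ordinary least squares from historical time series and then reused unchanged when scenario inputs are applied. All data, variable names and numbers are invented.

```python
import numpy as np

# Invented historical time series (2000-2019): energy demand, a real price
# index and a real income index; purely illustrative, not study data.
years = np.arange(2000, 2020)
price = 100 * 1.02 ** (years - 2000)
income = 100 * 1.015 ** (years - 2000)
rng = np.random.default_rng(0)
demand = 5000 * (price / 100) ** -0.3 * (income / 100) ** 0.5 \
         * np.exp(rng.normal(0.0, 0.01, years.size))

# Econometric step: estimate the log-log demand equation
#   ln(demand) = a + b_p * ln(price) + b_y * ln(income)
# by ordinary least squares on the historical data.
X = np.column_stack([np.ones(years.size), np.log(price), np.log(income)])
(a, b_p, b_y), *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)
print(f"estimated price elasticity: {b_p:.2f}, income elasticity: {b_y:.2f}")

# Simulation step: the estimated elasticity is kept fixed and applied to a
# scenario price that is higher than in the baseline (e.g. due to a CO2 price).
price_baseline, price_scenario = 130.0, 150.0
deviation = (price_scenario / price_baseline) ** b_p - 1.0
print(f"scenario demand deviates from baseline by {100 * deviation:.1f}%")
```

In this stylised setting the reaction to price changes estimated from the past is carried forward unchanged into the projection, which is exactly the behavioural assumption stated above for macroeconometric models.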
Sectoral energy system models are used to calculate the development of energy consumption and energy supply in the scenarios. These are bottom-up models that can depict technologies and their application in detail. The energy system models for the four demand sectors industry, commerce, wholesale and retail and services ("GHD", tertiary sector), transport, and residential are simulation models. Electricity generation is calculated with a European electricity market model. The model system is completed by modules for district heating, refineries, and the generation of electricity-based fuels. The models cover the entire energy system and account for interactions between the sectors [8].
The so-called quantity components of the models (e.g., reference area, transport capacity or vehicle fleet, industrial production quantities, and labour force) are influenced by exogenous variables, such as economic structure and growth, population, standard of living, spatial and transport organisation, etc.
The specific energy consumption values are based on technical information about processes, electrical devices, heating systems, vehicle fleets, etc. The change in specific consumption over time reflects technical developments; this is influenced by policy instruments such as regulations, target agreements or subsidy programmes, energy prices, as well as values and social priorities. Modelling by cohorts allows the inclusion of the age structure of the plants, cars, or appliances in the respective consumption sectors. GHG emissions are calculated by linking energy consumption by energy source to energy-source-specific emission factors.
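The final step described above, linking energy consumption by energy source to source-specific emission factors, is a simple weighted sum. A minimal sketch follows; the emission factors are typical default values quoted for illustration only, not the factors used in the study, and the consumption figures are invented.

```python
# Illustrative emission factors in t CO2 per TJ (roughly the common IPCC
# default values; the factors actually used in the study are not quoted here).
emission_factor_t_per_tj = {
    "hard_coal": 94.6,
    "natural_gas": 56.1,
    "heating_oil": 74.1,
    "renewables": 0.0,
}

# Invented final energy consumption of one sector in TJ.
consumption_tj = {
    "hard_coal": 50_000,
    "natural_gas": 120_000,
    "heating_oil": 30_000,
    "renewables": 80_000,
}

co2_t = sum(consumption_tj[src] * emission_factor_t_per_tj[src] for src in consumption_tj)
print(f"energy-related CO2 emissions: {co2_t / 1e6:.2f} Mt")  # -> 13.69 Mt
```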
To calculate the effects on macroeconomic variables, such as GDP, employment, production, prices, and other variables in the scenarios, the results of the energy system model (bottom-up) are entered as input into a macroeconomic model (top-down), thus soft linking the two types of models. In contrast to the bottom-up models, in which the sectors are considered separately, the focus of a top-down model is on the interlinkage of all economic sectors and their feedback effects on the overall economic development (see Figure S1 in the supplement for an overview of the procedure).
The geographical level of the model has been chosen depending on the research question. International models account for international feedback effects that cannot be covered in national models or can only be captured by simple assumptions. National models, on the other hand, depict domestic developments in a detailed and timely manner, while international datasets usually have a longer time lag and a lower level of detail, and the complexity of the model inter-relationships is higher.
As the focus is on measures on the national level, the macroeconometric model PANTA RHEI for Germany is used for the analysis presented here [28]. It is the environmentally extended version of the simulation and forecasting model INFORGE [29]. In addition to comprehensive economic modelling, energy, and emissions, as well as transport and housing are covered in detail. All model sections are consistently linked with each other. The entire model is solved simultaneously, i.e., the mutual impact of model variables is considered simultaneously.
The model contains a large number of macroeconomic variables from national accounts and input-output tables and provides sectoral information according to 63 economic branches. The energy balances [30] are fully integrated into the model.
In contrast to CGE models, which assume that firms and households optimise their behaviour, the behavioural parameters are estimated econometrically using time series data, mainly from 2000 onwards. This basically assumes that behavioural patterns or reactions to price or quantity changes in the past will also prevail in the future. Adjustments can be implemented through exogenous specifications.
In addition to the harmonised framework data, the (changes in) input data from the energy system model characterising the different scenarios are used as inputs in PANTA RHEI. In particular, information from the bottom-up models is taken for energy consumption, investment differences, electricity, and CO 2 prices, as well as Power-to-X (PtX) imports (see Section 2.3). The comparison of the scenario results provides an assessment of the macroeconomic effects; the differences in the model variables can be interpreted as the effect of the scenario-specific measures. Feedbacks from the macroeconomic model into the energy system model, i.e., of the changed economic data on energy consumption, are not applied.
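Because scenario effects are read off as deviations of model variables from the baseline run, the final comparison step amounts to differencing two model outputs. A small sketch of that comparison follows; the GDP levels below are invented, and only the relative deviations mirror the 2030 figures reported later in the Results section.

```python
# Invented price-adjusted GDP levels (billion EUR) for one projection year.
gdp = {"baseline": 3500.0, "scenario 1": 3559.5, "scenario 2": 3549.0, "scenario 3": 3549.0}

for scen in ("scenario 1", "scenario 2", "scenario 3"):
    abs_dev = gdp[scen] - gdp["baseline"]
    rel_dev = 100.0 * abs_dev / gdp["baseline"]
    print(f"{scen}: {abs_dev:+.1f} bn EUR ({rel_dev:+.1f}% vs. baseline)")
```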
Scenarios
To analyse different pathways to achieve the emission reduction targets, the following scenarios are developed (see [8]):
• A baseline scenario, which is based on policies implemented by 2017 and extrapolates historical trends;
• Two target scenarios (scenario 1 and 2), which examine different sets of policy measures and different strategies for long-term development leading to GHG reductions between 85 and 90% in 2050 relative to 1990;
• A further scenario (scenario 3), which represents the measures adopted in the Climate Action Program 2030. As in scenarios 1 and 2, further measures are introduced in scenario 3 after 2030 that lead to a GHG reduction of about 85% in 2050 relative to 1990.
In scenario 1, the central macroeconomic measure is a carbon tax (see Table 1) levied in proportion to the CO 2 content of fossil fuels, covering those energy sources that are not included in the European emissions trading system (EU ETS). It is effective in all consumption sectors; in the transport sector, for example, rising fuel prices due to the carbon tax will lead to traffic reduction, a shift to other modes of transport, and a switch to vehicles with lower specific CO 2 emissions. In the building sector, the carbon tax is mainly paid by owner-occupiers and tenants. The latter can only react to the measure by adjusting their heating behaviour but cannot reduce their consumption of fossil fuels by investing in a new heating system. Scenario 2 assumes the introduction of a separate national emissions trading system for the heating and transport sector. In contrast to the carbon tax in scenario 1, it is applied at the beginning of the value chain, i.e., to primary energy sources (upstream ETS). The maximum emission level is fixed and reduced annually, so that the resulting CO 2 price increases from year to year. The higher prices for fossil fuels create incentives for mitigation measures. The introduction of the emissions trading system is supplemented by flanking measures in the final consumption sectors to overcome barriers to the implementation of the energy transition and to avoid distributional inequities that arise, for example, due to a lack of information or capital availability. Thus, in scenario 2, a reduction in the electricity tax and the EEG (Renewable Energy Sources Act (German: Erneuerbare-Energien-Gesetz)) surcharge is introduced as an additional overall measure that is financed from public budget funds. The reduction of certain administrative electricity price components will also benefit sector coupling.
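As a rough illustration of how a carbon tax that is proportional to the CO2 content of a fuel feeds into end-user prices, the following back-of-the-envelope calculation multiplies a fuel's combustion emission factor by the carbon price. The per-litre emission factors are approximate literature values used only for illustration; they are not taken from the study, and other price components such as VAT are ignored.

```python
# Approximate combustion emission factors in kg CO2 per litre (illustrative only).
kg_co2_per_litre = {"petrol": 2.37, "diesel": 2.65, "light_heating_oil": 2.65}

def carbon_tax_markup_eur_per_litre(fuel: str, carbon_price_eur_per_t: float) -> float:
    """Carbon-cost component of the fuel price implied by the carbon price."""
    return kg_co2_per_litre[fuel] * carbon_price_eur_per_t / 1000.0

# Roughly the 2020 and 2050 carbon price levels assumed in scenario 1.
for co2_price in (30, 250):
    for fuel in ("petrol", "diesel"):
        markup = carbon_tax_markup_eur_per_litre(fuel, co2_price)
        print(f"CO2 price {co2_price} EUR/t -> {fuel}: +{markup:.2f} EUR per litre")
```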
Scenario 3 implements a CO 2 pricing for the non-EU ETS sectors under a national emissions trading system adopted in the Climate Action Program [6]. As in scenario 2, this is an upstream ETS. In the beginning, emissions are priced at a fixed rate, which offers private households and industries the advantage of being able to adjust their behaviour according to a reliable price path in the first years. However, the risk of exceeding the maximum emission level is thereby accepted. From 2025/26, the price will be determined by emissions allowance trading, for which the German government is setting up a platform. In addition to the emissions trading system, scenario 3 assumes a reduction in the EEG surcharge, which is partly financed from emissions trading revenues. Additional measures at the sectoral level (industry, transport, buildings, transformation/energy industry), such as regulation, funding programs, or reduced loans, assumed in scenarios 1 to 3, are described in [8].
Assumptions about important framework data, such as population development and international energy prices (see Table S1 in the supplement), are not varied between the scenarios. As calculations have been finalized in spring 2020, effects of the COVID-19 pandemic are not included. Variables such as population or import prices have a major influence on the development of energy consumption and emissions. The framework data were specified early in the project and could not be adjusted in the further process, as all calculations are based on them. Thus, the population development deviates slightly from the more recent national projections. The population projection depends on various factors, such as fertility or net immigration. Depending on the assumptions for these factors, there are wide ranges of projections, within which the development assumed here still lies. To assess the effect of energy and climate policies separately by comparing the scenarios, identical framework data are therefore used as a basis. Only the energy prices for end consumers are changed, as they depend on the assumed measures, especially carbon pricing and reduction of the EEG surcharge. For the scenarios, except the baseline scenario, it is assumed that ambitious GHG reduction targets are also pursued and implemented in other countries, especially in Europe, so that similar competitive conditions exist globally and the risk of carbon leakage is low.
Central Differences of the Scenarios with Regard to Socioeconomic Effects
Regarding the socioeconomic impact assessment, the most important differences in the input data between the scenarios result from differences in investments, in electricity prices, in prices for CO 2 emissions in the non-EU ETS sector, and in imports of power-to-X (PtX) energy sources. The following figures are in real terms (2016 prices).
The investments (see Figure S2 in the supplement) in the scenarios that are exogenously set as inputs in PANTA RHEI do not represent the total investments, but only those that are additionally made or omitted compared to the baseline scenario. These are the investments in efficiency measures and the expansion of the energy system that are necessary to achieve the climate targets. Negative investments, e.g., in the conventional power sector, that are also considered, mean that less is invested than in the baseline scenario. Over the projection period from 2020 to 2050, the net additional investments in scenario 2 are highest, averaging EUR 49.8 billion per year, followed by scenario 3 (EUR 45.3 billion per year on average) and scenario 1 (annual average: EUR 38.4 billion). The lower investment activity in scenario 1 is compensated by significantly higher imports of PtX fuels (see below), while scenarios 2 and 3 focus more on the development of domestic PtX structures.
Electricity prices (see Figure S3 in the supplement) are differentiated according to the final consumer groups: private households, commerce, wholesale and retail, and services ("GHD"), industry, and energy-intensive industry. They are the result of a European electricity market model.
Wholesale prices are the main driver of the electricity price for energy-intensive industry. Rising fuel and EU-ETS prices have an increasing effect on the wholesale price, while the expansion of renewable energies leads to a reduction. However, the increasing effect is stronger than the decreasing one, so that the electricity price increases over the projection period and ranges between 7.25 cents/kWh (scenario 2) and 8.9 cents/kWh (scenario 1) in 2050.
In the case of private households, the tertiary sector, and non-energy-intensive industry, the government components of the electricity price are decisive for the price development, in addition to the wholesale price; in scenario 2, the electricity tax and EEG surcharge are reduced, and in scenario 3, only the EEG surcharge is reduced, resulting in lower electricity prices in these two scenarios than in scenario 1 and the baseline scenario. In 2050, prices are, thus, lowest in scenario 2; private households would pay 28.6 cents/kWh, the tertiary sector 20.07 cents/kWh, and non-energy-intensive industry 15.7 cents/kWh. The highest prices result-slightly above the baseline development-in scenario 1, with 33 cents/kWh (private households), 23.58 cents/kWh (tertiary sector), and 17.89 cents/kWh (non-energy-intensive industry).
CO 2 prices (see Figure S4 in the supplement) for sectors not covered by the EU ETS are also varied in the scenarios, while they do not exist in the baseline scenario. In scenario 1, a uniform CO 2 price in the form of a carbon tax is assumed, starting at EUR 30/t in 2020 and increasing to EUR 250/t by 2050. In the scope of sectoral emissions trading systems, the price in scenario 2 is differentiated by the heating and transport sectors, which result from certain CO 2 quantity restrictions in these two sectors. Compared to scenario 1, the CO 2 prices in scenario 2 start significantly higher in 2020 (EUR 115/t CO 2 in the heating sector and EUR 150/t CO 2 in the transport sector). In 2050, the CO 2 price in both sectors reaches EUR 220/t. The CO 2 price in scenario 3 is introduced in 2021. In 2030, it is EUR 140/t, which is below the developments in the other two scenarios. By 2037, it rises to EUR 220/t and then remains constant, so that the CO 2 prices in scenarios 2 and 3 converge by 2050.
Power-to-X (PtX) imports (see Figure S5 in the supplement) are required in the scenarios to cover the high domestic demand for these energy sources. A supply exclusively by domestic production is not assumed in any of the scenarios because of limited capacities of renewable energies in Germany and lower generation costs in other countries with more favourable renewable energy conditions. It is assumed that only hydrogen (H 2 ) production is possible domestically, so that power-to-gas (PtG) and power-to-liquid (PtL) products (fuel oil, kerosene, diesel, gasoline) can only be sourced via imports.
In the baseline scenario, there are no PtX imports. In scenario 1, the demand for H 2 is relatively low, so that it can be completely covered by domestic production capacities. For the other PtX products, the import volume increases significantly, especially from 2035 onwards, so that imports in 2050 are EUR 76.6 billion. In scenario 2, PtX imports increase until 2040 and then remain almost constant until 2050, amounting to EUR 20.5 billion in 2050. This includes hydrogen imports, as the demand for hydrogen in scenario 2 is significantly higher than in scenario 1 and, thus, around a quarter of the demand in 2050 will have to be imported. Scenario 3 assumes a moderate development of PtX imports until 2030, exclusively in the form of hydrogen for use in mineral oil refineries. They then increase and amount to EUR 43.6 billion in 2050.
Results
Differences between the scenario results are reported for GHG emissions and energy consumption, macroeconomic and sectoral economic variables, employment, and final consumer expenditures on energy, so that all three pillars of sustainability are considered.
Energy Consumption and GHG Emissions
In the baseline scenario, primary energy supply (PES, see Figure S6 in the supplement) decreases by an average of 1.2% p.a. to 11,418 petajoules (PJ) between 2015 and 2030 (see Figure 1). At the same time, final energy consumption decreases by a total of 8% to 8370 PJ. Drivers for the decrease are the increasing efficiency of appliances, equipment, and vehicles, as well as demographic trends. By 2050, primary energy supply is reduced to around 9000 PJ (−30% compared to 2008).
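The quoted average decline allows a quick cross-check: compounding 1.2% per year backwards from the 2030 level gives the implied 2015 starting point, and the 8% total reduction in final energy consumption can be inverted in the same way. This is only approximate, since the 1.2% figure is itself an average over the period.

```python
# Implied 2015 primary energy supply from the quoted end point and average decline.
pes_2030_pj = 11_418
avg_annual_decline = 0.012            # 1.2% p.a., as quoted
n_years = 2030 - 2015
pes_2015_implied = pes_2030_pj / (1.0 - avg_annual_decline) ** n_years
print(f"implied 2015 primary energy supply: {pes_2015_implied:,.0f} PJ")    # ~13,700 PJ

# Implied 2015 final energy consumption from the quoted 8% total decrease to 8370 PJ.
fec_2015_implied = 8_370 / (1.0 - 0.08)
print(f"implied 2015 final energy consumption: {fec_2015_implied:,.0f} PJ")  # ~9,100 PJ
```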
In scenarios 1 and 2, PES falls to below 10,200 PJ by 2030. Through the measures of the climate action program in scenario 3, PES is reduced to 10,372 PJ by 2030, and the final energy consumption is reduced by an additional 606 PJ compared to the baseline scenario. The additional reduction is distributed almost equally among the sectors of transport, industry, and buildings. Important causes for the additional reduction are the increased diffusion of electric mobility and heat pumps, as well as more efficient cross-sectional technologies in the industry and tertiary sectors. By 2050, PES is reduced to below 7000 PJ in scenarios 1, 2, and 3. The long-term target of a 50% reduction by 2050 (compared to 2008) is met in all scenarios (except in the baseline).
The goal of the current German Climate Action Plan is to reduce GHG emissions by at least 55% by 2030 compared to 1990 [7]. In the baseline scenario, the reduction in the period 2015 to 2030 is small. The emissions target of a maximum of 562 Mt CO 2 eq in 2030 set out in the Climate Action Plan is exceeded by 169 Mt CO 2 eq. There are large gaps in the targets for the energy and transport sectors. By 2050, GHG emissions are reduced to 475 Mt CO 2 eq in the baseline (−62% compared to 1990, see Figure S7 in the supplement). In scenarios 1 and 2, the sector targets of the Climate Action Plan for 2030 are achieved (however, the somewhat more ambitious sector targets of the Climate Change Act (German: Klimaschutzgesetz) from 2019 will be missed). By 2050, GHG emissions are reduced to 186 Mt CO 2 eq (−85% compared to 1990) in scenario 1, and to 179 Mt CO 2 eq (−86% compared to 1990) in scenario 2. The specified reduction of at least 85% compared to 1990 is, therefore, achieved in both scenarios. In scenario 2, a more even distribution of GHG reductions among the sectors is implemented. Emission-intensive industrial processes are converted, including the iron and steel, chemical, and cement industries. In iron and steel, conventional coke blast furnaces are increasingly replaced by direct reduction with hydrogen. This increases the demand for renewable hydrogen, leading to an increased renewable electricity generation compared to scenario 1 (for electricity generation in the scenarios see Figure S8 in the supplement). The industrial sector's share of the emissions remaining in 2050 decreases to 28% in scenario 2 (scenario 1: around 50%). In addition, due to the increased reduction of process emissions, fewer electricity-based fuels must be imported in scenario 2 than in scenario 1.
In scenario 3, GHG emissions are reduced to 598 Mt CO 2 eq by 2030 (see Figure S9 in the supplement). Compared to the base year 1990, this corresponds to a reduction of 52.2%. Thus, the overall reduction target of 55% compared to 1990 is not yet fully achieved. By 2050, GHG emissions in scenario 3 are reduced to 167 Mt CO 2 eq, which corresponds to a reduction of 87% compared to 1990. As in scenario 2, GHG-intensive industrial sectors are profoundly restructured towards low-carbon processes in scenario 3, to distribute residual emissions evenly among the sectors. For a significantly further reduction of GHG emissions, CCS or negative emission technologies would have to be used.
In summary, energy consumption and, thus, CO 2 emissions decrease in all scenarios. In the baseline, however, the reduction is not sufficient, so that the emissions exceed the reduction targets by far. In scenarios 1, 2, and 3, the minimum target of an 85% reduction by 2050 can be achieved, but the interim target of 55% by 2030 is slightly missed in scenario 3.
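Since every reduction percentage above refers to 1990, the quoted pairs of absolute level and reduction can be cross-checked for internal consistency by backing out the implied 1990 emission level from each pair. The sketch below uses only figures quoted in the text.

```python
# (label, remaining emissions in Mt CO2eq, quoted reduction relative to 1990)
quoted = [
    ("baseline 2050",   475, 0.62),
    ("scenario 1 2050", 186, 0.85),
    ("scenario 2 2050", 179, 0.86),
    ("scenario 3 2030", 598, 0.522),
    ("scenario 3 2050", 167, 0.87),
]

for label, remaining_mt, reduction in quoted:
    implied_1990 = remaining_mt / (1.0 - reduction)
    print(f"{label}: implied 1990 level {implied_1990:,.0f} Mt CO2eq")

# All implied 1990 levels fall in a narrow band of roughly 1,240-1,290 Mt CO2eq,
# i.e. the quoted figures are mutually consistent within rounding.
```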
Macroeconomic Effects
For the sustainability assessment of energy and climate mitigation scenarios, it is important to consider economic development. A sustainable and prosperous economy with a long-term perspective secures the future investment capability of economic actors, provides jobs, and passes on a working economic system to future generations.
Analysing the scenarios with PANTA RHEI, as described in Section 2.1, positive macroeconomic effects result for all three scenarios: the price-adjusted gross domestic product (GDP) is higher in the scenarios compared to the baseline development (Figure 2). These positive deviations increase until 2030 in scenario 1, and until 2035 in scenarios 2 and 3, and decrease thereafter but remain positive. In 2030, GDP is around 1.7% (scenario 1) and 1.4% (scenarios 2 and 3) higher than in the baseline.
The differences in GDP effects between the scenarios result from the assumptions described in Section 2.3. The main reason for the positive GDP effect is the significantly higher investment, which has an immediate effect on stimulating demand and directly increases GDP. In subsequent years, investments lead to higher depreciation, which is reflected in higher commodity prices and dampens economic growth somewhat. In scenarios 1 and 2, the additional investment in the period up to 2030 is higher than in scenario 3. Between 2030 and 2040, scenario 2 shows the highest additional investment, while from 2040 onwards, the comparatively low additional investment requirement in scenario 1 is particularly noticeable.
At the same time, electricity prices are only slightly higher in scenario 1 than in the baseline development, and even lower in scenarios 2 and 3, with prices slightly lower in scenario 2 than in scenario 3. Higher CO 2 prices in transport and heating lead to higher fossil fuel prices but have almost no negative impact on international competitiveness. Revenue recycling from carbon pricing can significantly influence the effect on GDP. CO 2 prices are lower in scenario 3 than in the other scenarios until 2030. They are particularly high in the first years in scenario 2. From 2035, the differences between the scenarios are only small. However, since the revenues are partly redistributed, certain structural changes result. For example, in scenario 3, the German renewable energy surcharge (EEG surcharge) is reduced with the additional revenue from the CO 2 price.
High PtX imports worsen the trade balance, while lower fossil fuel imports have a positive effect on the trade balance. Although fewer fossil energy sources are imported, the expensive PtX imports have a negative impact on GDP. For PtX, imports are very high from 2040 onwards in scenario 1. In scenario 3, this value is lower than in scenario 2 until 2040 and increases significantly thereafter.
The respective combinations of assumptions in the three scenarios result in scenarios 2 and 3 showing almost identical positive GDP effects relative to the baseline in 2030, while higher investment slightly favours scenario 1. In 2025, the GDP effects of scenarios 1 and 3 are a bit higher than in scenario 2. After 2030, scenario 1 falls significantly below the other two scenarios in comparison. Reasons are lower additional investment, high PtX imports, and the slightly higher electricity prices. The differences between scenarios 2 and 3 are very small in 2035 and 2040. Thereafter, electricity prices, investment, and PtX imports develop slightly more favourably in scenario 2 than in scenario 3.

Table 2 shows the effects on the components of GDP in 2030 and 2050. The additional investment shown in Section 2.3 becomes visible in gross fixed capital formation; in 2030, the additional investment in the transformation and final demand sector is highest in scenario 1, and in 2050 in scenario 3. With lower energy costs and higher GDP, private and public consumption increase. Imports also develop in line with economic output, although energy imports are significantly lower than in the baseline development. Exports fall with the slightly higher producer prices because it is assumed that no additional competitive advantages are gained on international markets because of the energy transition.

Besides GDP, further socioeconomic variables are important for evaluating the scenarios. Figure 3 shows the effects on employment, which is, given the higher economic output, also higher compared with the baseline scenario over the entire projection period; in 2030, there are 0.42% (scenario 1), 0.36% (scenario 2), and 0.46% (scenario 3) more persons employed compared to the baseline. In the period from 2025 to 2040, the employment effects are highest in scenario 3. The effect on employment is, thus, generally significantly lower than that on GDP across all scenarios, because part of the higher economic output leads to higher wages and, thus, again, to higher labour productivity.
A closer look at the labour market is provided by Figure 4 for the years 2030 and 2050. The consumer price index (see Figure S11 in the supplement) is higher in the scenarios than in the baseline; in 2030, the price index exceeds the level of the baseline by about 2.0% (scenario 1), 1.8% (scenario 2), and 1.3% (scenario 3), respectively. The main reasons for this are the carbon prices for non-EU ETS sectors, higher capital costs for the additional investments, and, in scenario 1, slightly higher electricity prices for final consumers in several years. The positive development on the labour market also leads to higher wages; in 2030, the average wage per hour exceeds the baseline by about 3.2% in scenario 1, 2.2% in scenario 2, and 2.5% in scenario 3. In the long term, the effects weaken in all scenarios.

Figure 5 presents the employment effects for selected economic sectors in scenario 3 (see Figure S12 for deviations of gross production of those sectors in scenario 3). Scenarios 1 and 2 show similar patterns of sectoral deviations from the baseline. The highest negative effects result in the manufacturing industry, which is included in the figure both in total and by major sectors, such as chemical industry and machinery. Although the effect on production in 2030 is positive, the potential for increasing labour productivity is highest in this sector. Moreover, the automobile sector is slightly negatively affected by the transition to electromobility. More inputs (e.g., batteries and electronics) and, among other things, fewer transmission parts are needed, leading to increased imports given the current input structure. However, the assumptions made for this are characterised by high uncertainties regarding the development of the automobile industry.
An increase in employment compared to the baseline can be seen, in particular, in the construction industry due to higher investment, as well as in wholesale and retail trade and services due to the overall higher output in 2030. In 2050, the employment effect in business-related services, which strongly depend on activities in the manufacturing sector, becomes negative. However, the overall employment effects are clearly positive in all scenarios over the entire projection period until 2050, even if the effects weaken after 2030.

In summary, scenarios 1, 2, and 3 show positive effects compared to the baseline, ranging between 1.4 and 1.7% for GDP in 2030. The differences between these three scenarios are, thus, quite small. In the long term, the highest positive effects result in scenario 2, both for GDP and employment.
Impact on Final Consumer Expenditures on Energy
The development of final consumer expenditures on energy in relation to GDP provides an indication of the overall burden of energy expenditures at macro level. This parameter was proposed by the Expert Commission on the "Energy of the Future" Monitoring Process [31] as a key indicator for the ex-post evaluation of the cost development of the energy transition. First calculations for the electricity, heating, and transport sectors were presented in [32]. Since then, it is part of the monitoring process, and figures are available in the current monitoring report up to 2019 [33].
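Structurally, the indicator is a simple ratio: final consumer expenditures on energy (quantities times end-user prices, summed over electricity, heating, and transport) divided by nominal GDP. The sketch below only reproduces this structure; all quantities, prices, and the GDP value are invented and merely chosen so that the ratio lands close to the 6.6% of GDP reported below for 2015.

```python
# Invented end-use quantities (TWh) and end-user prices (EUR/kWh) for one year.
energy_use = {
    "electricity": (500, 0.22),
    "heating":     (750, 0.07),
    "transport":   (620, 0.11),
}
gdp_billion_eur = 3500.0

# 1 TWh = 1e9 kWh, so TWh * EUR/kWh gives billion EUR directly.
expenditures_billion_eur = sum(q * p for q, p in energy_use.values())
share_of_gdp = 100.0 * expenditures_billion_eur / gdp_billion_eur
print(f"final consumer expenditures on energy: {expenditures_billion_eur:.0f} bn EUR "
      f"({share_of_gdp:.1f}% of GDP)")
```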
One open question is the handling of investment in the energy transition. The Expert Commission advocates that investment in the heating sector should be included in final consumer expenditures [32]. However, there is a lack of data in some cases and there are various methodological difficulties, e.g., that some of the investments in the heating sector are not made by private households and, therefore, do not restrict their budgets.
To assess the future development of this indicator, the calculation of final consumer expenditures was projected into the future using the energy quantities and prices from the scenarios. Some methodological details about the indicator are described in [34]. Figure 6 shows the results for 2030 and 2050 in absolute terms and in relation to GDP. Measured in absolute terms, expenditures in all scenarios rise in the future (in current prices), with the increase in spending on electricity being above average. In relation to GDP, final consumer expenditures decrease in the baseline from 6.6% in 2015 to 5.1% in 2030, and finally to 3.7% in 2050. In the scenario 1 with high imports of expensive PtX, they develop at a level slightly above the baseline. Thus, scenario 1 would at least not turn out worse than the baseline. The most favourable scenario in 2030 is scenario 3, with final consumer expenditures falling from 6.6% in 2015 to 4.7% of GDP in 2030. Even if final consumer expenditures are interpreted in a broader sense and the additional energy-related investment in final demand is added, as proposed by the Expert Commission [32], the relation to GDP increases and is, thus, slightly higher than in the baseline; nevertheless, they are lower in the future in all scenarios than in 2015 (see Figure S13 in the supplement). In 2030, this relation is lowest in scenario 3 at 5.4%. In addition to the methodological difficulties mentioned above, it should be noted when interpreting the results that expenditures in the building sector increase wealth and have a beneficial effect in the long term, while energy expenditures incur annually. Thus, they have a different quality.
In summary, the development of final consumer expenditures during the energy transition is, thus, positive at macro level. Although energy expenditures increase in the projected scenarios compared to today, it is at a similar level or even lower compared to the baseline. Measured in relation to GDP, the share of expenditures for the energy transition and for energy will be lower than today, even if investments in the energy transition in the final demand sector are fully included. In the narrow definition as energy expenditures, their proportion relative to economic output will fall sharply in the course of the energy transition.
Discussion
In the following, the results are first placed in the relevant literature. Then, the methods used are reflected upon and, finally, some policy implications are discussed.
The results show that ambitious climate protection in Germany can lead to positive macroeconomic effects, with final consumer expenditures on energy declining further in relation to GDP. Scenario comparison shows between 1.4 and 1.7% higher GDP in 2030 compared to the baseline development. Employment effects are also positive.
The impact assessment of the German Action Plan using the ISI-Macro model [35] arrives at similar results. The effect of the Climate Action Plan on GDP in 2030 is determined to be between 1.1 and 1.6% higher than the reference development, depending on the target path. Even in the case of a complete crowding out of other investment by the additional climate protection investment, both target paths still result in a slightly higher GDP compared to the reference development (0.3% and 0.5%, respectively). Another study on the impacts of the energy transition, which also analyses the sectoral and regional distribution of these effects [25], finds a GDP increase of 1.6% and additional employment of 1.1% in 2030. The sectoral effects of ambitious climate mitigation in Germany are mostly positive, except for the conventional energy and mining sector. According to this study, the construction industry, real estate activities, and the electricity sector, in particular, benefit. While results according to Figure 5 are similar for construction, they are more optimistic for manufacturing and a bit less optimistic for some service sectors.
An ex-post assessment of the employment effects in the renewable energy industry in Germany is provided in [36] using static input-output modelling; the gross effect of investment in RES technologies and related operation and maintenance, as well as the provision of biofuels is in the range of several hundred thousand jobs, peaking in 2011 at 417,000, declining to 304,000 jobs by 2018. The decrease is due to declining investments, while the importance of operation and maintenance and biogenic fuels is increasing. Further studies show positive net economic effects of expansion of renewable energy in Germany [20] and positive economic effects from wind energy expansion in a German region [37].
Impact assessment by the European Commission [38] analyses the transformation toward a climate-neutral economic system. Here, the macroeconomic effects are examined using three different models: GEM-E3 [39] is a CGE model, and E3ME [40] and QUEST [41] are macroeconometric models. However, QUEST is a macro model without sector detail. The impact on GDP is calculated for four different scenarios. On the one hand, a differentiation is made between the targets of 1.5 °C and 2 °C temperature increase. On the other hand, the policy efforts undertaken outside the EU are varied; in the one case, emissions are reduced as specified in the Nationally Determined Contributions (NDC), in the other case more ambitious emission targets are assumed. According to E3ME, EU GDP in 2050 is between 1.26 and 2.19% higher than the baseline scenario, whereas GEM-E3 projects a negative deviation from the baseline scenario between 0.13 and 1.3%. The results from QUEST lie in between in the slightly positive range.
At the global level, the socioeconomic effects of the energy transition can be estimated using multiregional input-output models, as in [42]. Here, a 2-degree scenario in which climate change mitigation is implemented globally is compared with a business-as-usual scenario. This results in 0.3% more jobs worldwide for the year 2030. The effect varies between countries and reaches up to 0.9%. These effects are in line with studies from international organisations [43][44][45][46] that confirm positive macroeconomic effects of climate protection programs at the international level. For example, [46] compares the effects on the global labour market between a reference scenario, in which the RE deployment plans currently adopted by national governments are reached, and an energy transition scenario, in which a greater expansion of renewable energies, as well as more electrification and higher energy efficiency, are pursued. For the energy sector, the energy transition scenario results in 14% more jobs in 2050 compared to the reference development. Other sectors see some job losses. Nevertheless, the number of jobs on macro level is slightly above the level in the reference scenario (+0.2%), i.e., the overall net employment effect is positive.
Ultimately, the results shown in Section 3 are consistent with the literature. Compared to the national studies, our results on GDP and employment are in the same direction and of similar magnitude. The sectoral effects differ to some extent. Such a picture also emerges in comparative studies at the European level. Our results also fit well within the range of studies at the European and global levels.
Assuming that the economy is not already at the optimum (as in CGE models) and that additional investments in the energy transition are possible, a smart energy and climate policy that combines price instruments with further specific measures can lead to positive macroeconomic effects. The assumption of additional investments is central to this. Sensitivity calculations have, therefore, been carried out on various occasions. They show that, assuming a complete crowding out of investments, the effects would be significantly lower, although they would remain positive. The assumption of additional investment is the more valid the weaker the capacity utilization of production and the larger the overall investment gap in the economy. In Germany, both these factors currently suggest that the assumption of investment additionality is plausible.
Even though the positive macroeconomic effects are also found in other studies, this is not uncontroversial, as methods also have an influence on the results. A meta-regression analysis on studies calculating employment effects of renewable energy deployment and energy efficiency improvement [47] shows that CGE models tend to find more negative effects than input-output based (including macroeconometric) methods. It is also noted that policy reports have a greater tendency for reporting positive employment effects than academic papers.
The German Council of Economic Experts (SVR) refers to studies applying CGE models that examine the economic effects of a nationally uniform carbon price, mainly for the US, and determine predominantly negative consequences for economic development [48] (p. 102). Accordingly, the average GDP growth rate could decline in the long term because of the introduction of a carbon price. The argumentation assumes that capital and energy would be used complementarily in companies, so that the higher costs for fossil energy sources due to the carbon price would lead to less investment. However, depending on the use of carbon tax revenues, studies with positive effects on GDP are also cited.
These studies assuming a uniform carbon price can only be compared to a limited extent to the scenarios examined here, which, in addition to regulation and support instruments for renewable energy and energy efficiency, introduce a carbon price only for companies (and households) that are not already part of the EU ETS. The assumption of complementarity of energy and capital, as assumed in the studies cited by the SVR, seems too simplistic for less energy-intensive companies. In the scenarios considered above, a substantial part of the climate policy measures leads to the substitution of carbon-intensive by carbon-free energy sources, so that the introduction of a carbon price does not necessarily lead to a reduction in energy use or to a crowding-out of investments due to higher energy costs. The promotion of energy efficiency in companies, as well as the insulation of houses, which are important components of the scenarios analysed here, precisely target the replacement of energy by capital.
The combination of bottom-up models with a top-down macroeconomic model has also been used in other analyses, which combines the respective advantages of both approaches. This works better for already known and marketable technologies than for new technologies, for which it is partly unclear which economic sectors the goods are composed of. It is also advantageous if the technology can be clearly assigned to an economic sector of the statistics. This works well for building renovation and for established renewable energies, such as wind and PV, for which corresponding input vectors and, thus, allocations are available [36]. It is much more difficult to translate the technology development into the macroeconomic model if, for example, in the case of hydrogen or synthetic fuels, it is unclear where the production will take place, how high the costs will be, and how the still small-scale production can be classified by statistics. In such cases, the uncertainties of translation into economic impulses are much larger. More research is needed in this area to better understand these economic aspects of emerging energy and climate technologies. This especially holds for the area of agriculture and forestry, as well as the possible carbon capture and storage.
There is also some trade-off between national models as applied here, which are built on current and detailed data and concentrate on impacts of national policy, and international models, which focus more on trade effects and climate change mitigation levels relative to international trading partners, which are often expressed as different carbon price levels.
This aspect also plays an important role for policy implications. With respect to international trade, the analysed scenarios lead to lower exports resulting from the slightly higher production prices compared to the baseline and the associated weaker position in international competition. However, more internationally coordinated action, in which the energy transition is implemented on a global level and more countries introduce a carbon tax, could prevent the weakening of exports. In this context, it is important not to underestimate the opportunity that Germany has in the markets for climate mitigation goods by moving as one of the first. German industry is already very well positioned internationally in many climate mitigation goods [49,50]. The more countries that pursue ambitious climate policies, the greater the opportunities for additional exports of corresponding technology goods.
In addition to the consideration of the international dimension, the main factors that play a role in practical implementation are possible losers at both the sectoral and regional levels, as well as possible negative personnel distribution effects. The energy transition will only be successful if policy offers new opportunities to particularly affected groups and distributes burdens fairly.
Regarding the social dimension, final consumer expenditures indicate the extent to which the overall economy is burdened with energy expenditures, but no distributional effects between income groups are apparent from this. Even though final consumer expenditures in relation to GDP are lower in scenarios 2 and 3 than in the baseline development, price increases can have a different impact on the financial burden of different income groups.
Low-income households mainly live in rented accommodation, drive older cars, if at all, and cannot afford to invest in renewable energies on their own roofs. They also spend a larger share of their income on energy. Specific measures are needed to make it financially possible for these groups to participate in the energy transition. Future studies should take a closer look at the distributional effects of climate policy. However, determination of distributional effects between households is methodologically difficult. A classification based on socioeconomic characteristics abstracts from the factors that determine personal energy consumption expenditures, which are, regarding heating, e.g., location, size, and insulation of the house or apartment, as well as energy source and age of the heating system. In the case of transport, the distance to the workplace or the available infrastructure play a decisive role for the expenditure level.
Specific funding measures can help to ensure that the energy transition is implemented in a socially just way. Low-income households, which are more burdened with energy expenditures relative to their income, or tenant households, which cannot choose the type of heating themselves and therefore have only limited influence on their heating expenditures, can thus be supported and enabled to benefit from the progress made through the energy transition. A per capita rebate on carbon price revenues and specific subsidy programs for low-income households could be useful supplementary instruments for this purpose.
Conclusions
The results show that the German energy and climate plan is not only technically feasible, but can also lead to positive macroeconomic effects, including employment effects, while remaining affordable for consumers. Some sectors are slightly negatively affected. In general, a just transition is possible. There are some minor differences in detail between the three scenarios. The instrument of socioeconomic impact assessments is useful and should be used in future policy making to an even greater extent and, if possible, more integrated with technical analyses. Certain aspects, such as the inclusion of impacts on different social groups, should be considered even more in the future.
What do the results presented mean for further analysis and research? First, the European NECP process is a major step forward towards a comprehensive impact analysis of ambitious energy and climate policies. It should further differentiate sectoral, spatial, and distributional effects so that policies can be designed to achieve a just transition of economies. This also includes specific skills on the labour market.
Individual carbon-intensive industries, such as iron and steel or non-metallic minerals, partly face challenges as a result of the envisioned transition. They need public support to invest in carbon-free technologies. Individual regions, especially those with coal mining and carbon-intensive heavy industry, are equally affected. There, structural change and the creation of carbon-free technologies should be promoted. On the labour market, the transition will lead to new requirements, for which training and continuing education should be provided. Effects on individual occupations and skill levels should be analysed in more detail. Impact assessments should be expanded to include additional indicators, such as more SDG indicators, to draw appropriate conclusions.
Finally, Germany must increase its climate mitigation ambitions if it is to contribute to achieving the 1.5 °C target. The EU has already agreed on more stringent targets (−55% in 2030 compared to 1990), which still have to be shared among the member states. Against this background, new zero-carbon and negative-emission technologies must be included in the analysis. At the same time, technology alone may not be enough to achieve the more ambitious targets. Behavioural changes may also be necessary and have to be considered. To achieve the just transition as soon as possible, effects on sustainable development indicators must be analysed comprehensively, and the results considered in climate and energy policymaking.
Supplementary Materials: The following are available online at https://www.mdpi.com/article /10.3390/su13116247/s1: Figure S1: Procedure of a macroeconomic model analysis, Figure S2: Cumulative additional investments, Figure S3: Electricity prices, Figure S4: CO 2 prices, Figure S5: Power-to-X imports, Figure S6: Primary energy consumption, Figure S7: GHG emissions in the baseline, Figure S8: Electricity generation, Figure S9: GHG emissions in scenario 3, Figure S10: GHG emissions in scenario comparison, Figure S11: Relative deviations of consumer price index compared to the baseline, Figure S12: Deviations of gross production in scenario 3 compared to the baseline, Figure S13: Final consumer expenditures, including additional investments and its relation to GDP. Table S1: Central framework data, Table S2: Deviations of GDP and components compared to the baseline.
Conflicts of Interest:
The authors declare no conflict of interest. The funder (German Federal Ministry for Economic Affairs and Energy, BMWi) had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. However, the funder was involved in the design of the scenarios. | 2021-07-27T00:04:48.773Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "77b00537b3cf49ae54693632304f779ca0dfa0d3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/11/6247/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "09325ff1e6882fcc8c65438c47912ea160670242",
"s2fieldsofstudy": [
"Economics",
"Environmental Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
216421143 | pes2o/s2orc | v3-fos-license | BIOSYNTHESIS OF FUNCTIONALIZED GOLD NANOPARTICLES BY USING METHYL COMMATE C IN Scoparia dulcis LEAF EXTRACT AS REDUCING AGENT
Biosynthesis of nanoparticles is studied extensively for its biocompatibility and environmentally friendly nature. In the present study, the synthesis of gold nanoparticles was carried out at room temperature using aqueous leaf extract of Scoparia dulcis without the addition of any external agent. A color change from light brown to ruby red was observed, indicating the formation of gold nanoparticles. The surface plasmon resonance band appeared at 542 nm in ultraviolet-visible spectroscopy. Fourier Transform Infrared spectroscopy identified the functional groups responsible for the reduction of chloroauric acid, and Transmission Electron Microscopy revealed that the gold nanoparticles were 20-27 nm in size. Gas Chromatography-Mass Spectrometry analysis and Nuclear Magnetic Resonance spectroscopy were used to determine the structure of the reducing agent.
INTRODUCTION
Plant-derived products are present in many pharmaceutical preparations and are currently recommended by medical practitioners as part of the health-care system. Relating the phytochemistry of plants to nanomaterials will be helpful in developing more effective drugs for various diseases. Recently, many nanomaterials, such as nanoparticles, nanowires and nanodisks, have been prepared using various synthetic protocols. 1 Nanoparticles are used to combat several diseases as medicinal agents in therapeutic applications. 2,3 Metallic gold nanoparticles have been employed in many fields, for example as catalysts for numerous environmental processes and as antimicrobial agents against a wide range of microorganisms. 4 Hence, the controlled fabrication of nanoparticles in terms of size and shape can augment, in particular, drug delivery 5 and catalytic applications. 6 Conventionally, nanoparticles are synthesized by different physical and chemical methods; disadvantages of these methods are that they are expensive and involve the use of toxic and hazardous chemicals. 7 Therefore, biological methods using microorganisms, 9 or enzymes 10 have been proposed as possible eco-friendly alternatives for the synthesis of nanoparticles. The potential of plant extracts for the synthesis of nanoparticles has gained priority in recent years because of their medicinal value. The use of plant materials for the green synthesis of nanoparticles has evolved over the last decade. Plants contain bioactive compounds such as flavonoids, phenols, citric acid, ascorbic acid, polyphenols, terpenes, alkaloids and reductases, which act as reducing agents. The involvement of plants and seaweeds in the synthesis of gold nanoparticles is a very promising area of nanomaterials research because the bioorganic compounds act as both reducing and capping agents. 8,10 It is well established that plants and their phytochemicals are used in various biomedical applications. 11,12 Plant materials that can cure various disorders in human beings or animals with little or no adverse effect are termed medicinal plants. S. dulcis plants have traditionally been used as remedies for diabetes mellitus in India, both in fresh and dried form. 13 The plant is also used to treat ailments such as fever, diarrhea, ulcer, cancer, wounds, skin rash, cough and tuberculosis. 13 Taking these potential applications into account, the present work has focused on synthesizing gold nanoparticles using Scoparia dulcis extract without the addition of any external agent. In addition, the reducing agent present in the extract was deduced with the help of mass and nuclear magnetic resonance spectroscopy.
EXPERIMENTAL Preparation of Plant Extract
Freshly collected leaves of Scoparia dulcis were washed properly with tap water followed by Milli-Q water to remove any contaminants. 5 g of leaves were boiled separately with 100 mL of deionised water at 80 °C for 10 min. The extract obtained was filtered through Whatman No.1 filter and kept at 4 °C until used for further study.
Synthesis of Gold Nanoparticles
Aqueous leaf extract (Scoparia dulcis) solutions of various concentrations were prepared separately, and to each concentration of leaf extract a prepared 1 mM HAuCl 4 solution was added. The synthesis of gold nanoparticles was carried out by mixing 60 mL of 1 mM HAuCl 4 solution with 40 mL of Scoparia dulcis aqueous leaf extract, and the reaction mixture was kept at room temperature. After 60 min, the color of the solution (leaf extract + HAuCl 4 ) changed from light yellow to ruby red, indicating the formation of gold nanoparticles. The resulting colloidal solution of gold nanoparticles was analyzed using various spectroscopic and microscopic techniques.
Characterization of the Scoparia dulcis Synthesized Gold Nanoparticles
The color change from light yellow to ruby red indicated the formation of gold nanoparticles. The UV-visible spectrum of the synthesized gold nanoparticles was measured with a UV-Visible spectrophotometer (Shimadzu UV-1800) in the wavelength range of 250-800 nm. FT-IR (Fourier Transform Infrared) spectroscopy, using a Shimadzu IRAffinity-1s spectrophotometer (Japan), was used to characterize the functional groups involved in the reduction of gold ions to nanoparticles. FT-IR analysis was carried out for both the plant extract and the gold nanoparticles to identify the possible biomolecules responsible for reduction and capping. Transmission Electron Microscopy (TEM) (JEOL 3010 instrument with a UHR polepiece) was used to examine the shape and size of the nanoparticles; images were recorded by placing a drop of the suspension on a carbon-coated grid and allowing the water to evaporate.
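Where the recorded spectra are exported as wavelength-absorbance tables, the position of the SPR band reported above can be located programmatically. The snippet below is only a minimal sketch of such post-processing; the CSV file name, column names and the 450-650 nm search window are hypothetical assumptions and are not part of the original protocol.

```python
# Hypothetical post-processing of an exported UV-Vis spectrum: locate the SPR
# absorption maximum within the 450-650 nm window.
import pandas as pd

spectrum = pd.read_csv("aunp_uvvis.csv")      # columns: wavelength_nm, absorbance
window = spectrum[(spectrum["wavelength_nm"] >= 450) & (spectrum["wavelength_nm"] <= 650)]
spr = window.loc[window["absorbance"].idxmax()]
print(f"SPR band at {spr['wavelength_nm']:.0f} nm (absorbance {spr['absorbance']:.3f})")
```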
GC-MS and NMR Analysis
GC-MS was used to assess the biomolecules present on the surface of the gold nanoparticles. Gold nanoparticles synthesized using plant aqueous extracts of Scoparia dulcis was taken for GC-MS analysis using GC-MS QP2010 Ultra (Shimadzu
RESULTS AND DISCUSSION
Scoparia dulcis Synthesized Gold Nanoparticles
Gold nanoparticles have many applications in the biomedical field. Improving the delivery of anticancer agents to tumors using nanoparticles is one of the most promising research areas in the field of nanotechnology. 14,15 The treatment of an aqueous solution of chloroauric acid with the biomass of Scoparia dulcis led to the formation of gold nanoparticles. Gold nanoparticles exhibit plasmon absorption bands that depend on their size and shape. An absorption maximum in the range of 520 nm indicates the formation of gold nanoparticles, which may be attributed to the excitation of surface plasmon resonance (SPR). The UV-vis spectrum of the aqueous leaf extract of Scoparia dulcis is shown in Fig.-1. The UV-vis spectrum of the gold nanoparticles synthesized using the aqueous leaf extract of Scoparia dulcis showed a peak at 542 nm (Fig.-2). 16 The formation of gold nanoparticles was monitored at different time intervals by UV-visible spectroscopy. The synthesis of gold nanoparticles increased with increasing incubation time of the HAuCl 4 solution with the aqueous leaf extract of Scoparia dulcis. The leaf extract of Scoparia dulcis efficiently reduced HAuCl 4 to gold nanoparticles; the secondary metabolites present in the extract act as reducing and capping agents for gold nanoparticle synthesis. The observed symmetric nature of the surface plasmon resonance band indicates the formation of spherical nanoparticles, which was further confirmed by the TEM images.
FTIR Analysis
FTIR analyses of the Scoparia dulcis aqueous leaf extract and of the gold nanoparticles synthesized with it were carried out to identify the possible functional groups involved in the reduction and stabilization of gold ions into gold nanoparticles. The FTIR spectrum of the Scoparia dulcis leaf extract showed various functional groups at different positions, with absorption peaks at 3400, 2939, 2389, 2118, 1633, 1389, 1250 and 1179 cm -1 (Fig.-3), while peaks at 3373, 2919, 1660, 1512, 1413, 1383, 1326 and 1237 cm -1 were observed for the synthesized gold nanoparticles (Fig.-3 and 4). The results revealed that the absorption peaks at 3400 and 2939 cm -1 were shifted and new broad absorption peaks appeared at 3373 and 2919 cm -1 , corresponding to O-H stretching of H-bonded alcohols and phenols, whereas the peaks at 2389 and 2118 cm -1 disappeared, indicating the groups responsible for the reduction and capping of the gold nanoparticles. 17,18
Transmission Electron Microscope (TEM) Analysis
Transmission electron microscopy was used to investigate the shape and size of the gold nanoparticles synthesized using the aqueous leaf extract of Scoparia dulcis. Sample preparation for TEM characterization involved placing a drop of the solution on a carbon-coated copper grid, which was dried at room temperature, while the residual solution was removed with blotting paper. High-resolution TEM image analysis showed nanoparticles with diameters ranging from 20-27 nm, 17 indicating that the biomolecules of Scoparia dulcis were effectively involved in the synthesis and controlled the formation of the gold nanoparticles. Gold nanoparticles with thin, smooth edges on their exterior were seen in the TEM micrographs. Nuclear magnetic resonance was used to ascertain the capping molecule on the surface of the gold nanoparticles. Figure-8 (A) and (B) represent the 1 H and 13 C NMR assignments for the compound methyl commate C taken at the retention time 38.64. The peaks found between 140 ppm and 160 ppm indicate the aromatic resonances of the compound with slight impurities. From the assignments, it is evident that the compound identified by mass spectrometry matches the NMR signals, confirming the presence of methyl commate C on the gold nanoparticles. A schematic diagram shows how the methyl commate C present in the aqueous leaf extract of Scoparia dulcis combines with gold ions to form the complex (Fig.-8). This will be helpful for further application studies aimed at preparing methyl commate C-reduced gold nanoparticles for potential biomedical applications. Based on this study, it can be inferred that medicinal plants can be utilized in an effective way in nanoparticle synthesis.
CONCLUSION
Biosynthesis of gold nanoparticles using a simple green method has been achieved, and the capping procedure could be useful for potential biomedical applications. Spectroscopic and microscopic analyses indicated that the gold nanoparticles are spherical and hexagonal in shape. The reducing agent in the plant was identified using GC-MS and NMR analyses, and the stabilizing compound was found to be methyl commate C, a compound well known for its biomedical applications.
"year": 2020,
"sha1": "5b452a33aac9690c33b3c81c21c05e8bb25b23d9",
"oa_license": null,
"oa_url": "https://doi.org/10.31788/rjc.2020.1315515",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "60ab7e5f990e543c458954b193b5e9a89d9c479a",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
258133726 | pes2o/s2orc | v3-fos-license | Rapidly progressive respiratory failure due to antisynthetase syndrome related interstitial lung disease
Abstract A 65‐year‐old female was admitted with rapidly progressive respiratory failure requiring intubation and mechanical ventilation. She was considered to have an infective exacerbation of underlying interstitial lung disease (ILD). She improved on antibiotics, but the interstitial process progressed rapidly, and she could not be weaned. An antimyositis antibody panel yielded a strongly positive anti‐Jo‐1 and anti‐Ro 52. A diagnosis of antisynthetase syndrome (ASS) associated ILD, a very rare disease with high mortality, was made. She was managed with high‐dose corticosteroids and intravenous immunoglobulin therapy and was eventually liberated from mechanical ventilation. This case highlights the importance of considering ASS in an otherwise unexplained rapidly progressive ILD requiring mechanical ventilation.
INTRODUCTION
Antisynthetase syndrome (ASS) is a rare but probably underdiagnosed autoimmune disorder. 1,2 It is characterized by the presence of IgG anti-aminoacyl RNA-synthetase antibodies, with various clinical manifestations which may be present at different stages of the disease. These include autoimmune inflammatory polymyositis (PM), dermatomyositis (DM), interstitial lung disease (ILD), arthritis, fever, Raynaud's phenomenon and mechanic's hand. [1][2][3]
CASE REPORT
A 65-year-old woman was referred with rapidly progressive respiratory failure, initially thought to be caused by community acquired pneumonia (CAP). She was intubated and mechanically ventilated at a district hospital and transferred to our intensive care unit (ICU).
Collateral history obtained from her son, however, suggested a 6-month history of progressive dyspnoea and impaired exercise tolerance, with a one-week history of coughing, fever, and rapid decline. Her past medical history included Hashimoto's thyroiditis. She also had a history of previous breast and previous cervical cancer, both treated with curative intent.
She was considered to have an infective exacerbation of underlying interstitial lung disease. In addition to signs of consolidation of the lower lobes, she was febrile and shocked on admission. There were no signs suggestive of systemic disease.
Her initial imaging was suggestive of consolidation, although some features suggestive of an underlying ILD were present as well ( Figure 1). She had an elevated white cell count of 47.9 Â 10 9 /L (normal 3.90-12.60) and C-reactive protein (CRP) of 239 mg/L (normal <5 mg/L). Initial blood cultures were all negative, her tracheal aspirates were Gene Xpert negative, and her urine Legionella antigen test was negative.
She initially required significant ventilatory support with high positive end-expiratory pressure (PEEP) and required high-dose adrenalin (0.02 mcg/kg/min) to maintain adequate tissue perfusion. She was commenced on co-amoxiclav and azithromycin at the district hospital but given the continuing rapid deterioration, resistant organisms were considered, and her antibiotics were escalated to empiric Meropenem.
The patient was liberated from vasopressor support and her ventilatory support was weaned within 5 days of admission. She had two failed attempts at extubation, on both occasions she was successfully extubated for 24 h, and subsequently became tachypnoeic and desaturated, requiring reintubation.
Her infective parameters improved (CRP to 16 mg/L), and imaging showed radiographic improvement of the consolidation, but worsening ground glass opacities and interstitial infiltrates (Figure 2).
A diagnosis of antisynthetase syndrome (ASS) was made, and she was initially treated with high-dose corticosteroids (hydrocortisone 100 mg 6-hourly), followed by five daily infusions of IV immunoglobulin (IVIG) at a dosage of 400 mg/kg. She was once again weaned to minimal settings, and her CK dropped to 327 U/L.
Her ICU stay was complicated by several episodes of nosocomial sepsis (including two episodes of ventilatorassociated pneumonia (Serratia marcescens and Stenotrophomonas maltophilia)), candidaemia (Candida albicans) as well as an iatrogenic pneumothorax. She was eventually successfully weaned and liberated from the ventilator, discharged from ICU on oral prednisone (1 mg/Kg/day) to a high care setting and after almost 3 months after presentation, discharged home without the need for domiciliary oxygen. She is still maintained on oral prednisone with the view to review the management and potentially add a steroidsparing immunosuppressive agent, in all probability, Mycophenolate Mofetil.
DISCUSSION
The classic triad of ILD, myositis, and arthritis accounts for up to 90% of ASS. 1 However, recent evidence suggests that single-organ involvement at presentation is far more common than originally described. 1 ILD is the most frequent and often dominant organ involvement in Jo1 antibody-associated antisynthetase syndrome, as seen in our case. 2 The syndrome is generally considered to present in patients with an antisynthetase antibody plus one or two of the following features: ILD, inflammatory myopathy or inflammatory polyarthritis. 3 Raynaud's phenomenon, mechanic's hands, and fever are other features that are less frequent clinical findings. ASS-associated ILD often confers a poor prognosis and can be fatal despite aggressive treatment with corticosteroids.
FIGURE 1: (A) A computed tomography scan at presentation demonstrated confluent opacification with air-bronchograms consistent with pneumonic consolidation. (B) A higher slice showed patchy ground glass opacifications in both lung fields consistent with an interstitial process. Breast prostheses are visible on both slices.
FIGURE 2: (A) The chest radiograph on admission illustrating bilateral lower zone pneumonic consolidation. (B) A chest radiograph following steroid therapy and later IV immunoglobulin therapy showing marked resolution (an iatrogenic pneumothorax was also present).
The co-occurrence of anti-Jo1 and anti-Ro52 antibodies in the antisynthetase syndrome has been related to more severe ILD and acute-onset respiratory failure associated with a poor prognosis. 2 At present, eight anti-aminoacyl tRNA-synthetase antibodies (anti-ARS) have been identified, 1-5 including anti-Jo1 (the most common), anti-PL7, anti-PL12, anti-EJ, anti-OJ, anti-KS, anti-ZO, and anti-YRS/HA. 5 There is a lack of evidence to guide treatment strategies in ASS-ILD, and most suggested therapies are based on case reports and retrospective studies. 5,6 The proposed mechanisms of action of IVIG include alterations in the function of innate and adaptive immunity, as well as changes in inflammatory cytokine production and gene expression. 7 Its role in inhibiting autoantibodies and its lack of immunosuppressive effect make IVIG the preferred initial therapeutic agent in many centres. 5 It can also be used as a potential salvage therapy in patients with active progressive ASS-associated ILD who are not responding to corticosteroids. Furthermore, IVIG has been shown to have a steroid-sparing effect in the management of ASS-associated ILD.
According to a meta-analysis in 2017, other available treatment options include corticosteroids, calcineurin inhibitors, azathioprine, cyclophosphamide, and Rituximab, with cyclophosphamide having the highest survival rate at 3 months (72.4%). 8 Of note, no studies included in this meta-analysis addressed the use of mycophenolate mofetil.
In conclusion, this case highlights the importance of considering ASS in an otherwise unexplained rapidly progressive ILD resulting in respiratory failure.
AUTHOR CONTRIBUTIONS
All authors were involved in the clinical management of the patient. MA and CK wrote the manuscript, that was critically reviewed by all co-authors.
CONFLICT OF INTEREST STATEMENT
Coenraad FN Koegelenberg is an Editorial Board member of Respirology Case Reports and a co-author of this article. They were excluded from all editorial decision-making related to the acceptance of this article for publication. Coenraad FN Koegelenberg is an Associate Editor for Respirology Case Reports. The other authors have no conflict of interest to declare.
DATA AVAILABILITY STATEMENT
Data available on request from the authors ETHICS STATEMENT | 2023-04-15T05:11:54.922Z | 2023-04-12T00:00:00.000 | {
"year": 2023,
"sha1": "249fdd649f8d85f221284764f4d26bb1a162197a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "249fdd649f8d85f221284764f4d26bb1a162197a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
8788838 | pes2o/s2orc | v3-fos-license | Prognostic Value of Immunohistochemical Staining of p53, bcl-2, and Ki-67 in Small Cell Lung Cancer
Small cell lung cancer (SCLC) is one of the most fatal cancers in humans and many factors are known to be related to its poor prognosis. Immunohistochemical (IHC) stainings were done on SCLC specimens in order to investigate the prognostic value of apoptosis-related gene expression and a tumor proliferative marker, and the relationships of these IHC results with patients' clinical characteristics, chemoresponsiveness, and survival were analyzed. The medical records of 107 patients were reviewed retrospectively. IHC stainings for p53, bcl-2 and Ki-67 expressions were performed in 66 paraffin-embedded biopsy samples. Sixty-six out of the 107 patients were evaluable for response rate and survival. The overall response rate was 75% (95% Confidence Interval=74-76%) and the median survival time was 14 months. The median survival time of limited stage was 16 months and that of extensive stage was 10 months. The prevalence of p53, bcl-2 and Ki-67 expression was 62%, 70%, and 49%, respectively. There were no correlations of the immunoreactivities of p53, bcl-2 and Ki-67 with clinical stage, chemoresponsiveness or overall survival. The clinical stage was the only prognostic factor influencing survival. The expression rates of p53, bcl-2, and Ki-67 were relatively high in SCLC without any prognostic significance. The exact clinical role of these markers should be defined through further investigations.
INTRODUCTION
Lung cancer is the most common fatal cancer worldwide and has become the leading cause of cancer deaths in Korea. The number of new cases will continue to rise (1). The risk of human cancer can be associated with environmental, occupational, and recreational exposures to carcinogens (2). Oncogenesis is related to epigenetic changes, oncogenes, tumor suppressor genes, apoptosis, and genetic changes associated with DNA repair. There have been many investigations on the prognostic role of p53, bcl-2, and Ki-67 expression in non-small cell lung cancer (NSCLC). Early reports of p53 mutation suggested a variable relationship to survival (3,4). Another study showed that p53 mutations detectable in tumor tissues are an independent marker of poor prognosis in resectable stage I NSCLC (5). Data on bcl-2 expression from a study on NSCLC patients showed positive correlations with longer survival (6). There were reports on an inverse relationship between bcl-2 and p53 in NSCLC (7,8). However, other studies did not support a relevant prognostic role for p53, bcl-2, or Ki-67 immunohistochemical markers in NSCLC regardless of stage (9,10). No relationship was observed between the expression of Ki-67 and that of bcl-2. The relationship between a positive rate for Ki-67 and prognosis remains unclear (11). Small cell lung cancer (SCLC) has a poor prognosis. Most patients carry a large tumor burden at the time of diagnosis. SCLC has been studied less than NSCLC. There was a report that bcl-2 expression did not influence survival in SCLC (12). We conducted a retrospective study on the value of mutant p53, bcl-2, and Ki-67 expressions in SCLC patients from Korea Cancer Center Hospital.
MATERIALS AND METHODS
All of the patients were primarily diagnosed as SCLC at the Department of Internal Medicine of Korean Cancer Center Hospital between February 1997 and December 2002. Seventy-five of 107 SCLC patients were treated. Immunohistochemical (IHC) stainings for mutant p53, bcl-2 and Ki-67 expressions were performed in the 66 paraffin-embedded biopsy samples among the 75 member treatment group. The study group included patients (57 males and 9 females, 61 yr mean age) with cytologically or histopathologically diagnosed SCLC. Histopathological diagnoses were done by bronchoscopic biopsy, lymph node biopsy, and percutaneous lung gun biopsy. Staging procedures included physical examination, chest radiography, chest computed tomography (CT) scan, and bone scintigraphy. Brain CT/magnetic resonance image (MRI), bone marrow biopsy and other studies were optional in asymptomatic patients but mandatory in those with symptoms suggesting disseminations. Patients were staged as either limited, with disease confined to one hemithorax, or extensive, with disease beyond one hemithorax. All patients were administered one of the three chemotherapy regimens: cisplatin and etoposide (EP); cyclophosphamide, adriamycin, and vincristine (CAV); etoposide and carboplatin (EC). We established the principle that all limited disease patients would be treated by radiotherapy if the chemotherapy were effective. And then 40 patients out of all limited disease patient were treated by radiotherapy. Response categories i.e. complete response (CR), partial response (PR), stable disease (SD), and progressive disease (PD) were evaluated according to new response evaluation criteria in solid tumor (RECIST) guidelines (13). The CR and PR patients were considered responsive. Survival was defined as time lapse from the day of first chemotherapy course until the day of death.
Five-micrometer thickness, paraffin-embedded, tissue sections were deparaffinized in xylene and hydrated in a graded ethanol series. Endogenous peroxidase activity was blocked with 3% hydrogen peroxide in methanol. Tissue sections were heated in 10 mM sodium citrate, pH 6.0, in a microwave oven for 10 min to expose the antigens. Sections were then washed with Tris-buffered saline (TBS). The streptavidinbiotin-peroxidase complex technique (universal LSAB kit, DAKO, Glostrup, Denmark) was used for immunohistochemical stain. The sections were incubated overnight at 4℃ with the three monoclonal antibodies: p53 (1:50, DAKO, Glostrup, Denmark), bcl-2 (1:40, DAKO) and Ki-67 (1:50, Zymed, San Francisco, CA, U.S.A.). Sections were then washed with TBS and incubated with biotinylated secondary antibody for 30 min at room temperature. After washing, the sections were incubated with peroxidase-labelled streptavidin at room temperature for 30 min. Diaminobenzidine/hydrogen peroxidase was used as a chromogen and sections were counterstained with hematoxylin. When the cell nuclei were stained strongly with dark brown color, the cells were considered to be positive for p53 and Ki-67. Bcl-2 expression was noted in the cytoplasm. If more than 10% of the tumor cells were positively stained, it was considered as positive for p53 and bcl-2. In the case of Ki-67, the labeling index was determined by scanning areas with uniformly stained cells at a low magnification, followed by counting of the cells at high power field (×400). Appropriate positive and negative controls were used for all the procedures.
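As a worked illustration of the scoring rules just described (more than 10% positively stained tumor cells counted as p53- or bcl-2-positive, and the Ki-67 labeling index dichotomized at 50%), the sketch below simply restates that arithmetic in code; the function names and example counts are invented for this illustration and do not come from any scoring software used in the study.

```python
# Illustrative arithmetic for the IHC scoring criteria described above.
def fraction_positive(positive_cells: int, total_cells: int) -> float:
    return positive_cells / total_cells

def p53_or_bcl2_status(positive_cells: int, total_cells: int) -> str:
    # p53/bcl-2: positive when more than 10% of tumor cells are stained.
    return "positive" if fraction_positive(positive_cells, total_cells) > 0.10 else "negative"

def ki67_group(positive_cells: int, total_cells: int) -> str:
    # Ki-67 labeling index dichotomized at 50% for the survival analysis.
    return ">=50%" if fraction_positive(positive_cells, total_cells) >= 0.50 else "<50%"

print(p53_or_bcl2_status(120, 1000))   # 12% stained -> "positive"
print(ki67_group(620, 1000))           # labeling index 62% -> ">=50%"
```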
Median values and ranges were used to describe patient characteristics and to compare the factors. To examine factors affecting survival, the following variables were analyzed: age, sex, disease extent, performance status, and Ki-67, p53, and bcl-2 expression. Each of these variables was divided into two categories as follows: age was divided by the mean age, i.e., less than 60 yr vs. more than or equal to 60 yr; sex, male vs. female; disease extent, limited disease (LD) vs. extensive disease (ED); performance status (PS), good (PS 0-1) vs. poor (PS 2-4); Ki-67, equal to or greater than 50% vs. less than 50%; p53 and bcl-2, positive vs. negative. To compare response rates, the chi-square test was used. Survival was assessed by the Kaplan-Meier method and compared with the log-rank test. The hazard ratio and its 95% confidence interval for each variable were estimated by the Cox proportional hazards model. A p-value lower than 0.05 was considered statistically significant. The chi-square test, with Fisher's exact test where appropriate, was used for categorical comparisons. SPSS for Windows, version 11.0 (standard version), was used for all analyses.
Patient characteristics are shown in Table 1. The overall response rate for the 66 evaluable patients was 72%. There was no difference in response rate among the three regimens (EP, 69%; CAV, 65%; EC, 75%; p=0.83). Forty-one (62%) of the 66 patients were positive for p53 antibodies: 25 (40%) with limited disease (LD) and 16 (24%) with extensive disease (ED) (p=0.36). The response rate for the p53-positive group was 68%, while that for the negative group was 84% (p=0.15). Survival data were available for 66 patients. Median survival was 13 months for the p53-positive group and 16 months for the negative group (p=0.50). There were no significant differences in sex, age, disease extent, performance status, or survival between patients with and without mutant p53 antibodies (Table 2). According to the analysis of prognostic factors influencing chemotherapeutic response rate and survival, only the extent of the disease was significant for patient survival by univariate and multivariate analysis. The median survival of all patients was 14 months; 16 months for LD patients, and 10 months for ED patients (p=0.04). We also analyzed survival between the p53-positive and p53-negative groups in LD and ED patients separately. In LD patients, median survival was 15 months for the p53-positive group and 18 months for the negative group (p=0.37). In ED patients, median survival was 10 months for the p53-positive group and 12 months for the negative group (p=0.87). No correlation was found between survival and p53 expression in either LD or ED patients. Therefore, p53 was not a useful prognostic indicator in SCLC. For bcl-2, 45 (70%) of 64 patients were positive: 26 (40%) with LD and 19 (30%) with ED (p=0.15) (Table 2). The response rate was 75% for the bcl-2-positive group and 73% for the negative group (p=0.72). Median survival was 16 months for the bcl-2-positive group and 15 months for the negative group (p=0.46). In LD patients, median survival was 15 months for the bcl-2-positive group and 19 months for the negative group (p=0.37). In ED patients, median survival was 9 months for the bcl-2-positive group and 12 months for the negative group (p=0.16). No correlation was found between survival and bcl-2 expression in either LD or ED patients. As with mutant p53, there were no significant differences in sex, age, disease extent, performance status, or survival between patients with and without bcl-2 expression.
We separately analyzed Ki-67 antigen expression both above and below 50%. Thirty-two (49%) out of 65 were in the above 50% group, 22 (34%) with LD and 10 (15%) with ED (p=0.60) ( Table 2). The response rate was 69% for the above 50% group, and 79% for the below group (p=0.55). Median survival was 16 months for the above 50% group, and 15 months for the below group (p=0.21). In LD patients, median survival was 16 months for the above 50% group, and 16 months for the below group (p=0.37). In ED patients, median survival was 12 months for the above 50% group and 10 months for the below group (p=0.65). No correlation was found between survival and Ki-67 expression in either LD or ED patients. There were no significant differences in sex, gender, disease extent, performance status, and survival according to Ki-67 expression.
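For readers who wish to reproduce this type of analysis, the sketch below mirrors the statistical workflow described in the methods (Kaplan-Meier estimation, log-rank comparison, and Cox proportional-hazards regression on the dichotomized covariates). It uses the Python lifelines package as a stand-in for SPSS; the data file and column names are hypothetical placeholders rather than the study data.

```python
# Hypothetical reproduction of the statistical workflow with the lifelines package.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("sclc_cohort.csv")   # columns: months, died, extensive, p53_pos, bcl2_pos, ki67_high

# Kaplan-Meier estimate and log-rank comparison by disease extent.
ld, ed = df[df["extensive"] == 0], df[df["extensive"] == 1]
km = KaplanMeierFitter().fit(ld["months"], event_observed=ld["died"], label="limited disease")
lr = logrank_test(ld["months"], ed["months"],
                  event_observed_A=ld["died"], event_observed_B=ed["died"])
print(f"log-rank p = {lr.p_value:.3f}")

# Cox proportional-hazards model on the dichotomized covariates.
cph = CoxPHFitter()
cph.fit(df[["months", "died", "extensive", "p53_pos", "bcl2_pos", "ki67_high"]],
        duration_col="months", event_col="died")
cph.print_summary()
```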
DISCUSSION
The p53 protein is recognized as an important cell regulatory factor that arrests the growth of cells containing damaged DNA. A reversible arrest in the G1 phase of the cell cycle enables DNA repair before DNA synthesis. When appropriate repair is not possible, p53 expression may trigger apoptosis, a reversible process culminating in cell death. If normal, wild type p53 function is lost, the treatment is relatively resistant, as a result of deficiency of p53-dependent apoptosis (14,15). The p53 mutation is the most common genetic mutation in cancer. Thus, the mutated gene loses its natural tumor suppressor function allowing damaged cells to divide unchecked and finally to become malignant cells. There are still controversies concerning the prognosis and survival in lung cancer patients. Some studies reported that p53 mutation had been associated with poor prognosis and shorter survival in NSCLC (5,8,16,17). However, some others reported no such correlation in NSCLC (3,9,10,18). Others reported a favorable prognosis in NSCLC (4,19). There have been fewer studies in SCLC than in NSCLC. Some studies showed that bcl-2 expression and p53 mutation had no relation to survival in SCLC (9,10,12,20). Another study divided their SCLC patients into limited and extensive-stage disease, after which p53-antibody positivity emerged as an independent marker of poor prognosis in LD but had no relation to survival (21). One study indicated that p53 played an important role as a determinant of chemosensitivity in SCLC and that p53 immunostaining could be used in clinical practice to determine the presence of tumor-chemoresistance (22). Kenichi et al. reported that patients with expression of mutant p53 protein showed lower response rate than those having p53-negative tumors and were less sensitive to anticancer drugs (23). Our study, however, indicated that p53 antibody had no relation to survival.
The bcl-2 proto-oncogene is encoded by a 230-kb gene that gives rise to a 24-to 26-kDa protein that is localized in the inner mitochondrial membrane, and, to a lesser extent, in the cell membrane (24). The major function of bcl-2 appears to be the inhibition of apoptosis or programmed cell death, whereas bax, bad, bak, and others promote cell death (25). It is well documented that bcl-2 becomes deregulated in tumor cells as a result of translocation into the immunoglobulin heavy-chain locus, and is therefore constitutively activated in follicular lymphoma (26). In epithelial tumors, no genetic change of bcl-2 has been demonstrated, in contrast to lymphocytic neoplasia. However, bcl-2 expression has been described in a series of solid tumors, particularly in NSCLC and in breast cancer (27). An in vitro study showed that bcl-2 expression may be related to chemoresistance due to inhibition of drug-induced apoptosis (28). Thus multidrug resistance is probably linked, at least in part, to high levels of bcl-2 expression. Bcl-2 blocks the cell death pathway (apoptosis) and is not directly associated with cell proliferation (29). Studies examining the association of bcl-2 expression with survival in NSCLC have been contradictory, with some reporting bcl-2 as an indicator of better survival (6,7), while other showed no survival differences according to bcl-2 status (30,31). There have been few studies in SCLC. From our results, bcl-2 expression was not an independent predictor of survival in SCLC.
BrdU (bromodeoxyuridine), PCNA (proliferating cell nuclear antigen), Ki-67, and others have been used as cell proliferation markers. PCNA, a 36-kilodalton, nuclear polypeptide that is related to cell proliferation, is identical to cyclin, which is a protein that appears in the proliferative phase of cells (32). Being synthesized during the late G1 to S phase, PCNA is an auxillary protein for DNA polymerase . Ki-67, a more reliable proliferating marker, reacts with nuclear antigen that is present only in proliferating cells (G1, S, G2, and M phase), not in resting (G0) cells (33)(34)(35), and thereby provides a reliable method for evaluating tumor growth fraction in many malignant tumors, including lung cancer (36). It appears to be a useful prognostic marker of NSCLC, especially in the early stage (37). Compared to p53 and bcl-2, the relationship between Ki-67 and survival has been studied less extensively in NSCLC, and especially so in SCLC. Our results showed that Ki-67 expression had no relation to survival in SCLC.
From our results, the apoptosis-related genes, mutant p53 and bcl-2, and the proliferative marker, Ki-67, were found to be useful in the diagnosis of SCLC patients because of their relatively high prevalence (38,39). However, as they did not show clinical significance for prognosis or survival, further investigation will be required to confirm the clinical role of these markers. | 2018-04-03T05:32:57.038Z | 2006-02-01T00:00:00.000 | {
"year": 2006,
"sha1": "ce6e43612db2e076145148c46c854992dc7cbc62",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc2733975?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce6e43612db2e076145148c46c854992dc7cbc62",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228964266 | pes2o/s2orc | v3-fos-license | Hydrogeological System of Injana Formation in Salahaddin Governorate/ Iraq
Injana Formation is the most extended geological formation in Salahaddin Governorate/ Iraq. About 10% of the studied area is covered by the outcrops of the formation as a recharge area. The formation is a subsurface within the unsaturated zone in 5% of the total studied area, while it exists within the saturated zone in about 85%; it is a major confined groundwater aquifer. Therefore, the hydrogeological system of the layers needs to be re-evaluated to describe the successions of aquifers and confining layers and their relation with each other. The lithology, depths, water table, saturated thickness, hydraulic characteristics of the aquifers, and the lateral and vertical variations of these characteristics were adopted to classify the hydrogeological system. The lithological composition is mainly composed of alternating successions of claystone, siltstone and sandstone with some differentiation within the studied area. The Quaternary and, occasionally, the Mukdadiya Formations are dry or of secondary aquifer, except in limited areas of the governorate. Injana Formation represents the major upper aquifer in the area, especially in the western bank of Tigris River. The outcrops of the formation are adjacent to Makhul and Hamrin anticlines; while Al-Tharthar valley represents a recharge area for the groundwater. In the remaining parts of the studied area, the formation represents the main deeper of a confined to semi-confined groundwater aquifer. The general direction of the groundwater movement in this hydrogeological system is towards the discharge area represented by Tigris River and Tharthar Lake, which is compatible with the topographic slope. The formation is classified as a multi-layer aquifer hydrogeological system.
Introduction
Salahaddin governorate is located in the north of central Iraq, 175 km north of Baghdad governorate on the two banks of the Tigris River, between the UTM coordinates of 269966-496883 E and 3722989-3951057 N. The governorate includes the towns Tikrit, Sharqat, Baiji, Samarra, Balad, Dujail, Dor, Alam, and Toz. The western parts (west Makhul and Hamrin) of the governorate lie within the stable shelf zone, while its eastern part (east Makhul and Hamrin) lie within the unstable shelf [1].
The geological formations of Fatha, Mukdadiya and Bai Hassan represent the well-known secondary groundwater aquifers within Salahaddin area, as well as the Quaternary deposits, while the deposits of the Injana Formation layers are the main aquifer in the western parts of the Tigris River in the governorate. Mukdadiya Formation layers represent the secondary upper groundwater aquifer in the area between the Hamrin mountain range to the north and east and the Tigris River to the west ( Figure-1). Al-Basrawi and Al-Muslih [2] reported that the groundwater aquifers in the area are confined and unconfined, in addition to the presence of perched aquifers that can be invested by hand dug wells in the governorate.
Injana Formation (Upper Miocene) is the most extensive geological formation in the governorate of Salahaddin. It is outcropped in about 10% of the governorate, where it plays the role of a recharge area, and it represents the main groundwater aquifer in 85% of the total area of the governorate. It mainly consists of a sequence of sandstone, siltstone and claystone layers. The maximum thickness of Injana Formation in the area of study is about 550 m (Figure-2). The formation is a clastic alternation of fluvial cycles in lenticular bodies of medium to coarse sandstone, siltstone and claystone that pass into each other, deposited in a fluvial-tidal environment [3,4]. The sandstone of Injana Formation is composed primarily of rock fragments (sedimentary, igneous and metamorphic), quartz (monocrystalline and polycrystalline) and feldspars (orthoclase, microcline and plagioclase). The matrix is subordinate and the cement is mostly carbonate [5].
The claystone beds in the Injana Formation are reddish brown, brownish, conchoidally fractured, calcareous, occasionally silty and containing lenses of siltstone and sandstone. Thickness of individual beds is in the range of 5-20 m, while the sandstone beds are reddish brown, grey, thin bedded to massive and the thickness range of individual beds is 0.5-15 m. Few lenses of limestone and marly limestone occur at different levels in the lower part, and they are greenish and fossiliferous [4]. The transmissivity and hydraulic conductivity are the main significant hydrogeological data required for managing groundwater resources; transmissivity describes the general ability of an aquifer to transmit water over the entire saturated thickness, while hydraulic conductivity measures this ability by unit area [6]. Hydraulic conductivity is defined as the quantity of water moved through the porous media within the time unit under the effect of hydraulic gradient in one unit through one area unit measured vertically on flow direction (length unit/time unit) [7].
The layers of geological formation can be classified, as related to their hydrogeological properties, to aquifers or confining layers. An aquifer is water bearing geologic formation or stratum capable of transmitting water through its pores at a significant rate for economic extraction by wells [8]. It has sufficient permeability and thickness to transmit groundwater within its layers [9].
As for the confining layers, the aquitard is a partially permeable formation or layer(s) through which only seepage is possible and thus the yield is insignificant compared with aquifers, while the aquiclude is a formation which is essentially impermeable to the flow of water [10]. There is a recent trend toward the use of the terms aquifer and aquitard, while neglecting the term aquiclude [11].
Most of the wells drilling works in Salahaddin Governorate are conducted to invest the groundwater from the Injana aquifer for irrigation purposes. In most cases, the drilling is random, without an accurate prior geological database, leading to the failure of many wells, with additional costs and efforts. This study will be a useful database that benefits the decision maker of groundwater sector in the governorate to choose suitable places and conditions of drilling.
The objective of the present study is to describe the lithology, hydrogeologic and hydraulic conditions, extensions, depth, thickness, and groundwater origin in the study area. We also estimate the recharge and discharge areas and the general direction of groundwater movement in Injana water bearing layers within Salahaddin governorate.
Materials and Methodology
The archived data from the General Commission of Groundwater/ Ministry of Water Resources and the General Establishment of Geological Survey and Mining were used to extract the lithological description and hydraulic properties of the hydrogeological system of Injana Formation. The data of 2322 wells were used to derive the water table of the aquifer, followed by the flow directions and depth to groundwater in the studied area. The field and laboratory descriptions of the lithological sequences from 165 wells, based on the data of Table-1 within Salahaddin governorate, were studied to determine the extensions, thicknesses of clay beds, produced thickness, lateral variations, and geometrical properties of the water-bearing layers. The pumping test data were used to analyze and estimate the transmissivity, hydraulic conductivity, and maximum drawdown of the aquifers.
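As an illustration of how the water-table and flow-direction maps discussed in the following sections can be produced from such well archives, the sketch below grids scattered piezometric heads and derives a down-gradient flow field. The CSV file name and column names are hypothetical placeholders, not the actual archive format.

```python
# Minimal sketch: grid scattered piezometric heads from a well archive and
# derive the down-gradient flow direction.  File and column names are assumed.
import numpy as np
import pandas as pd
from scipy.interpolate import griddata

wells = pd.read_csv("salahaddin_wells.csv")            # one row per observation well
x = wells["utm_e"].to_numpy()
y = wells["utm_n"].to_numpy()
head = wells["water_level_masl"].to_numpy()            # piezometric head, m a.s.l.

# Regular grid over the area of interest (UTM metres), linear interpolation of heads.
xi = np.linspace(x.min(), x.max(), 300)
yi = np.linspace(y.min(), y.max(), 300)
XI, YI = np.meshgrid(xi, yi)
H = griddata((x, y), head, (XI, YI), method="linear")

# Groundwater moves down-gradient, i.e. opposite to the head gradient.
dHdy, dHdx = np.gradient(H, yi, xi)
flow_x, flow_y = -dHdx, -dHdy
```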
Results and Discussion Aquifer System
Depending on the section that is derived from the lithological description made on the layers of Injana Formation in the study area, (Figure-2) demonstrates the dominance of clayey sand, sand, and sandy clay sequences, which always represent the groundwater bearing layers. Clay plays the major role of the confining layers (aquitards), while gravel, clayey gravel, gravely clay and sandy gravel layers are mostly dry layers above the groundwater table. It is evident that the groundwater aquifer within the layers of Injana Formation is generally a confined aquifer, except for one case where it could be a semi-confined aquifer due to the existence of aquitards of clay which produce a multi-layer aquifer.
Piezometric levels, flow direction and depth to groundwater
The annual recharge of the groundwater aquifer, its hydraulic characteristics, groundwater exploitation, and natural groundwater flow are the main factors affecting the groundwater piezometric levels. In general, the groundwater table in the Injana aquifers in Salahaddin governorate, based on the data of 2322 wells, is highest in the north. The elevation reaches 200 meters a.s.l. and gradually decreases to the south, reaching 115-150 meters a.s.l. within the boundaries of Baiji area. The decrease continues southward and eastward in the center of the governorate and in the area adjacent to the Hamrin anticline east of Al-Alam area, as well as in the strip adjacent to Al-Tharthar Lake in the southwest of the governorate, where the elevation ranges between 80-115 meters a.s.l. To the south of central Salahaddin governorate, adjacent to Tigris River in Samarra and Balad areas, the levels are at their lowest (40-80 meters a.s.l.). These differences may be due to several reasons, including the topography of the area, recharge, natural discharge or exploitation of groundwater (Figure-3). In the far east of the governorate, in the Doz area, the unconfined aquifer consists of Quaternary deposits and Mukdadiya Formation, while Injana Formation is a confined deep aquifer. The exploitation of groundwater is limited to the upper aquifer and does not reach the formation under study. In the far west of Salahaddin, west and northwest of Al-Siniyyah, Injana Formation is completely absent under the effect of erosion. The main aquifer in that area is represented by Fatha Formation, which is outside the interest of the present study.
The levels of groundwater mainly control the direction of groundwater flow, which moves in Injana aquifer in the north of the governorate to the south where the center of governorate is located. An exception is the belt between Makhul and Tigris River, where the groundwater flows towards the river which represents the final discharge area of the groundwater in the governorate. In the center and south of the governorate, the groundwater in this multi-layer aquifer flows towards both banks of Tigris River which represents the discharge area of the aquifer. Depth of groundwater is a key determinant of the possibilities of exploitation and extraction because it relates to the costs of drilling, wells casing, and pumping equipment, which control the types of wells and drilling techniques.
The data of 165 wells mentioned in (Table-1) were used to map the spatial distribution of the depths to groundwater in Salahaddin governorate. The depth falls in the center of the north of the area until it reaches 3 meters (Figure-4), but it increases toward the east and west of this part of the governorate until it reaches about 20 meters, which is a relatively low depth. This situation is repeated in the extreme central south of the governorate in Balad town, as the depth reaches 3 meters near Tigris and begins to increase to the east and west.
In the east of the area, specifically east and northeast of Al-Alam area, the depths are the largest possible in the studied area (40-80 meters).
In the central and southwestern parts of the governorate, adjacent to the eastern bank of the Tharthar, the depths range between 25-35 meters. The depths of the groundwater lie within the contact between the Quaternary deposits and Injana Formation, which represents the main aquifer. These differences in the depths may be due to several reasons, including the topography of the region, recharge, natural discharge or exploitation of groundwater.
Saturated thickness of Injana aquifer
All the wells in the study area were drilled for exploitation purposes. There are no standard experimental deep wells to be used in the lithological description. Thus, the saturated thickness calculated in these wells, in fact, represents the productive thicknesses that contributes to the supply of groundwater to the wells, which is actually less than the true saturated thicknesses. The thickness of Injana saturated produced beds in each well (produced thickness), was calculated between the ground level and the bottom of the well, after the removal of the impermeable clay layers described in the wells. The total thickness of Injana clay layers varied from a well to another, as in the map below which describes clay layers thicknesses (Figure-5). Accordingly, the produced thicknesses in the wells drilled within Injana aquifer (after the removal of clay layers) ranged between 50-60 meters throughout the studied area. However, these thicknesses increased in the west of Tigris River in the center of the area, reaching 70 meters (Figure-6), while it was reduced at the east of the river to about 40 meters, due to the presence of the unsaturated layers of Quaternary sediments or Mukdadiya Formation, or both, above the layers of Injana Formation.
Transmissivity and Hydraulic Conductivity
Data from experimental tests of 165 boreholes were adopted to calculate the transmissivity according to the formula described by Raghunath [8], in terms of the well productivity (Q) and the drop of the water level (Sw) achieved during the pumping period until the water table stabilized (Table 1).
T = (1.2 × Q/1000) / Sw ………………………………………………….. (1)
where:
Q = well productivity (l/sec)
Sw = the difference between the static water level and the pumping water level (m)
T = transmissivity in m 2 /sec, which was converted later to m 2 /day.
The spatial distribution of transmissivity values (T) of the wells in the study area is presented in Fig. 7, which illustrates low transmissivity within the northern part of the area, towards the north and west of Baiji and to the west of Sharqat, ranging between 2-20 m 2 /day. It then increases within the southwest (west of Tigris River), where it ranges between 20-40 m 2 /day. On the other hand, the transmissivity increases significantly in the southeastern part of the area (east of Tigris River), reaching 40-700 m 2 /day. The hydraulic conductivity can be calculated by dividing the value of T by the saturated bed thickness (b).
K = T / b ………………………………………………………. (2)
where:
T = transmissivity (m 2 /day)
K = hydraulic conductivity (m/day)
b = saturated thickness (m)
Because the drilled boreholes do not penetrate the whole saturated thickness of Injana Formation, the productive thickness, which represents the thickness located between the water table and the bottom of the borehole after excluding the clay layers, was used instead. The hydraulic conductivity was then calculated and its spatial distribution mapped (Figure-8). The spatial distribution of hydraulic conductivity was very similar to that of transmissivity, being low in the west of Sharqat and west to northwest of Baiji (0.03-0.17 m/day), then increasing in the southwest of the governorate, i.e. west of Tigris River (0.17-0.58 m/day). On the other hand, it increased significantly in the east of Tigris River (0.58-6 m/day), despite the fact that the produced thickness (as shown in Fig. 6) in the east is less than that in the west of the river. The reason is the high transmissivity which characterizes the Injana layers in the east of Tigris River, due to the increasing abundance and coarsening of sand within Injana Formation.
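A small worked example of equations (1) and (2) is given below; the numerical inputs are illustrative only, and the dimensionally consistent form of equation (1) used above (Q converted from l/sec to m3/sec and divided by the drawdown) is assumed.

```python
# Worked example of equations (1) and (2); input values are illustrative only.
SECONDS_PER_DAY = 86_400

def transmissivity_m2_per_day(q_l_per_s: float, drawdown_m: float) -> float:
    """Equation (1): T = 1.2*(Q/1000)/Sw in m2/s, converted to m2/day."""
    return 1.2 * (q_l_per_s / 1000.0) / drawdown_m * SECONDS_PER_DAY

def hydraulic_conductivity_m_per_day(t_m2_per_day: float, thickness_m: float) -> float:
    """Equation (2): K = T / b, with b the productive (saturated) thickness."""
    return t_m2_per_day / thickness_m

# A well yielding 5 l/s with 20 m of drawdown and 60 m of productive thickness:
T = transmissivity_m2_per_day(5.0, 20.0)        # about 25.9 m2/day
K = hydraulic_conductivity_m_per_day(T, 60.0)   # about 0.43 m/day
```

Both results fall within the ranges reported for the west bank of the Tigris River in the text above.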
The ranges of hydraulic conductivity are within the limits of siltstone and fine grained sandstone reported previously [12]. These sediments are the main component of Injana aquifer.
Pumping rate and drawdown
The response of drawdown to the pumping rate gives an idea about the hydraulic characteristics, especially the specific capacity of the aquifer [13], if it is usual to install pumping equipment proportional to the capability of the well. From this point of view, an equipment of high pumping rate is installed where the high hydraulic properties exist. However, the drawdown below the static water is proportionally low, while the opposite occurs in the aquifers of low hydraulic properties, where the drawdown is high despite the low pumping rate.
This is exactly what happened in the study area. In its northern part, west of Sharqat as well as west and northwest of Baiji, the hydraulic characteristics, including transmissivity and hydraulic conductivity, are low. Thus, in spite of the low pumping rate (3-4.5 liters/sec) (Figure-9), the total drawdown of the water table was found to be high (Figure-10). This reflects the low hydraulic characteristics and the low specific capacity, and thus the aquifer is considered confined.
In the southwest of the region (west of the Tigris), where the hydraulic characteristics are relatively high, a relative increase in the pumping rate was observed (4.3-6.3 l/sec), accompanied by a relative decrease in the drawdown (16-29 m). In the southeastern part of the governorate (east of the Tigris River), where the Injana aquifer has higher hydraulic characteristics than in the other parts of the study area, the pumping rate was also higher (6.3-9.7 l/sec) and the drawdown below the static water level lower (5-11 m). From this point of view, this last part of the study area is the most promising for groundwater investment, because of the favorable pumping rates and low drawdown, which indicate that depletion will be low.
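The contrast between the regions can be summarized by the specific capacity, taken here simply as the pumping rate divided by the total drawdown. The sketch below uses the ranges quoted above; pairing the extreme values is only meant to bracket the plausible range, not to reproduce values for individual wells.

regions = {
    "southwest (west of Tigris)": {"q_l_per_s": (4.3, 6.3), "drawdown_m": (16.0, 29.0)},
    "southeast (east of Tigris)": {"q_l_per_s": (6.3, 9.7), "drawdown_m": (5.0, 11.0)},
}

for name, r in regions.items():
    q_lo, q_hi = r["q_l_per_s"]
    s_lo, s_hi = r["drawdown_m"]
    # lowest yield with largest drawdown vs. highest yield with smallest drawdown
    sc_min, sc_max = q_lo / s_hi, q_hi / s_lo
    print(name, ": specific capacity", round(sc_min, 2), "-", round(sc_max, 2), "l/s per m")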
Conclusion
The aquifer within the Injana Formation is generally confined, except for some cases where it could be semi-confined owing to the presence of aquitard layers of clay, which give rise to a multi-layer aquifer. The piezometric level in the Injana multi-layer aquifer is highest in the north of the area and gradually decreases to the south and east, in the center of the governorate, adjacent to the Hamrin anticline, and along the strip adjacent to Al-Tharthar Lake and the Tigris River. Groundwater flows within the aquifer from the north of the governorate to the south, while in the center and south of the governorate it flows towards both banks of the Tigris River, which represents the discharge area of the aquifer. The depths to groundwater are relatively low in the central northern part of the governorate and increase towards the east and west of that part. This pattern is repeated in the extreme central south, in Balad near the Tigris River, where depths begin to increase to the east and west. In the east of the area, specifically east and northeast of Al-Alam, the depths are the greatest in the studied area. The spatial distribution of transmissivity is low within the northern part of the area, towards the north and west of Baiji and to the west of Sharqat, then increases within the southwest of the governorate (west of the Tigris River) and increases significantly in the southeastern part of the governorate; the spatial distribution of hydraulic conductivity is similar to that of transmissivity. In the northern part of the study area, west of Sharqat and west and northwest of Baiji, the low pumping rate and high drawdown indicate low storativity and confining conditions, while in the southwest and southeast of the region (west and east of the Tigris) a relative increase in the pumping rate is accompanied by a relative decrease in the drawdown. Thus, the southeastern part of the governorate (east of the Tigris River) is the most promising area for drilling and groundwater investment, with depletion expected to be low.
"year": 2020,
"sha1": "8f21656d21b0ccc51e4e08e0da9a8eb04d3e82d0",
"oa_license": null,
"oa_url": "https://doi.org/10.24996/ijs.2020.61.10.19",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0b1ae485737b28252abd7cdc9bf70462c00f9e25",
"s2fieldsofstudy": [
"Geology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
Systems-level modeling of mycobacterial metabolism for the identification of new (multi-)drug targets
Systems-level metabolic network reconstructions and the derived constraint-based (CB) mathematical models are efficient tools to explore bacterial metabolism. Approximately one-fourth of the Mycobacterium tuberculosis (Mtb) genome contains genes that encode proteins directly involved in its metabolism. These represent potential drug targets that can be systematically probed with CB models through the prediction of genes (or combinations thereof) essential for the pathogen to grow. However, gene essentiality depends on the growth conditions and, so far, no in vitro model precisely mimics the host at the different stages of mycobacterial infection, limiting model predictions. These limitations can be circumvented by combining expression data from in vivo samples with a validated CB model, creating an accurate description of pathogen metabolism in the host. To this end, we present here a thoroughly curated and extended genome-scale CB metabolic model of Mtb, quantitatively validated using 13C measurements. We describe some of the efforts made in integrating CB models and high-throughput data to generate condition-specific models, and we discuss challenges ahead. This knowledge and the framework presented here will enable the identification of potential new drug targets and will foster the development of optimal therapeutic strategies.
The rise of multi-resistant Mycobacterium tuberculosis and the need for new intervention strategies
Mycobacterium tuberculosis (Mtb) is the etiological agent of tuberculosis (TB) and has re-emerged as a serious threat for human health. In 2012, TB claimed the lives of 1.3 million people [1]. The rapid appearance of multi-, extensively and totally drug-resistant strains emphasizes the adaptability of Mtb and has raised concerns about its impact on human health. Furthermore, due to the diverse genetic predisposition of the infected subjects, uncertainties on long-term adverse effects and other safety concerns regarding the rise of drug-resistant strains, the development of new, effective and affordable TB drugs has been slow [2]. New (combined) therapeutic strategies are urgently required to combat these drug-resistant strains [3]. In vitro studies have revealed sets of genes that are essential for growth and survival under laboratory growth conditions [4,5]. Due to the differences between the in vivo and the in vitro environments, this does not automatically imply that these sets of genes are suitable drug targets. Besides, given all cellular components from different types of networks, genes (and their products) that may not be essential on their own can be indispensable in combinations that are not immediately obvious.
A vital improvement would be the expansion of these studies to in vivo or ex vivo models, such as animal models, which would mimic as faithfully as possible the onset and progression of the infection, as well as the strategies against it [6]. An alternative and complementary method to identify suitable drug targets is to use mathematical descriptions of the metabolism of Mtb under in vivo conditions, circumventing the experimental difficulties that arise with in vivo and ex vivo studies.
Approximately one-fourth of the annotated mycobacterial gene pool encodes structural proteins known to be involved in its metabolism, presenting a wealth of enzymes and metabolites as potential drug targets. Stoichiometric genome-scale models of metabolism are essential to identify possible metabolic drug targets, as they provide a holistic view of metabolism. Drug targets, in the form of enzymes encoded by their specific genes, have been identified by gene essentiality predictions based on modeling the in vivo environment [7]. Recent insights have clarified the picture of the metabolites available to Mtb inside the host and shed new light on in vivo gene essentiality predictions [8][9][10][11].
Predictions of gene essentiality can be made using constraint-based (CB) metabolic models by simulating the effect of the total loss of an enzyme function in a metabolic network. This black-and-white scenario, in which a drug is able to completely shut down an enzymatic reaction, is not fully realistic. In most cases, drug effects are subtler, leading to only a partial loss of function [12]. Furthermore, owing to the network structures in which they are embedded, genes may code for proteins that are not essential per se, but which do become so if equally non-essential proteins to which they are connected become dysfunctional or absent. A reliable metabolic network topology, knowledge of the metabolites available in the host, in vivo growth and survival requirements and strategies, and reliable, quantitative predictions of metabolic activity are all important, but have thus far been largely overlooked.
A stoichiometric genome-scale CB metabolic model that is experimentally validated, not only qualitatively for the correct network topology but also quantitatively for predicting fluxes, provides many opportunities to further identify metabolic bottlenecks and weak spots. Instead of using only qualitative, topology-based methods, such models can be explored for new drug targets and novel synergistic drug combinations using more realistic, quantitative approaches. For example, in addition to simulating the effect of a knock-out of given genes or combinations thereof, the effect of a partial loss of function induced by a drug can also be simulated. Simulating the effect of decreasing the function of enzymes that can be targeted with known drugs can highlight alternative metabolic escape routes that become more relevant under these conditions, paving the way to the development of more efficient therapeutic strategies.
Here we present a new genome-scale CB model of Mtb metabolism, sMtb (in silico Mycobacterium tuberculosis), which builds upon three previously published models and which is experimentally validated in great detail. Our model also includes recently discovered or annotated reactions and pathways, has undergone extensive manual curation and outperforms its predecessors in terms of both qualitative and quantitative predictions. We discuss the applications of this model for the identification of possible drug targets, the unraveling of potentially unknown interconnections and the development of future intervention strategies.
Mathematical models of metabolism
There are different types of metabolic models, all of them based on networks of metabolites that are interconnected through enzymatic, spontaneous, or transport reactions. These metabolic networks are reconstructed from literature and annotated genome data.
CB metabolic models are stoichiometric, mass-, charge- and energy-balanced scaffolds that describe steady-state kinetics, whereas dynamic metabolic models are explicitly time-dependent and make it possible to determine the changes in the concentrations of metabolites over time. Thus, dynamic metabolic models enable more accurate descriptions of metabolism, but require many detailed kinetic parameters, such as the rate constants of every enzyme. Such kinetic parameters are often unknown, and obtaining them experimentally is often difficult or impossible. Therefore, for a genome-scale dynamic model, many of these parameters are unavailable and would have to be fitted to the model, which would diminish its predictive power. In addition, simulations with these models are computationally costly, making dynamic models thus far unsuitable to describe metabolism on a genome scale.
Genome-scale CB metabolic modeling provides a holistic view of metabolism and transport. A metabolic network forms the foundation of a CB metabolic model (Fig. 1). The stoichiometry of each reaction is written in a stoichiometric matrix, where negative numbers represent the consumption of metabolites and positive numbers represent their formation. This stoichiometric matrix ensures that the system is in steady state, as no metabolite is allowed to accumulate. Through the application of constraints (hence the name 'constraint-based'), the number of possible metabolic states can be reduced, so as to best predict the actual metabolic state of an organism under given genetic and environmental conditions [13]. Applying too many constraints can result in an infeasible model for which no possible metabolic state can be found. CB metabolic models can be used to predict genes [14] and metabolites that are essential to synthesize precursors for growth [15]. A major advantage of genome-scale CB metabolic models compared to dynamic models is that few parameters are required to describe the entire known metabolism of an organism. On the other hand, CB metabolic models are not easily adapted to describe the dynamics of the system, since they contain a stoichiometric matrix and are thus designed to operate under steady-state conditions where uptake and secretion fluxes are constant and there is no net accumulation of metabolic intermediates, which is only valid if the time scales under consideration are sufficiently different. These metabolic models are based on optimization principles and need one or more optimization objectives to function. Optimization objectives in CB metabolic models can be multiple and describe what the organism 'aims' for. Examples of frequently used metabolic objectives are: maximizing the speed at which an organism grows, maximizing the production of energy-carrying metabolites (such as ATP), and minimizing the overall usage of enzymes [16].
Fig. 1. Constraint-based model creation and functioning. A scaffold metabolic network is constructed from an annotated genome and completed after a rigorous survey of organism-specific databases and literature. This metabolic network represents all the different possibilities for metabolites to travel through the network (metabolic states). After this network has been constructed, a stoichiometric matrix is created that encompasses the stoichiometry of all metabolic reactions under steady-state conditions. Constraints on uptake and/or secretion rates are subsequently set, and the optimization of one or multiple objectives leads to the prediction of a metabolic state.
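As an illustration of how such a model is solved in practice, the sketch below sets up a small toy network (not sMtb; the reaction names are invented) as a linear program with a generic solver: the stoichiometric matrix enforces the steady state, bounds on the fluxes act as the constraints, and the flux through a 'biomass' reaction is maximized.

import numpy as np
from scipy.optimize import linprog

# Toy network: R1 uptake of A, R2 A->B, R3 A->C, R4 C->B, R5 B->biomass.
# Rows are internal metabolites (A, B, C), columns are reactions.
S = np.array([
    [1, -1, -1,  0,  0],   # A
    [0,  1,  0,  1, -1],   # B
    [0,  0,  1, -1,  0],   # C
])

c = np.array([0, 0, 0, 0, -1])   # maximize R5 (linprog minimizes, hence the sign)
bounds = [(0, 10), (0, None), (0, None), (0, None), (0, None)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("maximal biomass flux:", -res.fun)
print("one optimal flux distribution:", res.x)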
Flux predictions
Flux is a commonly used concept in physics where it is defined as the rate of flow of a magnitude or property through a defined area [17]. In the realm of CB metabolic models, this term is used to indicate the rate of conversion of one metabolite to another per unit of biomass (usually given in mmol gdw⁻¹ h⁻¹, where gdw denotes grams of cell dry weight). For transport reactions, there is no metabolite conversion and the term flux refers to the rate of transportation between cellular or sub-cellular compartments. Fluxes can have positive or negative values in CB metabolic models, depending on whether a forward or reverse reaction is predicted. The metabolic state, flux state or flux distribution of an organism is defined as the whole of all fluxes throughout metabolism [18,19]. Constraints can be placed on some of these fluxes (e.g. the uptake and secretion rates) to limit the model. These constraints reflect the limitations of enzymes, transport proteins or nutrients and lead, upon optimization of one or multiple objectives, to meaningful flux distributions.
Objective functions
An important assumption of CB metabolic models is that optimization principles underpin metabolic states. In other words, the model assumes that a cell 'strives to achieve a metabolic objective' [20]. CB metabolic models are underdetermined and can be solved mathematically, which requires the optimization of one or multiple objective functions. Most genome-scale CB metabolic models contain one or multiple biomass functions. A biomass function is an integral part of a CB metabolic model and entails the amount (in mmol) of metabolites that are required to form 1 g dry weight of biomass and as such represents growth of the organism. The amounts of the individual metabolites are usually based on literature about the organism and vary for different reconstructions. Maximization of the flux through the biomass function thus leads to a prediction of the metabolic state when maximal growth is achieved, given a defined set of available nutrients.
Schuetz and colleagues [16] used a model of the central carbon metabolism of Escherichia coli to systematically compare flux distributions, resulting from 11 objective functions, to 13C-determined in vivo flux distributions from six growth conditions. They concluded that no single objective best describes all conditions and the most relevant objective for each condition has to be identified.
Solution space
The solution space of a CB metabolic model (represented as a dashed cube in Fig. 1) is defined as the range in which fluxes can vary while leading to the optimal value of the objective function. An inherent property of CB metabolic models is the fact that, even after optimizing a given objective function, the solution space remains largely undetermined. This region of feasible metabolic flux distributions grows larger with increasing model size and reflects the metabolic flexibility of living organisms. Once the solution space has been defined, Markov chain Monte Carlo sampling [21] or variations thereof [22] can be used to obtain probability distributions for the fluxes and extract descriptors (such as means and standard deviations) for these distributions. Such an approach gives an indication of which fluxes can be accurately determined under a given set of constraints, and which cannot. Moreover, an estimation of the significance of the change of each flux between different conditions can be provided.
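Proper samplers such as (artificial centering) hit-and-run are more involved; as a crude illustration of how the remaining variability can be probed, the sketch below fixes the toy objective of the earlier example at its optimum and re-optimizes random linear objectives, collecting alternative optimal flux distributions and summarizing them with means and standard deviations. This is a stand-in for, not an implementation of, the Markov chain Monte Carlo approaches cited above.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

S = np.array([[1, -1, -1, 0, 0],
              [0,  1,  0, 1, -1],
              [0,  0,  1, -1, 0]])
bounds = [(0, 10), (0, None), (0, None), (0, None), (0, None)]
c_obj = np.array([0, 0, 0, 0, -1.0])

# 1) Optimal value of the original objective (maximal biomass flux).
opt = linprog(c_obj, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
v_opt = -opt.fun

# 2) Fix the biomass flux at its optimum and explore the remaining freedom.
A_eq = np.vstack([S, -c_obj])
b_eq = np.append(np.zeros(3), v_opt)
samples = []
for _ in range(200):
    res = linprog(rng.normal(size=5), A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if res.success:
        samples.append(res.x)
samples = np.array(samples)
print("flux means:", samples.mean(axis=0).round(2))
print("flux standard deviations:", samples.std(axis=0).round(2))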
Predictions of specific growth rates
CB metabolic models can quantitatively predict specific growth rates, or growth yields. Therefore, a comparison between predicted and experimentally determined values, such as the specific growth rate, provides the means to test the accuracy of the model. Constraints are set on the set of experimentally measured uptake and/or secretion rates, while the uptake of other available metabolites (if any) is left unconstrained. Subsequently, the biomass function is set as the objective to maximize, which results in a predicted maximal specific growth rate.
These quantitative validations are limited since only one predicted 'flux' value, the specific growth rate, is compared to experimental data. Due to the inherent uncertainty provided by the size of the solution space, not all metabolic fluxes can be predicted with equal accuracy. However, many of these fluxes can still be predicted within a narrow range. Comparing multiple predicted fluxes to experimentally measured or experimentally inferred fluxes provides a much more solid and quantitative validation of CB metabolic models.
The importance of updating models
An example that illustrates the importance of updating CB metabolic models is the conversion of fructose-6-phosphate to fructose-1,6-bisphosphate catalyzed by PfkA and/or PfkB in Mtb. In different models, the enzymes and their interaction in catalyzing this ATP-driven reaction are annotated differently. In one Mtb model this reaction can only occur if PfkA and PfkB are both present, while in another it can occur if either PfkA or PfkB is present [23,24]. However, Phong and colleagues [25] showed that only PfkA catalyzes the conversion of fructose-6-phosphate to fructose-1,6-bisphosphate, whereas PfkB does not. Thus, clearly both models should be updated. This example shows that it is important not only to create consistent models, but also to update them continuously. CB metabolic models organize and integrate the knowledge on metabolism and transport into a well-defined network. Therefore, CB metabolic models make it possible to systematically explore the metabolic capacities of organisms under a broad range of conditions and to assess the effect of perturbations (genetic or environmental) on the underlying metabolic network. On this basis, such analyses enable the generation of experimentally testable hypotheses and predictions over a range of conditions, and provide invaluable insights that can only be obtained from a systems perspective.
Merging of metabolic models
Two or more independently created CB metabolic models of the same organism will likely contain many common reactions and metabolic pathways. Owing to the specific emphasis and expertise of the model builders, it is also likely that both models would describe different parts of metabolism or the same pathways with different detail level. To preserve the knowledge in these models, a logical step is combining them into one comprehensive or consensus model. Merging two or more CB metabolic models describing the same organism might seem, at first sight, a straightforward task. Nevertheless, it can prove quite time consuming and full of unexpected challenges, such as those associated with the so-called namespace problem, derived from using different names for the metabolites [26]. This complicates the automatic identification of compounds common to both models. This implies that manual curation is still required to identify similar reactions and remove discrepancies.
Topological and elemental balancing inconsistencies
In a CB metabolic model all reactions must be stoichiometrically balanced so that there is no net internal production of any metabolite. Software tools, such as the COBRA Toolbox [27] include functionalities to inspect the model and detect unbalanced reactions. These tools require all metabolites in the model to be annotated with their chemical formula, which is the case for iNJ661 but not for GSMN-TB 1.1. Moreover, GSMN-TB 1.1 does not explicitly contain water or protons (apart from transport reactions and respiration), making it impossible to verify whether the reactions are elementally balanced.
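A minimal sketch of such an elemental-balance check is given below; it assumes simple formulas without brackets or explicit charges (real checkers, such as those in the COBRA Toolbox, also track charges and protons), and the reaction shown is a generic hexokinase-like example rather than one taken from any of the models discussed.

import re
from collections import Counter

def parse_formula(formula):
    # Tiny parser for formulas such as 'C6H12O6' (no brackets or charges).
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(num) if num else 1
    return counts

def is_balanced(stoichiometry, formulas, tol=1e-9):
    # Sum element counts weighted by stoichiometric coefficients; all sums must be zero.
    totals = Counter()
    for met, coeff in stoichiometry.items():
        for element, n in parse_formula(formulas[met]).items():
            totals[element] += coeff * n
    return all(abs(v) < tol for v in totals.values())

# Neutral (fully protonated) formulas are used here for simplicity.
formulas = {"glc": "C6H12O6", "atp": "C10H16N5O13P3",
            "g6p": "C6H13O9P", "adp": "C10H15N5O10P2"}
reaction = {"glc": -1, "atp": -1, "g6p": 1, "adp": 1}
print(is_balanced(reaction, formulas))   # True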
Futile cycles are metabolic routes with no net gain. The existence of futile cycles in a metabolic network expands the solution space and complicates flux predictions. In some cases, these cycles are inherent to the biology of the studied organism. However, they can also appear as a result of an overlooked doubling of reactions, or by wrongly assigned reaction directionality. These types of futile cycles are harmless from the model point of view, as long as they do not lead to net production or degradation of metabolites. Otherwise, they render the model unbalanced and model predictions can become unreliable. Fig. 2 shows the timeline of the successive CB models of Mtb metabolism that have been reconstructed since 2005. The very first CB metabolic model described the synthesis of triacylglycerol from glucose in human adipose tissue in 1986 [28]. Nearly two decades later, in 2005, the first CB metabolic model of Mtb appeared [14]. This model (MAP) was a detailed description of the mycolic acid synthesis pathway. Mycolic acids are long chain fatty acids that are unique to mycobacteria and essential for their survival [29]. In 2007, two genome-scale CB metabolic models of Mtb were independently published. Even though both models, GSMN-TB (Genome Scale Metabolic Network Tuberculosis) [30] and iNJ661 (in silico Neema Jamshidi, 661 genes) [23], describe the same organism, there are a number of substantial differences between them. GSMN-TB is arguably more complete than iNJ661, as it contains more genes (726 as compared to 661) and it also accounts for the methylcitrate cycle, which is critical for intracellular growth of Mtb [31]. iNJ661 has a more detailed annotation containing chemical formulas for each metabolite (except for some groups of metabolites that are lumped together and protein-metabolite complexes) and it is topologically more consistent, since it contains no duplicated reactions or metabolites. In 2009, Colijn and colleagues [32] metabolically interpreted gene expression data to predict the impact of 75 different drugs, combinations of drugs and media compositions on the mycolic acid synthesis capacity of Mtb. The mycolic acid synthesis pathway is described with greater detail within MAP than in GSMN-TB. Therefore, all mycolic acid reactions in GSMN-TB were replaced with the mycolic acid reactions from MAP creating a more comprehensive model (indicated by MMF-RmwBo in Fig. 2). In the beginning of 2010, Fang and colleagues [33] used a semi-automatic method to create a model more compatible with in vivo conditions, iNJ661v, which optimally reproduced in vivo gene essentiality measurements. For completeness, this model was supplemented with reactions and metabolites from GSMN-TB and with the methylcitrate cycle. In the same year, Bordbar and colleagues [7] created the first macrophage-Mtb combined model. This dual model combined iNJ661 with a cell-specific alveolar macrophage model derived from the first human metabolic reconstruction [34]. High-throughput host gene expression data from ex vivo infected macrophages were integrated in the model to distinguish three different forms of tuberculosis: latent, pulmonary and meningeal. In 2011 Chindelevitch and colleagues developed MetaMerge, an algorithm to combine two CB metabolic models, and used it to merge iNJ661 and GSMN-TB [35]. 
The joining of both models by MetaMerge is an automated process; therefore, manual curation is still required to select the correct reactions from highly similar reactions derived from both models and to identify metabolites that could not automatically be assigned to a database identifier, or whose chemical formula could not be determined. In 2013, an improved and extended version of GSMN-TB, GSMN-TB 1.1, appeared [24]. GSMN-TB 1.1 contains the cholesterol degradation pathway and additional corrections to the original GSMN-TB model.
Biomass functions for in vitro Mtb
iNJ661 and GSMN-TB 1.1 are reconstructed independently and therefore not only differ in network topology, but also differ in the biomass functions. The chemical formulas of all biomass precursors in a CB metabolic model, multiplied with their stoichiometric coefficients, should add up to 1 g dry weight of biomass. This is the case for the biomass functions of iNJ661 and sMtb and the contribution of each subgroup of metabolites to the total biomass can be calculated (Table 1). However, the weight percentage of nucleic acids in iNJ661 seems to be up to five-fold higher than those used in GSMN-TB 1.1. This difference can be attributed to differences in two studies reporting on nucleic acid dry weight percentages [36,37]. Unfortunately, there are no chemical formulas provided in GSMN-TB 1.1, which complicates the identification of the exact nature of some compounds, such as 'DIM' (dimycocerosate) or 'PIMS' (phosphatidyl myo-inositol mannosides), and such a classification of the relative contribution of subgroups of metabolites to biomass cannot be obtained. Biomass functions are often used to validate CB metabolic models by comparing predicted specific growth rates with experimentally obtained specific growth rates. iNJ661 was validated in such a way, for growth on three media differing in carbon and nitrogen sources [23]. Similarly GSMN-TB, the predecessor of GSMN-TB 1.1 was validated using experimentally measured specific growth rates for various measured glycerol uptake rates [30].
For sMtb, the biomass composition is based on the average composition measured at two different growth rates [36], although adaptations would be required for those conditions where experimental evidence shows altered compositions. Objective functions for dormant mycobacteria are most likely very different from those of actively replicating mycobacteria.
Biomass functions for in vivo Mtb
Biomass composition of Mtb is not constant over different conditions. For instance it is known that Mtb accumulates triacylglycerol under in vitro conditions that produce a state which mimics the dormant state in the host [38], and that the synthesis of a specific class of iron chelating molecules, called mycobactin siderophores, is required for iron acquisition [39]. These adaptations effectively change the biomass composition. Moreover, in vivo Mtb is under constant stress caused by the host immune system, in particular oxidative stress by reactive oxygen and nitrogen species produced by the host [40]. The damaging effects of these reactive species must be compensated, again changing the growth requirements, which should be reflected in the optimization objective(s) when in vivo metabolic states are simulated.
A consensus metabolic model of Mtb (sMtb)
The mere existence of eight different genome-scale metabolic models of Mtb, most of which are extensions of previous ones, reflects the importance of keeping CB metabolic models up to date. Two major, independently created CB models of Mtb metabolism have thus far not been merged and manually curated. These two models, GSMN-TB 1.1 and iNJ661, differ in size and cover partly overlapping parts of Mtb metabolism. Metabolites are annotated differently in the two models. For each metabolite, iNJ661 contains an abbreviation, a full name, a chemical formula and a charge, whereas GSMN-TB 1.1 only contains abbreviations and full names. The two models use different abbreviations, and few metabolite names appear the same in both. Neither model contains references to persistent chemical databases, such as ChEBI [41], PubChem [42] or KEGG [43], or database-independent identifiers, such as SMILES [44]. There are large parts of metabolism covered by GSMN-TB 1.1 that are not covered by iNJ661 and vice versa. In addition, the mycolic acid synthesis pathway is described in more detail by model MAP than by either iNJ661 or GSMN-TB 1.1. Therefore, we have constructed sMtb, a manually curated merged model of MAP, iNJ661 and GSMN-TB 1.1 that is currently the most comprehensive genome-scale metabolic model of Mtb. sMtb is provided in the supplementary material in SBML formats level 2 and 3 and as a spreadsheet. Unlike previously published CB metabolic Mtb models, sMtb contains chemical formulas and references to KEGG, PubChem, ChEBI and SMILES for all metabolites. These references permit automated reasoning and allow all reactions to be elementally balanced. The metabolic network of sMtb contains 1192 reactions, 915 genes, and 929 metabolites. It includes a number of important extensions to previous models, such as the mycolic acid synthesis [29], dimycocerosate ester biosynthesis [45] and cholesterol degradation [8] pathways, which have been updated according to the latest insights. In sMtb, 84% of the reactions are associated with the corresponding genes, whereas in GSMN-TB 1.1 and iNJ661 these percentages are only 75% and 77%, respectively. A high percentage of gene-associated reactions in a CB metabolic model is a signature of a reliable network topology. However, it is not a guarantee: the gene essentiality predictions of GSMN-TB 1.1 are better than those of iNJ661 (Table 2), but this does not necessarily mean that the network topology of GSMN-TB 1.1 is better than that of iNJ661; it could also be due to the more accurate biomass objective of GSMN-TB 1.1, which is designed to describe in vitro growth.
Prediction of gene essentiality
Gene essentiality predictions depend, among other factors, on the available nutrients, the topology of the metabolic network, the quality of the annotation and the chosen objective function. These predictions are suitable for testing the topology of a metabolic network; however, they are by no means a quantitative validation of flux distribution predictions. Genes are deleted from the model one at a time, and all the reactions that depend on the enzyme encoded by the gene are constrained to carry no flux. If the value of the objective function (often maximization of biomass production) is significantly or totally reduced by these constraints, the gene is predicted to be essential. These predictions are thus condition-specific and differ between the various models. We have used iNJ661, GSMN-TB 1.1 and sMtb to predict genes whose in silico deletion would result in a decrease of the specific growth rate by 95% or more (see supplementary methods). Those genes were considered essential and compared to an in vitro gene essentiality dataset generated via deep sequencing [5]. As can be seen in Table 2, sMtb performs best in predicting in vitro gene essentiality, with an accuracy of 80% as compared to 75% for GSMN-TB 1.1 and 64% for iNJ661.
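A sketch of this single-gene deletion scan is shown below. It assumes the COBRApy library (the Python counterpart of the COBRA Toolbox mentioned earlier) and an SBML export of the model; the file name is hypothetical and the 95% threshold follows the criterion described above.

import cobra

model = cobra.io.read_sbml_model("sMtb.xml")           # hypothetical file name
wild_type_growth = model.slim_optimize()

essential = []
for gene in model.genes:
    with model:                                        # changes are reverted on exiting the block
        gene.knock_out()                               # reactions requiring this gene carry no flux
        growth = model.slim_optimize(error_value=0.0)  # 0 if the model becomes infeasible
    if growth < 0.05 * wild_type_growth:               # >= 95% growth reduction
        essential.append(gene.id)

print(len(essential), "of", len(model.genes), "genes predicted essential")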
However, as the chosen threshold changes, so do the sensitivity (also called the true positive rate) and the false positive rate (1 - specificity). The relationship between the false positive rate and the true positive rate for the gene essentiality predictions of the various models at different threshold values is given by a Receiver Operating Characteristic (ROC) curve (Supplementary Fig. 2). The corresponding Area Under the Curve (AUC) represents the chance that a randomly chosen experimentally observed essential gene is predicted as such, and is commonly used for model comparison. For iNJ661, GSMN-TB 1.1 and sMtb this chance equals 0.65, 0.78 and 0.80, respectively. In all three cases the p-values (all lower than 10⁻⁵) associated with the AUC show that these areas are significantly different from 0.5, which would correspond to a random prediction.
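For readers who want to reproduce such a curve, the sketch below builds a ROC curve and its AUC from scratch with NumPy, using invented scores (the predicted fractional growth reduction per gene) and invented essentiality labels; it assumes distinct scores, so ties are not handled.

import numpy as np

def roc_points(scores, labels):
    # Sort genes by decreasing predicted growth reduction and accumulate
    # true and false positive rates as the threshold is lowered.
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

scores = [0.99, 0.97, 0.40, 0.96, 0.10, 0.05]   # invented predictions
labels = [1, 0, 1, 0, 0, 0]                     # invented in vitro essentiality
fpr, tpr = roc_points(scores, labels)
print("AUC =", np.trapz(tpr, fpr))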
Central carbon metabolic flux predictions compared to 13C data
To validate CB metabolic models, ideally the predicted metabolic states would be compared to measured metabolic states. Although metabolic states cannot be measured directly, they can be inferred from isotopic labeling experiments. Flux distributions obtained from Mtb CB genome-scale metabolic models have thus far not been compared to in vitro 13C-inferred fluxes, as has been done for other organisms, such as E. coli [16,46].
We compared the ability to correctly predict metabolic flux distributions for the three CB metabolic models: iNJ661, GSMN-TB 1.1 and sMtb. In vitro results for Mtb and the attenuated TB vaccine strain Mycobacterium bovis Bacillus Calmette-Guérin (BCG) were obtained from Beste et al. [47]. BCG has a high degree of genome identity to Mtb and is therefore often used as an Mtb surrogate [48][49][50]. The three CB metabolic models GSMN-TB 1.1, iNJ661 and sMtb all contain biomass functions that are based on both BCG and Mtb biomass composition. Therefore, metabolic fluxes from both Mtb and BCG are used. Beste and colleagues measured the specific glycerol consumption rate, the specific Tween 80 consumption rate and the specific CO 2 production rate at two different dilution rates: 0.01 h −1 and 0.03 h −1 for BCG and 0.01 h −1 for Mtb [47]. These experiments were done in a chemostat, therefore the dilution rate equals the specific growth rate. Tween 80 is a fatty acid ester of sorbitan polyethoxylate. Mycobacteria have phospholipase A activity that release fatty acids from Tween [51]. In the case of Tween 80, oleic acid is released. Therefore, the specific consumption rate of Tween 80 can be simulated as the specific consumption rate of oleic acid (for more details see supplementary methods).
Non-growth associated maintenance is expressed as a conversion of ATP to ADP and quantifies the energy required by Mtb to maintain itself in a given environment. All models gave the best specific growth rate prediction when the non-growth associated maintenance was set to 0 mmol gdw −1 h −1 ( Supplementary Fig. 1). However, a small amount of energy for maintenance is always required to sustain an organism in its environment, therefore a small arbitrary maintenance flux of 0.1 mmol gdw −1 h −1 was included in each model before predicting the optimal specific growth rate to compare with the measured values ( Table 3).
As can be seen in Fig. 3, predicted fluxes and 13 C inferred in vitro fluxes in general do not completely agree. The different pathways in central carbon metabolism are separated in Fig. 4 and the predictions of the different models are given. Metabolic pathway representations of the metabolic state predictions are given in Supplementary Figs. 3-5. All models predict a low flux through the pentose phosphate pathway, even though 13 C inferred fluxes show otherwise for BCG at a specific growth rate of 0.03 h −1 , but show completely different behaviors for the tricarboxylic acid cycle and the glyoxylate shunt (Fig. 4). The discrepancies between 13 C inferred fluxes and the flux predictions by the various models show that the predictions of the models become worse as the distance (i.e. the number of reactions) from the glycerol entry point, where glycerol is converted to glycerol-3-phosphate, increases. The predictions for pathways such as the TCA cycle and glyoxylate shunt are worse than those for glycolysis and glycerol uptake, because they are further 'downstream' of the glycerol entry point in the models and thus more options exist for the flux to be rerouted toward alternative parts of the metabolic network that are not shown in the network depicted in Fig. 3. Model sMtb does relatively well at flux predictions for glycolysis and the TCA cycle. In contrast to iNJ661 and GSMN-TB 1.1, it is the only model that predicts a flux from pyruvate to acetyl-CoA for BCG at a specific growth rate of 0.03 h −1 and Mtb at a specific growth rate of 0.01 h −1 . The standard deviations for most predicted fluxes are relatively small (given by error bars in Fig. 4), implying that the predictions are precise but not accurate. This could be partly due to the applied sampling method to determine means and standard deviations (see Supplementary methods), but it could also be caused by a bimodal distribution of flux solutions instead of a normal distribution, which would limit the usefulness of concepts such as means and standard deviations. Another point to consider regarding flux predictions is that although the flux predictions of all three models can be improved, 13 C fluxes are also inferred from a model, using measured metabolites, which makes it more complicated to point out whether the predicted fluxes, inferred fluxes, or both can be improved.
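The degree of agreement between predicted and 13C-inferred fluxes can be quantified in several ways; the sketch below computes two simple measures (root-mean-square deviation and Pearson correlation) on invented flux values, and is not the metric used to compile Table 4.

import numpy as np

def flux_agreement(predicted, inferred):
    predicted, inferred = np.asarray(predicted, float), np.asarray(inferred, float)
    rmsd = float(np.sqrt(np.mean((predicted - inferred) ** 2)))
    pearson = float(np.corrcoef(predicted, inferred)[0, 1])
    return rmsd, pearson

# Invented fluxes (mmol gdw^-1 h^-1) for a handful of central-carbon reactions.
predicted = [0.80, 0.60, 0.10, 0.05, 0.30]
inferred = [0.90, 0.50, 0.30, 0.10, 0.25]
print(flux_agreement(predicted, inferred))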
Nevertheless, sMtb shows the highest agreement between inferred and predicted fluxes, closely followed by iNJ661 (Table 4). Because sMtb reflects cellular behavior under in vitro conditions more accurately than iNJ661 and GSMN-TB 1.1, its predictions of cellular behavior under in vivo conditions can be trusted with greater confidence. Therefore, sMtb provides a more accurate platform for drug target discovery than was available before.
Drug-phenotype predictions
We tested the three models on their ability to assess the effectiveness of anti-TB drugs with known metabolic targets. Table 5 provides an overview of the predicted phenotypes after drug application by inactivating the specific enzyme and the corresponding reaction(s) in silico (see supplementary methods). sMtb predicts the highest number of non-viable phenotypes caused by anti-TB drugs, closely followed by iNJ661 and GSMN-TB 1.1. Nevertheless, these predictions are based on growth on Roison's minimal medium [47], which does not represent in vivo conditions. Moreover, in vitro biomass functions are used for both GSMN-TB 1.1 and sMtb. Setting the models such that they simulate in vivo conditions would alter these drug-phenotype predictions. However, this is complicated by the fact that iNJ661 does not contain a cholesterol degradation pathway, which has been shown to be important for intracellular growth and survival [8,52-56]. Mtb infection is a complex interplay between the pathogen and its host that involves cellular changes in both organisms [57]. Therefore, modeling both host and pathogen metabolism simultaneously is required for an accurate representation of infection. While CB metabolic models unfortunately cannot directly predict which molecules are effective drugs, they can predict which metabolic enzymes make for suitable drug targets. Whether or not such enzymes can be effectively inhibited depends on the characteristics of the enzyme itself. Databases such as TuberQ [58] can provide a druggability analysis for an enzyme predicted to be a suitable drug target, thereby verifying whether the enzyme can effectively be targeted. An approach to select suitable drug targets will be more effective if essentiality analysis is combined with additional systems-level information, such as information on the accumulation of stable toxic intermediates. For example, the cholesterol degradation pathway in Mtb [8] contains a large number of enzymes, many of them essential for cholesterol degradation and thus possible drug targets. However, stable toxic intermediates such as cholest-4-en-3-one and catechol derivatives accumulate if the enzymes HsaC, KshA, Cyp125 and Cyp142 are non-functional [59,60]. The accumulation of such intermediates can be fatal to Mtb, increasing the potential of these enzymes as drug targets. A similar approach can
Table 5. Drugs with known metabolic targets [6,83-85] and the percentage of the specific growth rates obtained after in silico gene knockouts of these targets.
Perhaps one of the biggest advantages of using CB metabolic models to find drug targets is that it enables the prediction of metabolic rearrangements after constraining the flux through reactions that are known to be affected by a given drug. This can highlight the possible 'escape routes' that Mtb possesses. Bhat and colleagues [12] used such an approach which is further discussed in Section 4.7.
sMtb overall performance
Model iNJ661 predicts metabolic states relatively well as compared to GSMN-TB 1.1 (Table 4, Figs. 3 and 4), but on the other hand the gene essentiality predictions of GSMN-TB 1.1 are better (accuracy of 75%) than those of iNJ661 (accuracy of 64%). The consensus genome-scale CB metabolic model sMtb is the most comprehensive, manually curated genome-scale CB model of Mtb to date. It combines the strengths of iNJ661 and GSMN-TB 1.1 and not only gives accurate qualitative predictions, such as gene essentiality predictions (Table 2) and drug-phenotype predictions (Table 5), but also accurate quantitative predictions, such as the specific growth rate (Table 3) and the metabolic states (Fig. 4; Supplementary Figs. 3-5). The overall improved performance of sMtb is essential for obtaining meaningful and accurate predictions of the metabolic state in conditions that are experimentally inaccessible. Moreover, the improved annotation of sMtb regarding its metabolites is a critical point, as it enables future refinements and extensions by other researchers with relative ease.
However, even though sMtb performs better in overall predictions of in vitro metabolic states, there is room for improvement, especially regarding the metabolic state predictions of the pentose phosphate pathway and the glyoxylate shunt. Options to achieve these better predictions would be to supply a more accurate objective, or to improve the underlying metabolic network of sMtb.
Understanding Mtb metabolism and designing intervention strategies: challenges and outlook
In an attempt to mimic metabolic states of Mtb in various environments more accurately, CB metabolic models can be constrained with various types of -omics data. Unlike flux measurements, gene expression data can be relatively straightforwardly obtained using RNA sequencing or microarray technologies. CB metabolic models can also act as scaffolds for other types of -omics data, such as proteomics. These data types have the added advantage of being (almost) genome-scale and can be integrated into CB metabolic models, creating condition-specific models with increased predictive power. Such condition-specific models are important for providing reliable metabolic state predictions under in vivo conditions where uptake rates and metabolic objectives are unclear, with the ultimate goal of designing novel intervention strategies.
Integration of expression data
Alternative methods have been developed to integrate either gene or protein expression data into CB models, see [20,[62][63][64] for recent reviews. A systematic evaluation of these methods, comparing performance and robustness using alternative models and data sets [65] shows that no method outperforms the others in all the tested scenarios. Here, we will focus on the methods that have been applied to explore mycobacterial metabolism. E-Flux [32] constrains the maximum flux through a reaction using the measured gene expression levels. Whenever the expression level of an enzyme-coding gene is low, tight constraints are imposed on the maximal flux through the corresponding reaction. The rationale is that mRNA levels can be used as an approximation to the amounts of protein available, and these in turn can be used as an approximation to the upper bound on reactions rates. This algorithm was tested using two models, MAP and MMF-RmwBo. The Boshoff Mtb gene expression compendium [66] contains over 400 microarray experiments measuring the transcriptional adaptations of Mtb to 75 different drugs, drug combinations and growth conditions. E-Flux was used to predict the impact of each of these conditions and drugs on the biosynthesis of mycolic acids. This approach correctly predicted the specificity of seven of the eight known inhibitors of mycolic acid biosynthesis included in the data compendium. Additionally, it was also able to identify a small number of non-specific potential inhibitors and enhancers of mycolic acid biosynthesis.
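A schematic sketch of the E-Flux idea is given below; it is not the published implementation. In particular, the mapping from genes to reactions is simplified to a flat list per reaction (the published method evaluates the full gene-protein-reaction rules), and all gene and reaction names are hypothetical.

def e_flux_bounds(reaction_genes, expression, default_bound=1000.0):
    # Cap each reaction's maximal flux in proportion to the expression of its gene(s).
    max_expr = max(expression.values())
    bounds = {}
    for rxn, genes in reaction_genes.items():
        if not genes:
            bounds[rxn] = default_bound   # no gene annotation: leave unconstrained
        else:
            level = sum(expression.get(g, 0.0) for g in genes)   # crude isozyme handling
            bounds[rxn] = default_bound * min(1.0, level / max_expr)
    return bounds

reaction_genes = {"R_pfkA": ["pfkA"], "R_icl": ["icl1", "icl2"], "R_spontaneous": []}
expression = {"pfkA": 200.0, "icl1": 30.0, "icl2": 10.0}
print(e_flux_bounds(reaction_genes, expression))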
While E-Flux uses transcript data to improve the predictions of metabolic fluxes, Fang and colleagues [67] proposed an in silico approach to create state-specific models by integrating gene expression data. Their method relies on comparing gene expression levels between a metabolically well-characterized reference state and the perturbed state of interest. This method uses the flux distribution in the reference state and imposes soft constraints on the fluxes according to the observed changes in gene expression to characterize the perturbed metabolic state. Changes in gene expression data for wild-type Mtb H37Rv, as well as for the dosR deletion mutant, associated with the transfer from normoxic to hypoxic conditions were combined with iNJ661v to produce condition-specific models for both strains. These models correctly predicted the essentiality of dosR for the adaptation to hypoxia. Additionally, the model also predicted the altered biomass composition of Mtb in hypoxic conditions (linked to the increased production of cell-wall metabolites) and the critical contribution of the reductive side of the tricarboxylic acid cycle to the adaptation to low oxygen environments. The condition-specific models can also serve to specifically identify drug targets for the latent stages of the disease. The algorithms described so far provide as primary output models of metabolism with altered constraints that can be used to further characterize the metabolic responses. Differential Producibility Analysis (DPA) [68], on the other hand, aims at extracting metabolic signals from expression data. DPA uses the model to identify genes affecting the production of each metabolite in the network; expression data are then used to obtain an average expression value for each set of metabolite-associated genes. These values are then used to identify the metabolites associated with increased and decreased gene expression. DPA was used to analyze the metabolic state of Mtb in vivo (with expression data obtained from sputum samples of TB patients and from pathogens replicating in mouse macrophages) [69,70] and in various in vitro conditions (such as growth on different carbon sources or exposure to different stress sources) [66,71]. The analysis showed that one of the main adaptations to the macrophage environment is the downregulation of genes influencing metabolites in central metabolism, and the simultaneous upregulation of genes linked to cell-wall synthesis.
Integration of regulatory information
Probabilistic regulation of metabolism (PROM) [19] is an algorithm that attempts to link regulatory and metabolic networks. The transcriptional regulatory network of Mtb [72] and the Boshoff Mtb compendium [66] were used to build a probabilistic model of gene regulation. The probabilities were then integrated into the iNJ661 model as constraints on reactions of which the flux could vary according to the state of the transcription factor regulating the expression of the enzyme-coding gene. PROM correctly predicted the phenotype of 23 out of the 24 studied transcription factor knock out mutants. The increased knowledge on the regulatory networks in Mtb [73] opens new ways to consider not only genes primarily related to metabolism but also to their regulators, thereby increasing the potential to discover new drug targets.
Growth related ATP coefficients and non-growth associated maintenance
The biomass reaction describes the assembly of biomass precursors into new cells. Each biomass precursor has a defined coefficient denoting the amount (in mmol) required to form 1 g dry weight of biomass. The assimilation of these precursors requires energy, in the form of ATP to ADP conversion that is introduced through a growth related ATP coefficient in the biomass function (also called growth associated maintenance). This coefficient is very similar for iNJ661, GSMN-TB 1.1 and sMtb ( Table 3). The growth related ATP coefficient of iNJ661 equals 60 mmol gdw −1 and that of GSMN-TB equals 47 mmol gdw −1 plus an additional 8.8 mmol gdw −1 associated with protein formation. Both models thus have a similar value for growth-associated maintenance. Unlike the growth related ATP coefficient, non-growth associated maintenance is independent of the biomass composition. Instead, it depends on the environment and on the metabolic pathways utilized for growth [74]. It is assumed that non-growth associated maintenance, in the form of ATP to ADP conversion, is a fixed value independent of the specific growth rate. Here, we have set the non-growth associated maintenance to a small value so that the three models give the best predictions of the specific growth rate (see Supplementary Fig. 1).
Non-growth associated maintenance is a useful parameter when trying to simulate in vivo, e.g. phagosomal, growth. The phagosome is a hostile environment and the energy required for non-growth associated maintenance will be relatively high, compared to in vitro growth conditions. Moreover, the specific growth rate will be limited in the phagosome. A high non-growth associated maintenance requirement and a low specific growth rate cannot be simulated effectively using a model that contains a regular biomass function, which includes a growth related ATP coefficient, but no non-growth associated maintenance cost.
Objective and constraints for Mycobacterium tuberculosis in the host
When using CB genome-scale metabolic models of Mtb as opposed to non-pathogenic microorganisms grown in an in vitro condition, it is not straightforward to select an optimization objective. The primary objective of the pathogen might be focused on survival instead of growth. In addition, the host-pathogen interaction is a complex and time-dependent dynamic process, where they mutually influence each other. Hence, CB metabolic models, which rely on the steady state assumption, might not be realistic for many pathogens. Mtb is known for its ability to remain dormant in the host for years. In those cases, the host's immune system prevents the pathogen from spreading and Mtb is contained within solid granulomas [2]. It is estimated that 2 billion people worldwide are latently infected [1]. The relative metabolic activity at the latent infection stage however, is very low. There is thus a stark need to understand the mechanisms underlying dormancy and predict its dynamics and the switch to active state. Modeling accurately and realistically this infection stage is hence of utmost importance. A key factor determining the accuracy of CB metabolic models in an infection setting is the identification of a suitable objective function representing dormant Mtb. Shi and colleagues created an objective function representing non-growing cells, based on the minimal cell wall composition deduced from gene expression data [75]. They compared predicted flux changes between growing and nongrowing cells with qPCR data and found consistency between fluxes and gene expression for critical pathways of central metabolism. A limitation of this approach is that the metabolic model is based on transcript abundance data [75]. A leap forward would be to investigate the biomass composition of Mtb in an in vivo or ex vivo situation.
Knowledge (or the lack thereof) of the availability of nutrients in the host environment is another factor that determines the quality of the model predictions. Bordbar and colleagues constructed a macrophage-Mtb model, iAB-AMØ-1410-Mt-661, in which they estimated that the carbon sources available in the phagosome were glycerol and even long-chain fatty acids (myristic acid, palmitic acid and stearic acid) [7]. Recent insights have changed this picture and highlighted the importance of cholesterol [8], aspartate [76], and other nutrients [9] in the phagosomal environment. Knowing the precise composition and availability of such nutrients will make it possible to generate much more accurate predictions of the in vivo metabolic state and of the in vivo essentiality of gene products.
Annotation of transport proteins
Little is known about transport proteins of Mtb despite the abundance of genomic data [77]. Transport proteins are at the boundaries of the metabolic networks and therefore function as gatekeepers for fluxes. Not only is it important to know which compounds Mtb can take up, but it is also important to know whether these transporters are channels, symporters or antiporters. In addition, quantitative predictions also require the knowledge of the energy requirements of the transporters. A better annotation of transport proteins of Mtb is therefore required.
Cofactor limitation
Beste and colleagues mimicked the cofactor requirements of the enzymes by forcing the reactions catalyzed by these enzymes to use a small arbitrary amount of cofactor [30]. Quantitative predictions are most likely not accurate due to the arbitrarily chosen amount of cofactor used in any reaction. Nevertheless, such an approach could be extended to simulate cofactor limitations. Iron availability is assumed to be reduced in the phagosome [69], thus introducing ways to mimic this iron scarcity in the CB models, will lead to more accurate descriptions of the bacterial metabolism during the infection process.
Discovering new drug targets and combinations of drugs
Fang and colleagues integrated a dynamic cell population growth model and an enzyme inhibition model with a modified version of iNJ661 [78]. The integrated model was able to reproduce in vitro experimentally measured dose-response curves of 3-nitropropionate, an inhibitor of the glyoxylate shunt and the methylcitrate cycle.
Simulating single or double gene knock-out mutants to discover potential drug targets and synergistic combinations greatly depends on the network topology, the objective function, and the substrate(s) available to the bacteria. The difference in specific growth rate predictions between the wild type and simulated single or double knock-out mutants is mainly attributable to the rates at which substrates are taken up and metabolites are secreted, and not to the compounds available. Synergistic combinations of drug targets can also be found by gradually decreasing the flux through a first potential target, identified for example through a classic gene essentiality approach, and then identifying those parts of metabolism that are forced to carry a relatively higher flux. Bhat and colleagues applied a similar strategy and studied the effect of varying inhibition by isoniazid, a front-line drug, on the metabolic state [12]. By gradually limiting the flux through the target of isoniazid, InhA, they found that the flux through various pathways was induced compared to the unperturbed state. These pathways could then potentially be analyzed to identify suitable targets for drugs administered in combination with isoniazid.
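The sketch below illustrates this idea on a small toy network (a variant of the earlier examples in which the bypass route is less efficient, so that the unperturbed optimum is unique): the flux bound of a hypothetical target reaction is tightened step by step, and the fluxes that rise relative to the unperturbed state point to possible escape routes and companion targets.

import numpy as np
from scipy.optimize import linprog

# R1 uptake of A, R2 A->B (target), R3 A->C, R4 2C->B (less efficient bypass), R5 B->biomass.
S = np.array([[1, -1, -1,  0,  0],    # A
              [0,  1,  0,  1, -1],    # B
              [0,  0,  1, -2,  0]])   # C
c = np.array([0, 0, 0, 0, -1.0])

def optimize(target_cap):
    bounds = [(0, 10), (0, target_cap), (0, None), (0, None), (0, None)]
    return linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs").x

reference = optimize(None)                        # unperturbed state
for cap in [8, 4, 1, 0]:                          # increasing inhibition of R2
    v = optimize(cap)
    rerouted = [f"R{i + 1}" for i in np.where(v > reference + 1e-6)[0]]
    print("cap", cap, "-> biomass", round(v[-1], 1), "| fluxes increased:", rerouted)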
These examples show the potential of using CB models to systematically probe the metabolic space of Mtb, generate novel insights and pin-point possible targets for interventions, with drugs or otherwise.
Combinatorial models and host drug targets
The integrated human alveolar macrophage-Mtb model iAB-AMØ-1410-Mt661 combines the Mtb metabolic model iNJ661 and the first reconstruction of human metabolism, Recon 1. Recently, the human model was updated to the consensus reconstruction, Recon 2 [79], which in turn can be combined with sMtb to create an updated macrophage-Mtb model. It is crucial for such a model to contain an accurate description of the phagosomal environment and its contents, as this provides the framework for the host-pathogen interaction and can have a large impact on the predictions of the metabolic state for both organisms. Although drug target discovery is generally focused on the pathogen, there are also opportunities to look at the host metabolism for drug targets. An example of a host-targeted drug is thioridazine, which is postulated to inhibit efflux of potassium and calcium from the phagolysosome required for its acidification [80]. The phagosomal environment steers the pathogen's metabolism; thus, drugs that primarily target the host and alter this environment will result in metabolic changes in Mtb as well. This could result in a state that renders the bacteria more susceptible to subsequent anti-TB drugs. A combined model could provide additional host drug targets; however, a thorough understanding of the functioning and composition of the phagosome is required. An experimentally validated and accurate macrophage-Mtb model has much potential for drug target discovery, especially for the identification of synergistic drug targets in the host, in Mtb itself, or in both.
Conclusions
The quality and predictive power of genome-scale reconstructions of the metabolism and transport of Mtb are gradually increasing. Our current model, sMtb, considerably outperforms previously published models in in vitro metabolic state predictions (Table 4, Figs. 3 and 4) and specific growth rate predictions (Table 3), as well as in in vitro gene essentiality predictions (Table 2) and drug-phenotype predictions (Table 5). However, there is still ample room for improvement. The predictions of flux through the pentose phosphate pathway can be improved for all models, while flux through the glyoxylate shunt is still best predicted by iNJ661. Better metabolic state predictions can be obtained through an improved network topology, by improving the determination of the biomass composition under different conditions, and by defining the objective function more accurately, as Schuetz and colleagues did [16] for a small E. coli model. Different combinations of the growth-related ATP coefficient and the non-growth-associated maintenance also have an impact on the metabolic state predictions, but these are hard to measure and their values can vary even for well-known organisms [81,82]. Nevertheless, they can be valuable parameters for fitting CB metabolic models to 13C data, thereby improving their predictive power.
A CB metabolic model with sufficient in vitro predictive power forms the foundation for reliable in vivo metabolic state predictions. Nevertheless, the in vivo metabolic state of Mtb is arguably not in steady state, and relatively little is known about the 'objective' of Mtb in the host. Efforts on both the experimental and the modeling side of Mtb metabolism continue to shed light on its in vivo metabolic state(s) and pave the way for the discovery of new (synergistic) drug targets and possible new intervention strategies. The long-term vision is that such a metabolic model will be one of the modules of a larger multi-scale modeling framework that connects a variety of models at different scales, each describing a particular subset of the behavior of Mtb in infection settings. This will thus ultimately contribute to the grander vision of a model-based 'Virtual Patient', with enormous potential for Health and Medicine. We also thank Brett G. Olivier (VU Amsterdam) for his help in converting sMtb into SBML format. This work has been supported by Framework Program 7 of the European Research Council, SysteMTb Collaborative Project (HEALTH-2009-2.1.1-1-241587) and by the Netherlands Consortium for Systems Biology (NCSB), which is part of the Netherlands Genomics Initiative/Netherlands Organization for Scientific Research. | 2018-04-03T00:48:02.372Z | 2014-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "0166710902093f04ea1ffc009ee3dc82d6d491bd",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.1016/j.smim.2014.09.013",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0166710902093f04ea1ffc009ee3dc82d6d491bd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
97054191 | pes2o/s2orc | v3-fos-license | Healing of Double-Oxide Film Defects in Commercial Purity Aluminum Melt
The possibility of the formation of bonding between the two layers of a double-oxide film defect when held in a commercial purity liquid Al alloy was investigated. The defect was modeled experimentally by maintaining two aluminum oxide layers in contact with one another in a commercial purity Al melt at 1023 K (750 °C) for times ranging from 7 minutes to 48 hours. Any changes in the composition and morphology of these layers were studied by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). The results showed that the oxide layers started to bond to one another after approximately 5 hours, and the extent of the bonding increased gradually with holding time. The bonding is suggested to form because of the transformation of γ- to α-Al2O3. A complete bond formed between the layers only when the oxygen and nitrogen trapped between the two layers were consumed, after approximately 13 hours. The results also confirmed that the nitrogen within the atmosphere of an oxide film defect reacts with the surrounding Al melt to form AlN at the interface of the defect and the melt.
I. INTRODUCTION
CAMPBELL [1] described the concept of an entrained double-oxide film and its deleterious effects on the properties of an aluminum casting. Each time the surface of the metal folds on itself, the surface oxide film becomes entrained in the bulk liquid. This occurs as a doubled-over oxide film in which the internal surfaces are not bonded but have a layer of gas from the local atmosphere (presumably composed predominantly of air) trapped between them. Consequently, it leads to a crack in the solidified casting. Therefore, the defect necessarily resembles and acts as a crack that not only deteriorates the mechanical properties of the solidified casting but also could act as an initiation site for the formation of other defects (e.g., hydrogen pores [2][3][4][5] and Fe-rich phases [6] ) before the solidification.
The first oxide to form on the liquid of commercial purity Al alloy is reported to be an amorphous alumina layer. [7,8] The two amorphous layers of a newly formed and submerged double-oxide film transform to crystalline γ-Al2O3 and then to α-Al2O3 crystals after incubation times of approximately 5-10 minutes [9] and 5 hours [10] (at 1023 K [750 °C]), respectively. The transformation to α-Al2O3 is accompanied by a 24 pct decrease in volume of the oxide, and the tensile stresses induced by this volume change could fracture the oxide layers. [10,11] Nyahumwa et al. [12,13] suggested that a double-oxide film defect could consume its internal atmosphere of oxygen and nitrogen after the incubation time associated with the transformation of γ- to α-Al2O3, when the fracture of the oxide layers brings the internal atmosphere of the defect and the melt into contact. The consumption of the internal atmosphere has been verified recently by Raiszadeh and Griffiths, [14] who monitored the change with time of the volume of a trapped air bubble in different Al alloy melts using real-time X-ray radiography. Their results showed that, first, the oxygen of the trapped air bubble reacted to form Al2O3, and second, the nitrogen reacted to form AlN. These reaction processes were continuous, with no incubation time required. They suggested that the cracks that formed on the oxide layer around the air bubble during its movement in the liquid metal provided the necessary paths for the contact of the internal atmosphere of the bubble and the surrounding melt.
It was speculated [12] that after the consumption of the oxygen and nitrogen, the sides of the film defect would be forced into contact, at least at some points, and the films might bond together because of the changes that might occur in the nature of the oxide layers with time. The defect might then be deactivated partially, and its deleterious effect as a crack might be reduced.
The first evidence for this hypothesis was presented by Nyahumwa et al., [13] who studied the effect of hot isostatic pressing (HIP) on the fatigue life of an A356 alloy. They found that subjecting the turbulently filled castings to HIP treatment at a temperature close to the eutectic temperature of the alloy caused the cracks and pores in the network of oxide films to collapse and their surfaces to bond together. They attributed this bonding to the transformation of Al2O3 to MgAl2O4, which involves a volume change and atomic rearrangement of the crystal structure.
More direct evidence for the possibility of the formation of bonding between the two layers of a double-oxide film defect was presented recently by Aryafar and Raiszadeh. [15] The authors modeled the defect experimentally by maintaining two aluminum oxide layers in contact with one another in an A356 liquid alloy (containing 0.3 wt pct Mg) at 1023 K (750 °C) for times of 7 minutes to 48 hours. Their results demonstrated that the two layers of a double-oxide film defect, when held in the A356 liquid alloy, might bond to each other by two different mechanisms: first, during the transformation of Al2O3 to spinel (MgAl2O4) after relatively short holding times of a few minutes, which would cause the layers to bond at several points, and second, during the gradual transformation of spinel to MgO after longer holding periods of 13 minutes to a few hours, which would cause strong bonding between the layers. Their findings also indicated that bonding could take place essentially only after the oxygen and nitrogen of the atmosphere within the defect were consumed.
Another criterion for the bonding of the two layers of a double-oxide film defect when held in a liquid Al alloy was found by Najafzadeh-Bakhtiarani and Raiszadeh, [16] who adopted the technique used by Aryafar et al. [17] to study the possibility of healing of an oxide film defect in a liquid Al-4.5 wt pct Mg alloy. Their results showed that in contrast to Al-0.3 wt pct Mg, no bonding took place between the two oxide layers when held in the Al-4.5 wt pct Mg melt, even after a holding time of 16 hours. They realized that the oxide layers that formed in the liquid metal were MgO layers, and no transformation occurred in the surface of the layers during the holding time. They suggested that the bonding did not take place between the two oxide layers because of the lack of a transformation, which involves the rearranging of the atoms, at the surface of the MgO oxide layers.
Commercial purity Al alloy does not contain a significant amount of Mg, and therefore, the bonding behavior suggested for Al-4.5 wt pct Mg and A356 alloys does not apply to it. In this work, the same experimental method was used to investigate the possibility of bonding between the two oxide layers of an oxide film defect in commercial purity Al alloy.
II. EXPERIMENTAL PROCEDURE
A commercial purity Al melt, with the composition shown in Table I, was prepared in a resistance-heated furnace. It was subsequently poured, in the shape of bars, into silica sand molds with 5 pct sodium silicate and CO2 gas as the binder. The bars were machined to dimensions of 100 mm in length and 19 mm in diameter. Two bars were then placed in a seamless extruded steel tube (made specifically for gas industries), with dimensions of 210 mm in length and 20 mm in internal diameter (Figure 1). The bases of the bars that were in contact with one another in the steel tube were polished to 9 μm before the bars were inserted in the tube, so that the naturally formed oxide layers resembled the two layers of a newly formed double-oxide film defect.
The steel tube was then transferred to a cylindrical electric furnace with a sliding door at the top. The temperature of the furnace was set to 1023 K (750 °C) prior to the start of the experiment. The temperature of the Al bars increased at an average rate of 3.2 °C s−1 (measured in a separate experiment using a K-type thermocouple inserted at the center of the steel tube), and the bars finally melted in the tube in 480 seconds. The only possible leak path from the trapped atmosphere between the two oxide layers to the ambient atmosphere was through the gap between the oxide layer around the Al bars and the wall of the steel tube. To eliminate this leak path, the oxide layer around the top of the upper Al bar was removed with a sharp tool beneath the surface of the melt, so that the melt came into direct contact with the steel tube. The Al bars were held in the liquid state for varying lengths of time, between 8 minutes and 48 hours, before the steel tube was raised in the furnace and held at the upper part of the heating chamber to let the liquid metal inside the tube solidify at a relatively slow rate (in approximately 40 seconds). The slow solidification of the metal was essential to prevent any thermal cracks from forming on the two oxide layers inside the melt. After solidification, the steel tube was cut and the Al bars were removed from it. In some experiments, the bonding that formed between the two oxide layers during the experiment joined the two bars to one another. In this case, the two oxide layers were separated by pulling the two Al bars apart, using a Zwick 1484 tensile testing machine (Zwick/Roell, Ulm, Germany) at a strain rate of 1 mm min−1. The surfaces of these two oxide layers were then examined using optical microscopy, a Camscan scanning electron microscope (SEM; CamScan, Cambridgeshire, UK) fitted with an Oxford Inca EDX (Oxford Instruments, Oxon, UK) for microanalysis, and a Philips Xpert X-ray diffraction device (Philips Analytical, Almelo, The Netherlands).
Each experiment was repeated at least three times to confirm the repeatability of the results. More details of the experimental procedure can be found elsewhere. [15]
III. RESULTS
Figure 2 illustrates a photograph of the two oxide layers that were held in the liquid for 8 minutes. In addition to the detachment between the oxide layer and the edge of the test bar (which probably occurred because of a slight movement of the mould during its removal from the furnace), the two oxide layers were attached at a few points (two of which are indicated by arrows).
The SEM micrograph obtained from this layer and a higher magnification of this micrograph are shown in Figures 3 and 4, respectively. These micrographs exhibited some discontinuities in the oxide layer, which probably formed because of the thermal stresses induced in the oxide layer during the solidification of the specimen. The EDX spectrum obtained from the point denoted P1 on Figure 4 (shown in Figure 5) revealed that the material inside the crack contained about 3.2 wt pct O. The SEM micrograph shown in Figure 6 was also obtained from the surface of the oxide layer that was held in the liquid for 7 minutes. It appears that the liquid metal had exuded through the oxide layer at one point.
When held in the liquid metal for 1 hour, the appearance of the oxide layers did not change significantly compared with those held for 8 minutes (see Figure 7).
The two oxide layers were attached at a few points, but the bonding was not strong and the two bars detached from one another easily when they were removed from the steel tube. The SEM micrograph highlighting one of these points is presented in Figure 8. The tip of the raised feature in this figure (denoted P1) was flat, as if pressed against an object. The concentration of oxygen at the tip (point P1), determined by EDX to be 8.4 wt pct, was much lower than those of the hillside (point P2) and the background (point P3) (49.4 and 21.6 wt pct, respectively). These observations imply that the Al melt exuded through a discontinuity that formed on the oxide layer, contacted the opposite layer, but could not wet and bond to it. Figure 9 shows the photograph of the oxide layers that were held in the liquid metal for 5 hours. The layers became darker in color, which implied their growth in thickness with time. Such a change in the color of the oxide films when held in the liquid metal was also observed in the previous works of this research team on Al-0.3 wt pct Mg [17] and Al-4.5 wt pct Mg [16] alloys. This photograph shows that the number of points bonded to one another increased considerably compared with the layers that were held in the liquid metal for 1 hour.
The SEM micrographs obtained from these oxide layers indicated areas in which the oxide layer seemed to be peeled off from the oxide surface. Two examples of such areas are shown in Figures 10 and 11. The area denoted B on Figure 10 also seems to be a part of the opposite oxide layer that was bonded to the oxide surface during the holding and was then peeled off from the opposite oxide layer when the two bars detached. The strength of the bonding that formed between the oxide layers was very low, and the two bars detached from one another easily during the removal of the bars from the steel tube. The EDX spectrum obtained from the dendrites inside the cracks that formed on the oxide layer (point P1 on Figure 12), which is shown in Figure 13, revealed that no oxygen was left in the atmosphere trapped between the two oxide layers at this holding time.
The photograph obtained from the oxide layers that were held in the liquid metal for 16 hours (shown in Figure 14) revealed vast white areas on one side, which matched with vast darker areas on the other side. An SEM micrograph obtained from one of these white areas is shown in Figure 15. The EDX spectra obtained from points P1 and P2 on this figure, which are presented in Figure 16, revealed the microstructure in point P1 to be AlN, which lay on a matrix of Al dendrites (point P2). Another SEM micrograph showing the AlN phase on the Al dendrites is presented in Figure 17. The matching darker areas on the opposite oxide layer were found to be alumina by SEM and EDX studies (not presented here, for brevity). The EDX studies also revealed that the concentration of Fe in the melt increased from 0.07 wt pct to approximately 3 wt pct during the 16 hours of holding the liquid metal in contact with the steel tube. However, this increase in the Fe content of the melt should not have any significant influence on the oxidation behavior of the melt. Figure 18 shows the photograph of the oxide layers that were held in the liquid metal for 48 hours. This photograph also shows white and dark matching areas on the two oxide layers. The white areas are brighter and more distinct than those observed in the photograph of the oxide layers that were held in the liquid for 16 hours. The SEM micrograph obtained from these layers (presented in Figure 19) shows an oxide layer over an AlN layer. The identity of these layers was confirmed by EDX analysis (shown in Figure 20). The oxide layer seems to be peeled off from the AlN layer in some parts. The microstructure observed in the SEM micrographs of the oxide layers held for 16 hours (i.e., fragments of AlN over a matrix of Al dendrites, see Figure 15) was also observed in this specimen (see Figure 21). Figure 22 illustrates another SEM micrograph taken from the oxide layer that was held for 48 hours. This micrograph shows Al dendrites that remained on the AlN layer when the two Al bars were pulled apart by the tensile test machine.
None of the bondings that formed between the two oxide layers in any of the specimens with different holding times ranging from 8 minutes to 48 hours showed any significant strength, and the force needed to separate the Al bars after the solidification was negligible.
IV. DISCUSSION
The two previous works of this research team [16,17] suggested two essential criteria for the formation of a complete bond between the two oxide layers when held in a liquid Al alloy melt. The first criterion is the (almost) complete consumption of oxygen and nitrogen of the trapped atmosphere. If satisfied, then the two layers come into contact with one another. In this case, if the second criterion, i.e., the occurrence of a transformation at the surface of the two layers that involves the rearrangement of the atoms, is satisfied, then the two layers may bond to one another.
The EDX spectra obtained from the dendrites inside the cracks that formed on the oxide layers (Figures 5 and 13) revealed that the oxygen of the trapped atmosphere between the two oxide layers was consumed in approximately 5 hours. Comparing this time with the consumption rates obtained by Raiszadeh and Griffiths [14,18] suggests that the nitrogen within the trapped atmosphere would be consumed in approximately 13 hours. The SEM micrographs obtained from the oxide layers that were held in the liquid metal for 16 hours or longer confirmed the formation of AlN from the reaction of the nitrogen within the trapped atmosphere and the surrounding liquid aluminum. This is the first direct evidence confirming the hypothesis proposed by Nyahumwa et al., [12,13] who suggested that a double-oxide film defect could consume its internal atmosphere of oxygen and nitrogen after the incubation time associated with the transformation of γ- to α-Al2O3. However, Nyahumwa et al. suggested that the AlN layer forms inside the double-oxide film defect, but the SEM micrographs obtained in this work (see Figures 15, 19, 21, and 22) revealed that the AlN layer formed at the interface of the oxide layers and the melt (i.e., outside the defect).
The exact mechanism of the exudation of liquid metal from some points at the holding times of less than 5 hours (see Figures 6 and 8) is not clear. Such exudation is only possible if the oxide layer either cracks or spalls because of stresses generated in the oxide layer. The presence of stresses in oxide layers and their effects on cracking, spalling, and decohesion of oxide layers have been recognized for some time. [19] Krishnamurthy and Srolovitz, [20] who have presented a continuum model for the growth of an oxide film, found that significant stress gradients can develop across the oxide layer during its growth and that the oxide/substrate interface can experience relatively high compressive stress. Drouzy and Mascre [21] reviewed the principal features of oxidation phenomena and the data concerning several metals. These researchers stated that a variety of parameters (e.g., interfacial tension, the mechanical effect of vibrations, falling dust, gaseous diffusion, and mechanical properties of the oxide layer) can produce discontinuities or defects in the oxide layer, particularly once it has attained a certain thickness. The metal can sometimes penetrate into such discontinuities by a capillary effect. Figure 8, however, clearly indicated that the liquid metal that exuded from one of these discontinuities did not bond to the opposite oxide layer. This was because of the inability of the exuded metal to wet the opposite layer in the presence of oxygen in the trapped atmosphere. It has been reported in the literature, [22] using a sessile drop technique, that aluminum does not wet alumina below 1173 K (900 °C). Subsequent studies (for example, References 11 and 23) revealed that in the sessile drop experiments, it is the presence of a thin surface alumina layer that accounts for the nonwetting of alumina by aluminum, and once this oxide layer is eliminated, the Al melt can wet the alumina.
The photograph obtained from the oxide layer that was held in the liquid metal for 5 hours (Figure 9) and the related SEM micrographs (Figures 10 and 11) revealed that after 5 hours of holding, several discrete parts of the oxide layer bonded to one another. This bonding caused the oxide layer on one side to peel off and remain on the opposite oxide layer (see Figure 10 and particularly the area denoted B on this figure) during the separation of the two bars by the tensile machine. These discrete bonds implied that the process by which the two oxide layers could bond to one another was activated at this holding time. Comparing the holding time of this specimen (i.e., 5 hours) with the incubation times necessary for the transformations of the different alumina phases, the transformation that began to take place at this holding time appears to be that of γ- to α-Al2O3. This transformation occurs at the surface of the oxide layers and involves the rearrangement of atoms. [11] Hence, it satisfies one of the two criteria [16] for the formation of bonding between the two layers.
The other criterion is the almost complete consumption of the trapped atmosphere within the defect. Despite the complete consumption of the oxygen, nitrogen was still present in the trapped atmosphere at the holding time of 5 hours. The presence of the nitrogen prevented the two oxide layers from being in complete contact. However, the distance between the two oxide layers is not suggested to be greater than a few tens of nanometers at most. Using Stokes law, [24] Raiszadeh [25] and Raiszadeh and Griffiths [26] predicted that the thickness of the trapped atmosphere within an oxide film defect (the oxide layers of which are 10 nm thick) could not be greater than 10 nm; otherwise, its buoyancy causes the defect to float out of the metal (or attach to the upper surface of the gating system) in a short time. Campbell [1] speculated that this thickness is only a few nanometers. Therefore, it is likely that despite the presence of nitrogen, the unevenness of the oxide surfaces causes the two layers to be in contact in a few small, discrete areas. Once in contact, both criteria for the bonding of the two layers are satisfied, and the two layers bond to one another in these areas.
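As a rough numerical illustration of this buoyancy argument, the sketch below applies Stokes' law to a defect treated as an equivalent sphere; the melt density and viscosity are typical literature values for liquid aluminum near 1023 K (750 °C), and the defect radius and melt depth are assumed figures, so the output is indicative only.

```python
# Rough Stokes'-law estimate of how fast a gas-containing double-oxide film defect
# would rise in liquid aluminum if its trapped atmosphere were thick. The defect is
# treated as an equivalent sphere; property values are typical figures for Al at
# ~1023 K (750 °C) and the geometry is assumed, purely for illustration.
g = 9.81           # m/s^2
rho_melt = 2375.0  # kg/m^3, approximate density of liquid Al at 750 °C
rho_gas = 1.0      # kg/m^3, trapped-air density (negligible relative to the melt)
mu_melt = 1.2e-3   # Pa.s, approximate dynamic viscosity of liquid Al

radius = 50e-6     # m, assumed equivalent spherical radius of the defect
melt_depth = 0.1   # m, assumed depth the defect would have to traverse

# Stokes' terminal rise velocity: v = 2 r^2 (rho_melt - rho_gas) g / (9 mu)
v = 2.0 * radius**2 * (rho_melt - rho_gas) * g / (9.0 * mu_melt)
time_to_surface = melt_depth / v

print(f"rise velocity ~ {v * 1e3:.2f} mm/s")
print(f"time to float through {melt_depth * 100:.0f} cm of melt ~ {time_to_surface:.0f} s")
```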
The nitrogen was not present in the trapped atmosphere at the holding time of 16 hours, and the two oxide layers were in almost complete contact with one another. The photograph obtained from the oxide layers that were held in the liquid for 16 hours (Figure 14) and the related SEM and EDX analysis (Figures 15-17) implied that the two oxide layers bonded over vast areas during this time. This implies that the transformation that began at the holding time of approximately 5 hours was gradual and continued to the holding time of 16 hours (and perhaps longer). The SEM micrographs also revealed that the strength of the bonding that formed between the two Al2O3 layers was greater than that between the AlN layer and the melt. Many micrographs revealed that the AlN layer detached from the oxide defect during the separation of the Al bars (Figure 23(a)). In this case, the micrographs either showed areas of AlN phase on the Al dendrites (such as Figures 15 and 21), or Al areas attached to the AlN layer (such as Figure 22). A few micrographs also revealed that the bonding between the two oxide layers was not complete over the entire oxide surface. However, in those areas in which the bonding formed (such as point P1 on Figure 19), the alumina was peeled off from the oxide layer during the separation of the bars, and the AlN structure underneath became visible in the micrographs. These two cases are illustrated schematically in Figures 23(b) and (c), respectively.
It is suggested [14] that the oxygen and nitrogen trapped in a real double-oxide film defect would be consumed in a short time because of the cracks that form on the oxide layers during its movement in the liquid metal, which would bring the trapped atmosphere and the surrounding liquid metal into contact. One study [18] estimated this consumption time to be in the range of a few seconds to 3 minutes at most, depending on the assumptions made about the dimensions of the defect.
However, hydrogen has been shown to diffuse into the trapped atmosphere of an oxide film defect and inflate it [3,14] if its concentration in the liquid is higher than the equilibrium amount associated with the ambient atmosphere. Therefore, the satisfaction of the "being in contact" criterion would depend on these two main phenomena: the consumption of gases trapped in the atmosphere of the defect and the diffusion of hydrogen into this atmosphere. However, if the concentration of dissolved hydrogen in the liquid metal is low, then the incubation time necessary for the transformation of γ- to α-Al2O3 could be expected to be one of the most important parameters that controls the rate of formation of bonding between the two oxide layers in a commercial purity Al alloy.
V. CONCLUSIONS
1. The results obtained in this work showed that the two oxide layers of a double-oxide film defect began to bond together when maintained in commercial purity Al melt for 5 hours. The comparison of this holding time with the incubation times reported in the literature for the transformation of the different allotropes of alumina suggested that the bonding formed between the layers because of the transformation of γ- to α-Al2O3. The extent of the bonding increased gradually with holding time, so that the oxide layers were almost completely bonded to one another after 48 hours.
2. The two criteria that this research team previously suggested as necessary for bonding to form between the two oxide layers when held in liquid Al alloys (i.e., the almost complete consumption of the trapped atmosphere, and the occurrence of a transformation involving rearrangement of the atoms at the surface of the oxide layers) were also observed to be valid for the commercial purity alloy.
3. The results obtained in this study also confirmed directly, for the first time, that the nitrogen within the atmosphere of the oxide film defect reacts with the surrounding Al melt to form AlN. In the current study, this AlN layer formed at the interface of the oxide defect and the melt.
4. The incubation time necessary for the transformation of γ- to α-Al2O3 is expected to be one of the most important parameters controlling the rate of formation of bonding between the two oxide layers in a commercial purity Al alloy, provided that the concentration of dissolved hydrogen in the melt is low. | 2019-04-06T00:43:45.347Z | 2011-02-01T00:00:00.000 | {
"year": 2011,
"sha1": "4fbcd6d5d17618efbb839a7463c5fe6a3deb9365",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11663-011-9480-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "13085e1e49b3e17f7cff5a70cc47e980712452c3",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
222832665 | pes2o/s2orc | v3-fos-license | The association of ABO blood group with indices of disease severity and multiorgan dysfunction in COVID-19
Key Points
• COVID-19 patients with blood group A or AB are at increased risk for requiring mechanical ventilation vs those with blood group O or B.
• COVID-19 patients with blood group A or AB appear to exhibit a greater disease severity than patients with blood group O or B.
Studies on severe acute respiratory syndrome coronavirus 1 (SARS-CoV-1) suggest a protective effect of anti-A antibodies against viral cell entry that may hold relevance for SARS-CoV-2 infection. Therefore, we aimed to determine whether ABO blood groups are associated with different severities of COVID-19. We conducted a multicenter retrospective analysis and nested prospective observational substudy of critically ill patients with COVID-19. We collected data pertaining to age, sex, comorbidities, dates of symptom onset, hospital admission, intensive care unit (ICU) admission, mechanical ventilation, continuous renal replacement therapy (CRRT), standard laboratory parameters, and serum inflammatory cytokines. National (N = 398 671; P = .38) and provincial (n = 62 246; P = .60) ABO blood group distributions did not differ from our cohort (n = 95). A higher proportion of COVID-19 patients with blood group A or AB required mechanical ventilation (P = .02) and CRRT (P = .004) and had a longer ICU stay (P = .03) compared with patients with blood group O or B. Blood group A or AB also had an increased probability of requiring mechanical ventilation and CRRT after adjusting for age, sex, and presence of ≥1 comorbidity.
Introduction
Preliminary reports suggest a link between ABO blood groups and susceptibility to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. [1][2][3] Specifically, individuals with blood group O have been reported to be less susceptible to SARS-CoV-2 infection. Similar associations were observed with SARS-CoV-1. 4 In vitro, the anti-A antibody, found in individuals with blood group O or B, appears to antagonize the interaction between SARS-CoV-1 and the receptor for angiotensin converting enzyme 2 (ACE2), which is expressed by host target cells. 5 Given that SARS-CoV-2 also binds to ACE2, 6,7 it is reasonable to consider that blood groups may also be determinants of susceptibility to SARS-CoV-2 infection. Because a large proportion of individuals infected with SARS-CoV-2 remain asymptomatic, and hospital admission data are limited to patients with symptoms, population screening would be necessary to accurately assess the relationship between blood group and susceptibility to SARS-CoV-2 infection. Therefore, early preliminary data may have better reflected the relationship between ABO group and SARS-CoV-2 infection severity, and the relationship between ABO blood group and COVID-19 severity has remained unresolved, despite more recent investigations. 8,9 Multiorgan tropism of SARS-CoV-2 has recently been reported. 10 This is consistent with multiple reports indicating that COVID-19 is a multisystem disease that includes renal 11 and hepatic 12 manifestations. If ABO blood groups play a role in determining disease severity, these differences would be expected to manifest within multiple organ systems 10 and hold relevance for multiple resource-intensive treatments, such as mechanical ventilation and continuous renal replacement therapy (CRRT). 13 Therefore, we conducted a retrospective analysis of a multicenter case series with a nested prospective substudy of inflammatory cytokines of critically ill COVID-19 patients. In addition, we acquired nationwide population ABO blood group distribution data. Our specific aims were to determine whether ABO blood group is associated with clinical indicators of COVID-19 severity (eg, mechanical ventilation, ventilator-free days, CRRT, length of intensive care unit [ICU] stay, and mortality) and to determine whether ABO blood group is associated with differences in serum biomarkers of organ dysfunction. Further, given the role of the host immune response as a potential determinant of COVID-19 severity, 11,14-21 we aimed to assess whether ABO blood type is related to the levels of serum inflammatory cytokines. We hypothesized that (1) . Fibrin D-dimer results were reported up to a maximum of 4000 mg/L. Laboratory results, including serum inflammatory cytokines, were reported as a baseline (day 1) or peak value within the first 3 days of ICU admission (see Table 1 for specific baseline/peak values). ABO blood type was determined with a standard group and screen (NEO Blood Bank Analyzer; Immucor, Norcross, GA).
Patients and management
All ICU patients with real-time reverse transcription polymerase chain reaction-confirmed SARS-CoV-2 infection 22 on nasopharyngeal or tracheal samples were included. British Columbia provincial management guidelines for critically ill COVID-19 patients are in accordance with the Surviving Sepsis Campaign. 23 Admission to the ICU and endotracheal intubation were at the discretion of the attending intensivist for COVID-19 patients who required mechanical ventilation once a noninvasively administered oxygen requirement >6 L/min was necessary to maintain a peripheral oximetry saturation >94%. The primary sedative of use across sites is IV propofol, with benzodiazepine or narcotics as second-line infusion agents. The decision to extubate was at the discretion of the attending intensivist with considerations of level of consciousness, fraction of inspired oxygen <40%, positive end expiratory pressure ≤8 cm H2O, pressure support ≤8 cm H2O, and minimal preextubation tracheal secretions or moderate to strong cough. For routine thromboprophylaxis, enoxaparin (30-40 mg) was administered subcutaneously every 12 hours. CRRT was initiated as medically indicated for acute kidney injury with refractory metabolic acidemia, hyperkalemia, uremic encephalopathy, or positive fluid balance. Clinical decisions for ICU discharge were made at the discretion of the attending intensivist.
National and provincial blood group distributions
The distributions of national and provincial ABO blood groups for Canada and British Columbia, respectively, were acquired from Canadian Blood Services. These data encompass all unique Canadian and British Columbian blood donors over the period from 1 May 2019 to 30 April 2020.
Inflammatory cytokine analysis
In a subcohort of patients, which was a consecutive sample of patients enrolled at Vancouver General Hospital, we prospectively collected serum samples for research purposes. These samples were analyzed for serum concentrations of interleukin-1β (IL-1β), IL-6, IL-10, and tumor necrosis factor-α (TNF-α) using the Quanterix Single Molecule Array (Simoa) HD-1 analytical platform. 24
Outcomes
Our primary outcome was the proportion of patients requiring mechanical ventilation. Our secondary outcome was the probability of requiring mechanical ventilation during hospital stay. Tertiary outcomes included other clinical indices of disease severity (ie, proportion of patients requiring CRRT, probability of requiring CRRT, ventilator-free days, ICU length of stay, probability of extubation, probability of ICU discharge, all-cause hospital mortality, and overall hospital survival), clinical laboratory serum biomarkers of multiorgan dysfunction (eg, AST, ALT, creatinine), and, within our subcohort, peak serum inflammatory cytokines. The primary outcome of mechanical ventilation was chosen because it is a clinically relevant and reproducible end point associated with disease severity and an event that would occur frequently enough to ensure reasonable power given a smaller sample size. Ventilator-free days were defined as the number of days alive and successfully weaned from mechanical ventilation within the first 28 days following ICU admission.
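As a small worked example of the ventilator-free-days definition above, the following sketch computes the outcome for individual patients; the variable names and example numbers are illustrative, not study data.

```python
# Worked example of the ventilator-free days (VFD) outcome defined above: the number
# of days within the first 28 days after ICU admission that a patient was alive and
# successfully weaned from mechanical ventilation. Names and values are illustrative.

def ventilator_free_days(days_on_ventilator: float, days_alive: float, window: int = 28) -> float:
    """VFD over the first `window` days; a patient never weaned within the window gets 0."""
    alive_in_window = min(days_alive, window)
    return max(0.0, alive_in_window - days_on_ventilator)

# A survivor of the full 28-day window ventilated for 7 days -> 21 VFD.
print(ventilator_free_days(days_on_ventilator=7, days_alive=28))   # 21.0
# A patient ventilated until death on day 10 -> 0 VFD.
print(ventilator_free_days(days_on_ventilator=10, days_alive=10))  # 0.0
```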
Statistical analyses
Descriptive statistics were used to summarize baseline demographics, clinical characteristics, and laboratory values. Continuous variables are presented as median and interquartile range (IQR), and categorical variables are presented as total number (proportions; %). Differences in the blood group distribution between Canadian data and British Columbia data and our cohort were assessed by a 2-tailed χ² goodness-of-fit test. To assess whether ABO blood group influences clinical indices of disease severity (see "Outcomes"), patients were split into 2 groups based on previous data demonstrating that the anti-A antibody may impact SARS-CoV-2 interaction with its cell entry receptor ACE2 5
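A minimal sketch of the 2-tailed χ² goodness-of-fit comparison described above, using SciPy; the observed counts and reference proportions below are placeholders, not the study data.

```python
# Sketch: chi-square goodness-of-fit test comparing the ABO distribution of an ICU
# cohort against a reference (e.g., national blood-donor) distribution.
# The counts and proportions below are placeholders, not the actual study data.
import numpy as np
from scipy.stats import chisquare

observed = np.array([44, 30, 13, 8])                 # cohort counts, order O, A, B, AB
reference_props = np.array([0.46, 0.36, 0.13, 0.05]) # reference donor proportions

# Scale expected counts to the cohort size so observed and expected totals match.
expected = reference_props / reference_props.sum() * observed.sum()

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, P = {p_value:.2f}")
```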
Results
A total of 125 critically ill COVID-19 patients were admitted to the ICU from 1 March 2020 to 28 April 2020. Of these 125 patients, 95 had ABO blood group data available from an ICU admission group and screen and were included in the analyses. Baseline characteristics and outcomes from 117/125 patients (87/95 with blood group data) have been summarized previously 25 ; however, this investigation addressed novel hypotheses that did not share any overlap with previous reporting.
Demographics and clinical laboratory results are presented in Table 1. There were no differences in age or sex between groups or overall comorbidities. ABO blood group data from all unique blood donors nationally (N = 398 671) and provincially (n = 62 246) from 1 May 2019 to 30 April 2020 were acquired from Canadian Blood Services. Our ICU cohort's blood group distribution was not different from the national blood group distribution (P = .38) or the provincial blood group distribution (P = .60; Table 2); this reflects the regional demographics from which our patient population is drawn. Indeed, ~85% of provincial patients with COVID-19 were admitted to 1 of the 6 hospitals included in this investigation. 25
Clinical indicators of disease severity
Of the 95 ICU patients included in the analyses, 57 were blood group O or B, and 38 were blood group A or AB. A greater proportion of A or AB patients (32; 84%) required mechanical ventilation compared with O or B patients (35; 61%; P = .02) and had a greater probability of requiring mechanical ventilation after adjusting for sex, age, comorbidity status (yes/no), and treating death as a competing risk (adjusted sHR, 1.76; 95% CI, 1.17-2.65; P = .007; Figure 1A; Table 3). A greater proportion of A or AB patients (12; 32%) required CRRT compared with O or B patients (5; 9%; P = .004) and had a greater probability of requiring CRRT after adjusting for age, sex, comorbidity status (yes/no), and treating death as a competing risk (adjusted sHR, 3.75; 95% CI, 1.28-10.9; P = .02; Figure 1B; Table 3). Median ICU length of stay was longer in A or AB patients (13.5 days; IQR, 7-26) than in O or B patients (9 days; IQR, 5-18; P = .03), but there were no differences in the probability of ICU discharge after adjustment for age, sex, comorbidity status (yes/no), and death as a competing risk (adjusted sHR, 0.63; 95% CI, 0.39-1.03; P = .06; Figure 1C; Table 3). The median number of ventilator-free days was not different between A or AB patients (7 days; IQR, 0-19) and O or B patients (13 days; IQR, 0-21; P = .50). Further, there were no differences in the probability of extubation between groups after adjustment for age, sex, comorbidity status (yes/no), and death as a competing risk (adjusted sHR, 0.92; 95% CI, 0.52-1.62; P = .78; Table 3). Overall hospital length of stay did not differ between groups (P = .13; Table 3). A total of 67 (71%) patients had been discharged from the ICU at the time of study completion, with no statistically significant differences between the 2 groups (43 [75%] of O or B vs 24 [63%] of A or AB; P = .20; Table 3). For discharged patients, ICU length of stay was longer in A or AB patients (P = .03), whereas there was no difference in hospital length of stay (P = .08; Table 3). There was no difference in the proportion of O or B (8; 14%) and A or AB (9; 24%) patients who died during their hospital stay (P = .23). In Table 3, sHRs are adjusted for age, sex, and the presence of ≥1 comorbidity (binary, yes/no), with death as a competing risk; an sHR >1 indicates an increased, and <1 a decreased, probability of an event occurring during the study period (*statistically significant P value for a difference between groups).
Clinical laboratory results
Clinical laboratory results for both groups are shown in Table 1. For all 95 ICU patients, ICU admission white blood cell count (P = .02; Figure 2A), highest recorded value for fibrin D-dimer (P = .05; Figure 2D), AST (P = .02; Figure 2E), ALT (P = .01; Figure 2F), and highest recorded value for serum creatinine (P = .03; Figure 2H) were lower in O or B patients than in A or AB patients.
Discussion
We present a comprehensive assessment of the effect of ABO blood group on clinical indices of COVID-19 severity in critically ill patients. Although the distribution of ABO blood groups in our sample of ICU patients did not differ from the national and provincial ABO blood group distribution, a greater proportion of blood group A or AB patients required mechanical ventilation and CRRT compared with blood group O or B patients. Similarly, biomarkers of renal and hepatic dysfunction were higher in blood group A or AB patients. In our subcohort there were no differences in serum inflammatory cytokines. Collectively, our data indicate that critically ill COVID-19 patients with blood group A or AB are associated with an increased risk for requiring mechanical ventilation, CRRT, and prolonged ICU length of stay compared with patients with blood groups O or B.
As of 28 July 2020, >16 000 000 people worldwide have been infected with SARS-CoV-2, resulting in >650 000 deaths. 26 Yet, apart from age and sex, 13,27 and more recently blood group, 9 there is a striking lack of knowledge of the clinical or demographic risk factors that influence susceptibility to SARS-CoV-2 infection and the severity of subsequent COVID-19. Recently, Li and colleagues observed that blood groups A and O are present at higher and lower proportions, respectively, in hospital-admitted patients with COVID-19 (n = 265) compared with the general population. 3 This finding is consistent with SARS-CoV-1 data. 4 The similarities between the SARS-CoV-1 and SARS-CoV-2 receptor binding domains, 6 coupled with the observation that anti-A antibody inhibits the interaction between SARS-CoV-1 and the ACE2 receptor, 5 suggest that ABO blood groups could influence SARS-CoV-2 infection and resultant COVID-19 severity. 1-3 However, the relationship between ABO blood group and COVID-19 severity may be more complicated and involve such factors as the specific anti-A titers, 28 the immunoglobulin isotype of anti-A antibodies, 29 and ABO group differences in von Willebrand factor. 30 Importantly, the proportion of asymptomatic infection ranges from 10% to 40%, 31-33 not including patients with mild symptoms who do not require hospitalization. Further, variability in international regulatory guidance as to when patients should self-isolate or seek medical attention will influence hospital admission as an outcome variable to determine SARS-CoV-2 susceptibility. Our data in ICU-admitted patients did not reveal a difference in ABO blood group distribution compared with the normative national or provincial blood group distribution (Table 2). For the reasons explained above, we are unable to conclude that there is no relationship between ABO blood group and susceptibility to SARS-CoV-2 infection; rather, our findings are an incentive for further research to assess this potential relationship using study designs (eg, population screening 32 ) that are more inclusive of the full range of disease severity.
Our study demonstrates a link between ABO blood groups and mechanical ventilation, as well as the requirement for CRRT and ICU length of stay (Figure 1), reproducible outcome indices of the severity of COVID-19 across countries and jurisdictions. These factors are also directly relevant for risk stratification. Further, we demonstrated higher levels of AST, ALT, and peak serum creatinine in patients with blood group A or AB, which may indicate a multiorgan protective effect conferred by the anti-A antibody or other factors related to blood type (eg, von Willebrand factor); these data align with the multiorgan involvement of SARS-CoV-2 infection. 10 Interestingly, we found a significant, albeit modest, elevation in fibrin D-dimers in patients with blood group A or AB compared with O or B. Because pulmonary vasculopathy and coagulopathy are increasingly recognized clinical pathophysiologic sequelae of COVID-19, this may hold relevance to manifestations of respiratory failure and morbidity, 34 independent of the influence of the anti-A antibody on viral cell entry. Central to this observation, patients with blood group O have reduced levels of factor VIII and von Willebrand factor, 35 which may account for an underlying protective effect against the development of vasculopathy within the pulmonary vasculature 36 and other vital organ vascular beds. 30 Indeed, ABO is a "histo-blood group," and the associated antigens are present on many cell types, including endothelial cells and platelets, which relates to the overall function of these cell types. 37 Although speculative, this mechanism requires further detailed prospective evaluation in COVID-19 patients.
Our study's findings are congruent with a recent study of 1980 patients that demonstrated a link between ABO blood type and disease severity, 9 although retrospective data have emerged to suggest otherwise. 8 However, when comparing our study to that of Latz and colleagues, 8 it is important to consider differences in mortality that may confound comparison between our cohorts. Indeed, our jurisdictional mortality rate for ICU-admitted patients was ~15% (18% in the 95 patients assessed herein), 25 whereas Latz et al had 123 patients admitted to the ICU and reported 89 deaths in the overall study cohort (if all deaths are attributable to ICU patients, this is a mortality rate of 72%). 8 In light of these vastly different mortality rates, there may be valid concerns regarding the comparability of previous cohorts and the similarity of the underlying populations from which they came. It is unknown whether these differences are due to disease-specific factors, system-level public health factors, or a combination of both.
Although the host immune response to SARS-CoV-2 infection appears to be related to COVID-19 severity, 11,14-21 we did not demonstrate group differences in serum inflammatory cytokines within our subcohort. Biologically, this may reflect a lack of blood group effect, because the juncture at which blood group and disease severity intersect is putatively at the level of cell entry. 5 Whether immunological factors subsequent to SARS-CoV-2 infection differ by blood group remains unknown. Conversely, our negative finding may be a consequence of inadequate statistical power. This lack of statistical difference notwithstanding, the sixfold difference in IL-6 group medians between the A or AB and the O or B blood groups suggests that it is too early to disregard the host immune response as a factor that may differentially impact COVID-19 severity between ABO blood groups.
Our study has several limitations that warrant consideration. First, it is a retrospective analysis of observational data, precluding the ability to infer causality. Second, ~25% of ICU-admitted patients did not have ABO blood group data from an admission group and screen. An important and unaddressed confounder in our study is the relationship between ethnic ancestry and outcomes in patients with COVID-19. 38 Our analysis included 85% of all COVID-19 patients admitted to the ICU in our province during the study period, lending strength to our comparisons of blood type distributions with the larger provincial population (sample population). However, we acknowledge that our disease-severity comparisons made between A or AB and O or B patients in the ICU may be confounded by ethnic ancestry. Our subcohort consisted of a small sample; therefore, the resultant lack of statistical power precludes strong inferences related to serum inflammatory cytokines. These data should not be taken as conclusive; rather, they should be considered in future hypothesis generation. It is also important to consider that the titer of anti-A antibodies may affect COVID-19 severity. Therefore, another limitation of our study is that anti-A levels were not analyzed but were inferred from patient blood group data. Future research will aim to address these limitations and unresolved questions.
In conclusion, we demonstrate that critically ill COVID-19 patients with blood group A or AB are associated with an increased risk for requiring mechanical ventilation, CRRT, and prolonged ICU length of stay compared with patients with blood groups O or B. Further research is required to delineate the biological mechanisms underpinning these findings. | 2020-10-17T13:06:24.480Z | 2020-10-14T00:00:00.000 | {
"year": 2020,
"sha1": "f86d94dfb0e1d07110c53e81bcddaeb098f8610d",
"oa_license": null,
"oa_url": "https://doi.org/10.1182/bloodadvances.2020002623",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76bc7cadec5967081d62ff826aa64bf28b1c2294",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235212727 | pes2o/s2orc | v3-fos-license | Dual Modal Imaging-Guided Drug Delivery System for Combined Chemo-Photothermal Melanoma Therapy
Purpose Malignant melanoma is one of the most devastating types of cancer, with rapid relapse and a low survival rate. Novel strategies for melanoma treatment are needed to enhance therapeutic efficiency for this disease. In this study, we fabricated a multifunctional drug delivery system that incorporates dacarbazine (DTIC) and indocyanine green (ICG) into manganese-doped mesoporous silica nanoparticles (MSN(Mn)), coupled with magnetic resonance imaging (MRI) and photothermal imaging (PI), to achieve the superior antitumor effect of combined chemo-photothermal therapy. Materials and Methods MSN(Mn) were characterized in terms of size and structural properties, and the drug loading and release efficiency of MSN(Mn)-ICG/DTIC was analyzed by UV spectra. The photothermal imaging and MR imaging performance of MSN(Mn)-ICG/DTIC were assessed with a thermal imaging system and a 3.0 T MRI scanner, respectively. The combined chemo-photothermal therapy was then verified in vitro and in vivo by morphological, ultrasonic, and pathological evaluation. Results The as-synthesized MSN(Mn) were mesoporous spherical nanoparticles of 125.57±5.96 nm. MSN(Mn)-ICG/DTIC exhibited drug loading and release capability: the loading ratios of ICG and DTIC reached 34.25±2.20% and 50.00±3.24%, respectively, and 32.68±2.10% of the DTIC was released. The manganese doping content reached up to 65.09±2.55 wt%, providing excellent in vivo imaging capability, with a corresponding relaxation efficiency of 14.33 mM−1s−1. Outstanding photothermal heating ability and stability highlighted the potential biomedical applicability of MSN(Mn)-ICG/DTIC for killing cancer cells. Experiments with A375 melanoma cells and tumor-bearing mice demonstrated that the MSN(Mn)-ICG/DTIC composite has excellent biocompatibility and that our combined therapy platform delivered a superior antitumor effect compared with standalone treatments in vivo and in vitro. Conclusion Our findings demonstrate that the MSN(Mn)-ICG/DTIC composite could serve as a multifunctional platform to achieve highly effective chemo-photothermal combined therapy for melanoma treatment.
Introduction
Melanomas are among the most life-threatening skin malignancies. Concretely, although they account for only 4% of all types of skin cancer, 1 the odds of full recovery for melanoma patients are extremely low, and melanoma therefore accounts for 80% of all skin cancer mortalities. It is crucial to improve the success of patient treatment. Although numerous studies have explored alternative treatments for melanoma, few have achieved satisfactory effects in improving the overall survival of patients. 2 Therefore, novel and more effective therapeutic platforms are essential for melanoma treatment.
Chemotherapy is the earliest, yet still important, treatment option. Dacarbazine (DTIC) is the only chemotherapeutic agent for melanomas that has been approved (in 1976) by the US Food and Drug Administration (FDA). 3 However, its low water solubility results in a low and partial absorption rate, which limits its effectiveness in melanoma therapy. 4 Moreover, DTIC is a non-specific drug and therefore also exhibits toxicity to healthy cells. The balance between efficiency and safety in chemotherapeutic agents is particularly challenging to achieve. Moreover, the lack of imaging guidance for therapeutic procedures can reduce efficiency. To address these drawbacks, a nanoparticle-based drug delivery system with multitherapeutic and imaging properties can be used as an effective approach. 5 Various drug delivery nano-systems such as liposomes, 6 copper particles, 7 graphene oxide nanoparticles, 8 and silica nanoparticles have been actively developed to improve the in vivo stability and pharmacokinetics of therapeutic compounds. Among these, mesoporous silica nanoparticles (MSNs) possess unique properties, such as controllable particle size and volume, outstanding loading capability, easy functionalization, and good biocompatibility, which make them ideal nanocarriers for drug delivery and imaging. 9,10 Photothermal therapy (PTT) causes irreversible damage to cancer cells via the heat generated from the near-infrared (NIR) absorption of PTT materials, and therefore many studies have recently focused on this promising therapeutic strategy. 11,12 Additionally, photothermal heating is known to improve chemotherapy efficacy by enhancing the cellular uptake of chemotherapeutics and triggering intracellular drug release. 13,14 Photodynamic therapy (PDT) employs a NIR laser to excite a photosensitizer to generate reactive oxygen species (ROS) and kill tumor cells. The codelivery of chemotherapeutics and heat for synergistic melanoma treatment has great potential. Indocyanine green (ICG) is an ideal near-infrared light absorber for PTT and photosensitizer for PDT. Therefore, our study sought to load ICG and DTIC onto MSNs to assess the chemo-photothermal effects of this combined therapy. Moreover, the encapsulation of ICG in the MSNs could improve its stability and reduce its degradation rate. On the other hand, various imaging methods for physiological functions have attracted wide attention for guided therapy. [15][16][17][18] Among these, magnetic resonance imaging (MRI) has several advantages, including its excellent temporal resolution, contrast, sensitivity, and safety. Traditionally, the gadolinium-based MRI contrast agents applied in clinics feature relatively low proton relaxation efficiency even at high concentrations (eg, mM), thus resulting in a toxicity risk. 19,20 To address this problem, integration of metal ions, such as Mn, Zn, and Fe, in a nanoparticle system has recently been explored in vivo to enhance the quality of deep bioimaging. [21][22][23] Manganese ion-doped mesoporous silica nanoparticles (MSN(Mn)) can either be applied for drug delivery or for imaging guidance.
Therefore, our study sought to develop an effective platform based on MSN(Mn) that provides excellent magnetic resonance and photothermal dual-mode imaging performance, thereby constituting a promising chemo-photothermal therapy for melanomas (Scheme 1). ICG (an NIR dye) and DTIC (a chemotherapeutic agent) were successfully loaded onto MSN(Mn), after which the photothermal effect and imaging capability of this combined system were tested, and the combination therapy was verified in vitro and in vivo. Importantly, our system features two unique characteristics compared with other previously reported nanocarriers for melanomas: (1) the Mn-doped mesoporous silica exhibited a uniquely high r1 value, which was beyond our expectations; (2) the MSN(Mn)-ICG/DTIC platform exhibited a remarkable synergistic therapeutic effect compared with ICG or DTIC treatment alone. Guided by MRI and PI, a combined chemo-photothermal melanoma therapy was successfully achieved. Thus, our findings demonstrate the potential applicability of multifunctional drug delivery systems as effective drug delivery and imaging agents for the combined treatment of melanoma.
Materials and Methods
Materials
Streptomycin and the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay kit were received from Nanjing KeyGEN Biotech Co., Ltd. (Nanjing, China). The Annexin V-FITC and propidium iodide (PI) double-staining cell apoptosis kit was received from BD Pharmingen (CA, USA). All reagents were of analytical reagent grade or the highest purity available and were used directly without further purification. Deionized water of 18 MΩ cm was used throughout the experiments.
Animals
Female Balb/c nude mice aged 5-6 weeks were provided by Beijing Vitong Lihua Laboratory Animal Technology Co., Ltd. (Beijing, China) and housed under specific pathogen-free conditions at the Experimental Animal Center, Weifang Medical University (Weifang, China). All animal experiments were conducted in strict accordance with the Guide for the Care and Use of Laboratory Animals published by Weifang Medical University. The project was approved by the Animal Experimental Ethics Committee of Weifang Medical University. The treatment of experimental animals followed the 3Rs principle. All experiments followed the ethical principles of experimental animal welfare, and every effort was made to minimize suffering.
Cell Line and Cell Culture
The A375 human melanoma cell line was obtained from Procell Life Science & Technology Co. Ltd. (Wuhan, China) and cultured in DMEM supplemented with 10% fetal bovine serum, 100 U/mL streptomycin, and 100 U/mL penicillin. The cells were maintained at 37°C with 5% CO 2 in a humidified incubator. The medium was replaced every 2-3 days.
Loading and Release
Photosensitizer ICG Loading
ICG, a NIR dye, is an FDA-approved photothermal agent. To achieve ICG loading, different volumes of ICG solution
Photosensitizer ICG Release
The release behavior of ICG was evaluated in 2.0 mL PBS (pH 7.4). The obtained MSN(Mn)-ICG was shaken at 37 °C in the dark, and 0.5 mL of the supernatant was recovered at different time points (3, 6, 12, 24, 48, and 72 h) after centrifugation. The release efficiency of ICG was calculated by measuring the absorbance of the supernatant filtrate at 780 nm. The MSN(Mn)-ICG was then resuspended in 0.5 mL of fresh PBS buffer for further release tests.
DTIC Release
To study the DTIC release behavior, the MSN(Mn)-ICG/DTIC complexes were dispersed in 2.0 mL PBS at pH 7.4 and pH 5.5. The obtained complex was shaken at 37 °C in the dark, and 0.5 mL of the supernatant was withdrawn at different time points (3, 6, 12, 24, 48, and 72 h) after centrifugation. The absorbance of the supernatant was then measured at 323 nm with a spectrophotometer, after which the DTIC release efficiency was calculated. The MSN(Mn)-ICG/DTIC was then resuspended in 0.5 mL of fresh PBS buffer for further release tests.
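For readers who wish to reproduce this kind of calculation, the sketch below illustrates the cumulative-release bookkeeping implied by the sampling protocol: at each time point 0.5 mL of the 2.0 mL suspension is withdrawn, assayed, and replaced with fresh PBS, so the drug removed in earlier samples must be added back when computing cumulative release. The loaded amount and the supernatant concentrations (assumed to come from a separate absorbance calibration curve) are placeholder values for illustration, not measurements from this study.

```python
# Sketch of cumulative-release bookkeeping with sampling-and-replacement.
V_total, V_sample = 2.0, 0.5                      # release volume and sample volume (mL)
loaded_drug = 100.0                               # µg of drug initially loaded (assumed)
times_h = [3, 6, 12, 24, 48, 72]
conc_ug_per_ml = [2.0, 3.5, 5.0, 6.8, 8.0, 8.4]   # hypothetical supernatant concentrations

removed = 0.0                                     # drug taken out by previous samplings
for t, c in zip(times_h, conc_ug_per_ml):
    released = c * V_total + removed              # drug currently in vessel + already removed
    print(f"{t:3d} h: cumulative release = {100.0 * released / loaded_drug:5.1f}%")
    removed += c * V_sample                       # this sampling removes c * 0.5 mL of drug
```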
Photothermal Properties, Photodynamic Effect and Photothermal Imaging
Photothermal agents convert NIR light into heat energy, so it is crucial to study the NIR absorption properties of ICG and evaluate its photothermal effects. Specifically, our study characterized the photothermal heating curves of ICG. Aqueous solutions of MSN, MSN(Mn), and MSN(Mn)-ICG/DTIC (1.0 mg mL −1 ; pH 7.0) were placed in centrifuge tubes and analyzed with an infrared camera coupled with an 808-nm laser at 0.8 W/cm 2 . Equal volumes of PBS and ICG solution were used as controls. The temperature signals recorded at different time intervals (0, 1, 2, 3, 4, and 5 min) were analyzed with a photothermal imaging system. To verify the thermostability of ICG, MSN(Mn)-ICG/DTIC was irradiated for five cycles (15 min per cycle) with the aforementioned laser, after which the temperature was recorded.
An NIR laser with an 808 nm wavelength was used to drive singlet oxygen ( 1 O 2 ) generation through the photodynamic therapy (PDT) effect. The generation of 1 O 2 was detected by using DPBF as a 1 O 2 sensor. Two milliliters of MSN(Mn), ICG, or MSN(Mn)-ICG (60 μg mL −1 for MSN(Mn) and 1 μg mL −1 for ICG) in DMSO were mixed with 20 μL of DPBF (8 mM, DMSO), respectively. After stirring and irradiation with the NIR laser (808 nm; 0.8 W/cm 2 ) for 10 min, the UV-vis absorption spectra of these mixtures were recorded.
To evaluate the photothermal effect of MSN(Mn)-ICG/DTIC, tumors were exposed to laser radiation, after which photothermal images were obtained by recording the temperature variation of the tumor. MSN(Mn)-ICG/DTIC (10 mg/kg for DTIC, 2.3 mg/kg for ICG) was injected into the tumor sites of mice, followed by irradiation with the 808-nm laser at 0.8 W/cm 2 for different time intervals (0, 1,
MR Imaging
Different amounts of MSN(Mn) nanoparticles were dispersed in DI water to obtain different Mn 2+ concentrations (0.0225, 0.045, 0.09, 0.18, 0.36, and 0.72 mM). T1-weighted imaging of MSN(Mn)-ICG/DTIC suspensions was conducted under a 3.0 T MRI scanner (GE Signa, USA). Next, we explored the MRI performance of the drug delivery system in mice. Nude BALB/c mice with subcutaneous A375 melanoma xenografts were used as models, and an MSN(Mn)-ICG/DTIC suspension (2 mg·kg −1 ) was intravenously injected into the mice (n = 3). T1-weighted animal MRI was performed using an MRI scanner equipped with an animal coil. Images were collected before and after injection.
Cell Toxicity and in vitro Chemo-Phototherapy
The cell toxicity of MSN(Mn)-ICG was assessed by testing cell viability via the MTT assay. Various concentrations (0, 2.5, 5, 10, 20, 40, 80, and 160 μg mL −1 ) of nanomaterials were incubated with A375 and HaCaT cells for 24 h. Untreated cells served as the control group. Afterward, 10 μL of MTT solution was added to each well, followed by 100 μL of dimethyl sulfoxide after 4 h. The absorbance at 490 nm was measured with a microplate reader after shaking for 10 min. The absorbance of the plate itself was recorded as a blank to assess the survival rate of cells in each experimental group [Cell survival rate = (OD experimental group − OD blank)/(OD control group − OD blank) × 100%].
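The bracketed viability formula above can be applied directly to plate-reader output; the short sketch below shows the arithmetic with hypothetical OD values (the readings are invented, not data from this study).

```python
# Small sketch of the cell-viability calculation described above.
def survival_rate(od_exp, od_control, od_blank):
    """Cell survival rate (%) = (OD_exp - OD_blank) / (OD_control - OD_blank) x 100."""
    return (od_exp - od_blank) / (od_control - od_blank) * 100.0

# example: triplicate wells for one (hypothetical) nanoparticle concentration
od_exp, od_control, od_blank = [0.62, 0.58, 0.60], 0.85, 0.05
rates = [survival_rate(v, od_control, od_blank) for v in od_exp]
print(f"viability: {sum(rates) / len(rates):.1f}% (n = {len(rates)})")
```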
To investigate the chemo-phototherapeutic effects of MSN(Mn)-ICG/DTIC in vitro, the MTT assay was conducted in A375 cells. To achieve this, five groups were exposed to the following conditions: DTIC, DTIC + Laser, MSN(Mn)-ICG/DTIC, MSN(Mn)-ICG + Laser, and MSN(Mn)-ICG/DTIC + Laser. The irradiated groups were exposed to the NIR laser at different power levels (808 nm; 0.2, 0.4, 0.6, and 0.8 W/cm 2 ) for 20 min. Afterward, A375 cell viability was assessed via the above-described MTT method.
Combined Therapy for Cancer Treatment in vivo
To evaluate the efficacy of the combined therapy in vivo, male Balb/c nude mice bearing A375 melanoma tumors were used. Mice whose tumor volume reached approximately 50 mm 3 were divided into five groups (details provided below), with five mice per group, to assess the mean and standard deviation of the data.
Blank group (1): mice injected with PBS and exposed to the 808-nm laser (laser only). Control group (2). Groups (4) and (5) were irradiated with the 808-nm laser at a power density of 0.75 W/cm 2 for 10 minutes, and the irradiation was repeated once a day. After treatment, tumor sizes were monitored every day for 14 days, and relative tumor volumes were calculated as V/V 0 (V 0 is the initial tumor volume). Additionally, the body weights and tumor weights of the mice were also evaluated for 14 days. The tumors were then stained with hematoxylin and eosin (H&E) to further evaluate toxicity. Finally, the therapeutic effect was further verified via ultrasound analysis of tumor-bearing mice treated with MSN(Mn)-ICG/DTIC, with the same volume of PBS injected in the control group.
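As a small illustration of the tumor-volume normalization described above, the sketch below computes V/V 0 per animal and summarizes one group; all volumes are invented placeholder values, not measurements from this study.

```python
# Sketch of the relative tumor volume (V / V0) summary per treatment group.
import numpy as np

days = np.array([0, 2, 4, 6, 8, 10, 12, 14])
# rows = individual mice in one group; columns = measurement days (mm^3, hypothetical)
volumes = np.array([
    [50, 47, 40, 31, 22, 14,  8,  4],
    [52, 50, 45, 36, 25, 16, 10,  5],
    [49, 46, 41, 30, 21, 13,  7,  3],
], dtype=float)

relative = volumes / volumes[:, :1]               # V / V0, normalized per mouse
mean, sd = relative.mean(axis=0), relative.std(axis=0, ddof=1)
for d, m, s in zip(days, mean, sd):
    print(f"day {d:2d}: V/V0 = {m:.2f} ± {s:.2f}")
```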
Statistical Analysis
All data were statistically analyzed with SPSS 18.0 software. All values are presented as mean ± SD. Parametric analyses were performed by 2-tailed Student's t-test or one-way ANOVA, and nonparametric analyses were performed by the Mann-Whitney U-test. A P-value <0.05 was considered statistically significant in all cases (**p < 0.01; ***p < 0.001).
Results and Discussion
Characterizations
Our study developed an effective MRI- and PI-guided nanoplatform for melanoma treatment via chemo-photothermal combination therapy. Scheme 2 illustrates the fabrication procedure for the MSN(Mn)-ICG/DTIC nanocomplexes. First, MSN was synthesized via a typical sol-gel approach. Mn 2+ was then doped into the MSN framework under mild acidic and/or reducing conditions. 19 The obtained MSN(Mn) has a unique morphologic structure and contrast-enhanced T1-weighted MRI capability for drug loading and imaging. ICG and DTIC were then loaded onto the MSN(Mn) to further assess their applicability in chemo-photothermal combination therapy.
The as-synthesized MSN was 154.33±5.06 nm in size with a polydispersity index (PDI) of 0.28, whereas the MSN(Mn) NPs were 125.57±5.96 nm with a PDI of 0.20. Zeta (ζ) potential was also used to monitor the preparation. After doping with Mn, the ζ potential decreased from −9.36 ± 0.09 mV (MSN) to −16.19 ± 0.20 mV (MSN(Mn)) ( Figure S1). As shown in Figure 1A and B, the morphology of MSN and MSN(Mn) was analyzed via TEM. The as-synthesized MSN particles were characterized as spherical nanoparticles with a well-defined mesoporous structure ( Figure 1A). After hydrothermal treatment in an Mn precursor-containing solution, the MSN(Mn) surface became rough and exhibited large numbers of small nanobubbles, which provide more space for drug loading ( Figure 1B). The reason for the morphological changes of nanoparticles doped with manganese ions has been explored. [24][25][26][27] Active sites were generated on the surface of MSNs because a small amount of silica hydrolyzed to H 4 SiO 4 under hydrothermal conditions, followed by adsorption of Mn 2+ and carboxylate species via disodium maleate decomposition. The adsorption of Mn 2+ decreased the activation energy for the chemical decomposition of carboxylate, which decomposed into CO 2 and other gaseous species under hydrothermal treatment. In addition, the reaction between Mn 2+ and H 4 SiO 4 species resulted in Mn-doped silica deposition at the gas-liquid interface, forming solid nanospheres.
Then, the elemental distribution of MSN(Mn) was characterized via energy-dispersive spectrometer (EDS) elemental mapping ( Figure 1C ). As shown in the FTIR spectra in Figure 2A, the bands at around 1082 cm −1 , 795 cm −1 , and 463 cm −1 were attributed to the asymmetric stretching, symmetric stretching, and bending modes of Si-O-Si, respectively. 28 The peak at 960-970 cm −1 is generally attributed to the stretching vibrations of a SiO4 tetrahedron perturbed by the presence of a metallic group. 29 For MSN(Mn), however, the band at 960 cm −1 disappeared upon doping with Mn, which might suggest the incorporation of Mn into the silica framework (Mn-O-Si). 30 Moreover, XPS analysis of MSN(Mn) was also conducted to verify the valence state of Mn. As illustrated in Figure S2, our results validate the existence of the desired elements (ie, Si and O for MSN, and Si, O, and Mn for MSN(Mn)). Moreover, as shown in Figure 2B, the Mn 2p orbital signal exhibited two peaks. The peak at 641.48 eV is attributed to the Mn 2p3/2 orbital, 31 whereas the one at 658.08 eV represents the Mn 2p1/2 orbital. The main peak of Mn 2p3/2 could be divided into three characteristic peaks at 641.2 eV, 642.3 eV, and 643.5 eV, corresponding to Mn 2+ , Mn 3+ , and Mn 4+ , respectively. 32 Therefore, the XPS results confirmed that Mn was successfully doped into the MSN structure.
According to the N 2 adsorption-desorption isotherm and pore size distribution shown in Figure 2C and D, the as-synthesized MSN exhibited a well-defined mesoporous structure with two distinct pore sizes of 2.1 and 3.2 nm. After doping with Mn, MSN(Mn) still exhibited a well-defined mesoporous structure. However, the 2.1 nm pores disappeared and the main pore size increased to 3.3 nm, which was consistent with the TEM images. Interestingly, the surface area, pore volume, and average pore size of MSN(Mn) increased, as summarized in Table S1. These results strongly suggest the incorporation of Mn components into the silica framework.
Loading and Release
Different drug loading ratios (LRs), ie, the mass ratio of loaded ICG to MSN(Mn), can be easily obtained by mixing the components at different LRs and incubating the mixture in an aqueous solvent overnight. The loaded ICG is then steadily released from the complex. As shown in Figure 3A, the drug LR increases with the loaded amount of ICG. At an ICG concentration above 400 mg mL −1 , the drug LR reaches a maximum value of 34.25±2.20%. The cumulative release profiles of ICG from MSN(Mn) in PBS at 37 °C are presented in Figure 3B, showing that only a very small amount of the loaded ICG (ie, 8.27±1.44%) was released from MSN(Mn) after incubation with PBS buffer. However, this small loss would not affect the photothermal performance of ICG. Similarly, the loading efficiency of DTIC onto MSN(Mn)-ICG was assessed by mixing the components at different mass ratios and allowing them to react overnight. As shown in Figure 3C, the loading of DTIC markedly increased with higher DTIC concentrations. By means of electrostatic and hydrophobic interactions, DTIC can be loaded into the nanoparticles with a high loading capacity of up to 50.00±3.24% (DTIC/MSN(Mn)-ICG, w/w) at a mass ratio of 1:5. The loaded DTIC is then steadily released from the complex. Figure 3D illustrates the cumulative release profiles of DTIC from MSN(Mn) in PBS buffer with different pH values (7.4 and 5.5) at 37 °C. A portion of the loaded DTIC (32.68±2.10%) was released after incubation in PBS buffer for 48 h. However, the release behavior was barely affected by pH. As an insoluble drug, the dissolution of pure DTIC was also examined; 61.23±6.42% of DTIC dissolved after 48 hours. These findings indicate that DTIC was released from the MSN(Mn), thus highlighting its effectiveness as a drug delivery system.
Photothermal Properties, Photodynamic Effect and Photothermal Imaging
The temperature increase generated by NIR laser irradiation is the most important parameter for evaluating the photothermal effect of the multifunctional drug delivery system developed herein. In our system, the temperature of MSN(Mn)-ICG/DTIC quickly increased by 25 °C after laser irradiation for 5 min, whereas the temperature of nanoparticles without ICG increased by only 8 °C ( Figure 4A). This result confirms that the photothermal conversion efficiency of this drug delivery system was greatly enhanced by the inclusion of ICG. In contrast, the temperature of pure ICG began to decrease after 5 min, indicating an unstable photothermal effect. As shown in Figure 4C, the photothermal heating efficiency remained robust after five 15-min cycles of laser exposure. Therefore, our results demonstrated that ICG encapsulation in the drug delivery system could improve its stability and reduce its degradation rate.
Encouraged by the above-described photothermal effects, photothermal imaging was then conducted in vitro and in vivo. As shown in Figure 4B, the color of the photothermal images changed from black (corresponding to low temperature) to white (corresponding to high temperature), revealing a significant exposure time-dependent temperature rise. Under the same conditions, MSN(Mn)-ICG/DTIC also exhibited a brighter image, indicating a higher temperature compared with MSN and MSN(Mn). As shown in Figure 4D, after MSN(Mn)-ICG/DTIC was injected into the tumors, the tumor temperature rapidly increased by 25 °C within 5 min, which was high enough to kill the melanoma cells but produced little damage to the surrounding normal tissues. Moreover, mice injected with saline (control) showed no significant temperature change after irradiation, demonstrating that the photothermal heating was achieved by the MSN(Mn)-ICG/DTIC nanoparticles and not by the laser irradiation itself. This outstanding photothermal heating ability and stability highlighted the potential biomedical applicability of MSN(Mn)-ICG/DTIC as a PTT agent to kill cancer cells.
1,3-Diphenylisobenzofuran (DPBF) was selected as a probe to monitor the generated singlet oxygen and thereby investigate the photodynamic effects of MSN(Mn)-ICG. As shown in Figure S3, after 10 min of irradiation, the UV absorption peak of the ICG-loaded NPs decreased obviously compared with that of the blank nanoparticles, and the absorption intensity of DPBF gradually decreased with irradiation time.
MR Imaging
Mn 2+ has been reportedly applied to MR imaging as a safe contrast agent. As shown in Figure 5A, MSN(Mn)-ICG/DTIC shows a significant concentration-dependent brightening effect under T1-weighted MRI. The linear relationship is shown in Figure 5B, and the longitudinal relaxivity coefficient (r1) of MSN(Mn)-ICG/DTIC was calculated to be 14.33 mM −1 s −1 , which is four times higher than that of the commercially available agent Gd-DTPA (4.40 mM −1 s −1 ). 33,34 Therefore, our proposed system rendered superior MR contrast ability compared with other reported MRI agents, which are listed in Table 1. The excellent T1-weighted MRI effect observed herein may be due to the higher surface-to-volume ratio of MSN(Mn)-ICG/DTIC after doping, which increases the number of water molecules coordinated with the metal ions (Mn 2+ ) in the doped material. 35 Encouraged by the excellent MRI properties of MSN(Mn)-ICG/DTIC in vitro, its imaging capability was further evaluated in vivo. T1 images after intravenous injection of MSN(Mn)-ICG/DTIC in tumor-bearing mice are shown in Figure 5C. An obvious brightening effect was observed in the tumor site (marked by red circles) 4 h after injection compared with the pre-injection image. Our imaging results confirmed the specificity of the drug delivery system, as demonstrated by localized accumulation in the tumor sites. Based on these positive results, we concluded that MSN(Mn)-ICG/DTIC is an effective nanoplatform for MRI and thermal imaging-guided tumor treatment.
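Although the fitted r1 is reported directly, the underlying calculation is a linear regression of the longitudinal relaxation rate 1/T1 against Mn 2+ concentration, with the slope giving r1. The sketch below illustrates this with hypothetical T1 values; it does not reproduce the reported fit of 14.33 mM −1 s −1 .

```python
# Sketch of estimating the longitudinal relaxivity r1 from a T1 titration.
import numpy as np

conc_mM = np.array([0.0225, 0.045, 0.09, 0.18, 0.36, 0.72])   # Mn2+ concentrations used above
T1_s    = np.array([1.80, 1.25, 0.78, 0.44, 0.23, 0.12])      # hypothetical measured T1 (s)

R1 = 1.0 / T1_s                                               # longitudinal relaxation rates
slope, intercept = np.polyfit(conc_mM, R1, 1)                 # linear fit: R1 = r1 * C + R1(0)
print(f"r1 ≈ {slope:.2f} mM^-1 s^-1, baseline R1(0) ≈ {intercept:.2f} s^-1")
```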
Cell Toxicity and in vitro Chemo-Photothermal Therapy
To investigate the intracellular therapeutic effect of the MSN(Mn)-ICG/DTIC drug delivery system, MTT assays were conducted to test the viability of A375 cells and HaCaT cells after incubation with various concentrations of MSN(Mn)-ICG/DTIC. No obvious cytotoxicity was observed after incubating the A375 and HaCaT cells ( Figure 6A). The MSN(Mn)-ICG/DTIC + Laser group ( Figure 6B) exhibited the lowest A375 cell viability, which decreased further as the laser power density increased from 0.2 to 0.8 W/cm 2 , indicating a synergistic effect between chemotherapy and photothermal treatment. In contrast, the chemotherapy-only group (treated with free DTIC under the same conditions) exhibited low cytotoxicity, with no effect of laser exposure. Moreover, the photothermal effect of MSN(Mn)-ICG (ie, without chemotherapy) was also not as effective as the combined therapy. These results indicate that photothermal heating not only kills cancer cells but also enhances chemotherapy efficacy, which is consistent with previous studies. 36 Therefore, we conclude that the combined therapy based on MSN(Mn)-ICG/DTIC exhibits promising therapeutic effects for melanoma treatment in vivo.
To further confirm the effect of the synergistic group on the necrosis and apoptosis of A375 cells, the annexin V-FITC/PI apoptosis detection kit was used to assess the induction of apoptosis by the different treatments via FACSCalibur. The presence of necrotic (annexin V−, PI+) and apoptotic (annexin V+, PI±) cells was evaluated by flow cytometry, as shown in Figure 6C. The MSN(Mn)-ICG/DTIC nanoparticles plus laser formulation caused a greater degree of apoptosis than the control. A375 cells treated with the nanoparticles and laser exhibited an apoptosis ratio higher than 59% after incubation for 24 h. The increased apoptosis may be attributed to intracellular ROS production; the ROS was mainly generated by the ICG released from MSN(Mn)-ICG/DTIC within cells, as ICG is a photosensitizer used for both photothermal and photodynamic therapy of cancer. The apoptosis ratio was higher for the MSN(Mn)-ICG/DTIC + Laser group than for the DTIC or MSN(Mn)-ICG + Laser group, implying that the co-delivery nanoparticles produced a synergistic effect and promoted apoptosis in A375 cells, as reported previously. 37,38 This result was consistent with the MTT assay.
Combined Chemo-Photothermal Therapy in vivo
The in vivo antitumor efficacy of MSN(Mn)-ICG/DTIC was further investigated in melanoma tumor-bearing BALB/c mice. Mice with 50-mm 3 tumor volumes were divided into five groups as shown in Figure 7A. After treatment, the tumors were collected for further analysis on day 14, as shown in Figure 7B. The antitumor activity in mice was evident in terms of tumor size. The tumors in group 5 gradually shrank between days 2 and 14 at a faster rate than those of the other groups, demonstrating the outstanding antitumor activity of the proposed treatment. The tumor volumes in the blank group 1 increased throughout the entire experiment. The volume change curves of the tumors were measured once every 2 days during the 14-day period, as shown in Figure 7C. On day 14, the relative tumor volume of group 5 was only approximately 5.81%, compared with 52.47% for group 3 and 24.89% for group 4. These findings highlight the superiority of the combined therapy. Moreover, the tumor weight of the treatment group was approximately 22.92% (p<0.01) that of group 4 and 11.48% (p<0.001) that of group 5 ( Figure 7E), which also suggested that MSN(Mn)-ICG/DTIC had better therapeutic effects. At the same time, tumor growth inhibition (TGI) was calculated from tumor weight ( Figure 7F), and the TGI of MSN(Mn)-ICG/DTIC reached values as high as 94.19%, representing the highest melanoma inhibition effect.
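The TGI formula is not spelled out in the text; a commonly used definition based on mean tumor weights is sketched below, with invented weights, purely to illustrate the arithmetic.

```python
# Sketch of a weight-based tumor growth inhibition (TGI) calculation (assumed formula).
def tgi_percent(treated_weights, control_weights):
    """TGI (%) = (1 - mean treated weight / mean control weight) x 100."""
    mean = lambda xs: sum(xs) / len(xs)
    return (1.0 - mean(treated_weights) / mean(control_weights)) * 100.0

control = [1.10, 0.95, 1.20, 1.05, 1.00]   # g, hypothetical laser-only group
treated = [0.07, 0.05, 0.06, 0.08, 0.05]   # g, hypothetical MSN(Mn)-ICG/DTIC + Laser group
print(f"TGI = {tgi_percent(treated, control):.1f}%")
```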
Systemic toxicity was evaluated based on body weight fluctuations. As shown in Figure 7D, only small differences in body weight were observed between the treatment and control groups, indicating that there was no significant systemic toxicity. The results of hematoxylin and eosin (H&E) staining also confirmed the therapeutic effect of the combined treatment ( Figure 7G). The combination treatment group (MSN(Mn)-ICG/DTIC with laser irradiation) exhibited much greater damage to tumor cells, whereas the other four groups showed little to no damage to tumor cells, which exhibited normal membrane morphology and nuclear structures. These observations are in close agreement with previous results, 39 thus confirming the excellent efficacy of the combined MSN(Mn)-ICG/DTIC treatment for melanoma inhibition.
Finally, the ultrasound results in Figure 8 show that the tumor treated with PBS + Laser was much larger than the tumor treated with MSN(Mn)-ICG/DTIC + Laser, which was consistent with the above-described anticancer activities observed in vivo. These results further demonstrate that MSN(Mn)-ICG/DTIC exhibited a satisfactory therapeutic effect for melanoma treatment.
Conclusions
In summary, our study developed an innovative MSN(Mn)-ICG/DTIC-based strategy that combines MRI- and PI-guided chemo-photothermal therapy to treat melanoma. In this work, MSN(Mn) exhibited great biocompatibility and multifunctional loading properties. ICG, as a photothermal agent, was adsorbed onto MSN(Mn) to obtain ICG-loaded MSN(Mn), which showed greatly enhanced photothermal conversion efficiency compared with free MSN(Mn) nanoparticles, as well as better photostability compared with free ICG. At the same time, DTIC was adsorbed onto the MSN(Mn)-ICG nanoparticles for chemotherapy, yielding MSN(Mn)-ICG/DTIC, which exhibits a superior antitumor effect compared with either chemotherapy or photothermal therapy alone, suggesting a synergistic effect of the combined treatment. In addition, chelation with Mn 2+ ions further offered these nanoparticles great contrast in MR imaging. Therefore, guided by precise dual-modal imaging, the proposed MSN(Mn)-ICG/DTIC drug delivery system offers a promising platform for the development of efficient drug delivery vehicles for melanoma therapy. The importance of studying the overall biological effects of nanomedicines has been widely recognized in this field. At present, the main focus of preclinical research is to evaluate the therapeutic efficacy of nanomedicines for proof-of-concept. However, in-depth in vivo analysis, including long-term toxicity, cannot be overemphasized. Better in vitro settings for predicting the biological activity of nanomedicines and for selecting the best drug candidates for further in vivo studies would also help accelerate their clinical translation.
Ethics Statement
The A375 cells were purchased from Procell Life Science & Technology Co., Ltd. The animal protocol was conducted according to the Guideline for the Care and Use of Laboratory Animals (NIH publication 85-23), and was approved by the Experimental Animal Ethics Committee of the Weifang Medical University. | 2021-05-28T05:22:59.142Z | 2021-05-18T00:00:00.000 | {
"year": 2021,
"sha1": "9ae3b3daad704fd5e50de7186a276e20984d9456",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=69500",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ae3b3daad704fd5e50de7186a276e20984d9456",
"s2fieldsofstudy": [
"Biology",
"Materials Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245289444 | pes2o/s2orc | v3-fos-license | To Investigate the Effect of Glucocorticoids on Blood Loss during and after First TotalHipArthroplasty and Its SafetyMeta-Analysis
Objective. To evaluate the efficacy and safety of topical glucocorticoids for total hip arthroplasty by meta-analysis. Methods. A computerized search of the Cochrane Library, MEDLINE, EMBASE, and PubMed English databases, as well as the Chinese Biomedical Literature Database, VIP Chinese Science and Technology Journal Database, Wanfang database, and Chinese Knowledge Net Database, was performed to include all randomized controlled trials (RCTs) regarding topical glucocorticoid therapy for postoperative bleeding after THA according to the inclusion criteria. The quality evaluation criteria for RCTs, as stated in the Cochrane Handbook for Systematic Reviews of Interventions 4.2.5, were adopted for evaluation, and the meta-analysis was performed using RevMan 5.3. Results. A total of 10 articles were included, comprising 1,112 patients: 566 in the topical glucocorticoid group and 546 in the control group. The transfusion rate was 8.43% for topical glucocorticoids and 30.05% for the control group (P < 0.001), and topical glucocorticoids reduced total blood loss by 317.89 mL and invisible blood loss by 76.82 mL, with statistically significant differences (P < 0.001). Intraoperative blood loss was reduced by topical glucocorticoids, but the difference was not statistically significant (P = 0.83), and the postoperative HB value was increased by topical glucocorticoids, with a statistically significant difference (P < 0.001). The incidence of DVT and PE after topical glucocorticoid application (3.03%) was greater than that of the control group (2.40%), but the difference was not statistically significant (P = 0.54); the incidence of infection after topical glucocorticoid application (3.03%) was also greater than that of the control group (2.40%), and the difference was not statistically significant (P = 0.39). Conclusions. Topical glucocorticoids can reduce the transfusion rate and blood loss in THA patients without increasing their risk of thrombosis.
Introduction
Total hip arthroplasty (THA) is widely used in many hip diseases, but its blood loss tends to be large, often even requiring blood transfusion. Moreover, massive blood loss adversely affects patients and aggravates the burden on multiple organs of the body [1]. In addition, clinical transfusion may carry serious potential risks such as immune reactions, HIV and HBV transmission, and intravascular hemolysis [2]. Therefore, how to reduce the bleeding caused by THA surgery has become an increasing concern and an urgent problem for orthopedic surgeons. To date, several techniques to reduce bleeding have been applied in the clinic, including iron therapy, the application of EPO, controlled hypotension, autologous blood reinfusion, and the use of antifibrinolytic drugs [3]. Studies in the 1990s noted that glucocorticoids could effectively reduce surgical bleeding in THA [4]. Since then, much literature has studied the use of glucocorticoids in THA surgery. Most studies conclude that glucocorticoids can reduce surgical bleeding and blood transfusion in THA and do not increase the risk of DVT. However, there are reports that glucocorticoids are not effective in reducing surgical blood loss in THA. Whether glucocorticoids can effectively reduce bleeding and transfusion in THA surgery without increasing postoperative deep vein thrombosis and pulmonary embolism remains controversial. This systematic review and meta-analysis aimed to determine whether the use of glucocorticoids in THA surgery is effective in reducing blood loss and transfusion and whether it increases the risk of deep vein thrombosis, pulmonary embolism, and other complications.
Inclusion and Exclusion Criteria
Inclusion criteria were as follows: (1) types of studies: clinical randomized controlled trials (RCTs); (2) subjects: initial unilateral total hip arthroplasty with consistent baseline levels in both groups (demographic factors such as gender and age); (3) the test group received glucocorticoids intravenously or orally before, during, or after surgery; and (4) the evaluation indicators included the following: transfusion rate, invisible blood loss, postoperative drainage volume, total postoperative blood loss, hemoglobin (HB) drop value, Hb value at 24 h postoperation, incidence of postoperative deep vein thrombosis (DVT) and pulmonary embolism (PE), and incidence of wound infection.
Exclusion criteria were as follows: (1) nonrandomized controlled trials; (2) studies of unicondylar replacement, hip arthroscopy, or hip revision; (4) studies lacking the above evaluation indicators; (5) duplicate publications; and (6) reviews, meta-analyses, and studies from which data could not be extracted [10]. The methodological quality of the included studies was assessed using the criteria for the quality assessment of RCTs in the Cochrane Handbook for Systematic Reviews of Interventions 4.2.5: (1) whether the randomization method was correct; (2) whether blinding was employed and whether the choice of blinding was correct; (3) whether allocation was concealed; (4) whether there was loss to follow-up or withdrawal and, if so, whether an intention-to-treat analysis was used; and (5) whether the baseline was consistent. The literature was reviewed for the selection of randomization methods, whether blinding was used at the time of randomization, and whether there was loss to follow-up or withdrawal.
Evaluation Index Extraction and Statistical Analysis.
A table was developed to extract evaluation indicators, and two investigators extracted data from the literature in accordance with the table content. RevMan 5.3 software was used for statistical analysis. The amount of postoperative drainage, total blood loss, invisible blood loss, intraoperative blood loss, 24 h postoperative HB value, and HB reduction with topical glucocorticoids were expressed as weighted mean differences (WMD) with 95% confidence intervals (CI). The incidence of blood transfusion, incidence of DVT and PE, and incidence of infection were expressed using relative risk (RR) with 95% CI. Heterogeneity between studies was tested using the I 2 and χ 2 tests; if P > 0.1 or I 2 ≤ 50%, there was no significant heterogeneity and the data were pooled using a fixed-effects model (Mantel-Haenszel method). Heterogeneity was considered significant if P < 0.1 or I 2 > 50%; in that case, the source of heterogeneity was analyzed, subgroup analysis was performed where heterogeneity was present, and when heterogeneity could not be eliminated but clinical consistency was present, the data were pooled using a random-effects model (inverse-variance method) [11].
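As an illustration of the pooling procedure described above, the sketch below implements inverse-variance fixed-effect pooling of mean differences together with Cochran's Q and the I 2 heterogeneity statistics; the study-level values are invented for demonstration and are not the data extracted in this review. A random-effects analysis would additionally estimate a between-study variance (eg, by the DerSimonian-Laird method) and add it to each study's variance before weighting.

```python
# Sketch: inverse-variance fixed-effect meta-analysis of mean differences.
import numpy as np
from scipy import stats

# per-study mean difference (treatment - control) and its standard error (made up)
md = np.array([-320.0, -280.5, -350.2])      # e.g., total blood loss differences (mL)
se = np.array([  45.0,   60.3,   52.1])

w = 1.0 / se**2                               # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)           # pooled WMD
se_pooled = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

Q = np.sum(w * (md - pooled)**2)              # Cochran's Q statistic
df = len(md) - 1
p_het = 1.0 - stats.chi2.cdf(Q, df)
I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0   # I^2 (%): heterogeneity

z = pooled / se_pooled
p_effect = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
print(f"WMD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), "
      f"Q = {Q:.2f}, I^2 = {I2:.1f}%, P(effect) = {p_effect:.4f}, P(het) = {p_het:.3f}")
```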
Quantity and Quality Assessment of Retrieved Literature.
After the search, 290 articles were identified: 115 in English and 175 in Chinese. After duplicates were removed with EndNote software, a total of 120 articles were screened. Sixty-five articles were excluded by reading the titles and abstracts, and the remainder by reading the full text, resulting in a total of 10 RCTs after exclusion [12][13][14][15][16][17][18][19][20][21]. A total of 1,112 patients were included: 566 in the topical glucocorticoid group and 546 in the control group. The included studies were all published by 2013, the mean age of patients was 50.00-76.90 years, and the applied dose of glucocorticoids was 1-3 g. The main modes of application were injection through the drainage tube [18,21] and intraoperative infiltration of glucocorticoids [13,15,19]; the other was joint cavity injection [12,17,20]. Two studies did not use drainage tubes [12,17], 1 study did not mention whether a drain was used [12], and the remaining studies used a drain. The patients' general conditions are shown in Table 1.
All studies clearly described the random sequence generation method and the method of allocation concealment. Except for Yuan Xiaowei's study [21], all studies clearly described the blinding of participants and data statisticians. Likewise, incomplete outcome data were clearly addressed in all studies except the study by Yue [15]. Publication bias and other biases were accurately described in only a few studies. A detailed description of the quality review of the included literature is presented in Table 2.
Drainage Volume
The amount of drainage after THA was analyzed in six articles, with significant statistical heterogeneity between studies (χ 2 = 3106.36, I 2 = 93%, P < 0.001). Using a random-effects model, topical glucocorticoid application was found to reduce drainage volume by 140.21 mL compared with the control group, a statistically significant difference (WMD = −140.21, 95% CI: −189.19 to −91.22, P < 0.001); see Figure 3.
The HB Value 24 h after Operation and the HB Drop Value
Five articles analyzed the HB value at 24 h after operation, without statistical heterogeneity among studies (χ 2 = 4.09, I 2 = 2%, P = 0.39). Using the fixed-effects model, topical glucocorticoids increased the postoperative HB value, and the difference was statistically significant.
Comparison of the Incidence of DVT, PE, and Postoperative Infection
The incidence of postoperative DVT and PE was analyzed in eight articles, with no statistical heterogeneity between studies (χ 2 = 1.12, I 2 = 0%, P = 0.89). Using a fixed-effects model, the incidence of DVT and PE after topical glucocorticoid application (3.03%) was greater than that of the control group (2.40%), but the difference was not statistically significant (RR = 1.27, 95% CI: 0.59-2.74, P = 0.54); see Figure 5. Four articles analyzed the rate of postoperative infection in the two groups of patients; the incidence of postoperative infection after topical application of glucocorticoids (3.03%) was greater than that in the control group (2.40%), and the difference was not statistically significant (RR = 2.30, 95% CI: 0.34 to 15.38, P = 0.39); see Figure 5.
Hip Function Score.
Three studies analyzed the postoperative hip functional score: two studies analyzed the Harris score [22], and one study analyzed the Oxford hip functional score (OHS). For the Harris score, topical glucocorticoids could enhance hip function, and the difference was statistically significant (WMD = 1.12, 95% CI: 0.39-1.84, P = 0.003). For the OHS, there was no significant difference in the effect of topical glucocorticoids compared with the control group (WMD = −2.80, 95% CI: −7.04 to 1.44, P = 0.20); see Figure 6.
Subgroup Analysis.
Because of significant heterogeneity in total blood loss and drainage volume, and because the included studies used inconsistent doses of topically applied glucocorticoids, the data were analyzed according to total doses of 1 g, 2 g, and 3 g. Three publications used a total dose of 1 g [15,18,20], with statistical heterogeneity between studies (χ 2 = 7.05, I 2 = 72%, P = 0.03); in this subgroup, topical application of glucocorticoids reduced total blood loss by 367.71 mL. At a total dose of 3 g [12,14], there was no statistical heterogeneity between studies (χ 2 = −1.52, I 2 = 34%, P = 0.22), and topical glucocorticoids reduced total blood loss by 358.28 mL, with a statistically significant difference (WMD = −358.28, 95% CI: −26 to −286.31, P < 0.001).
Discussion
Accelerated recovery after surgery (ERAS) refers to the provision of highly efficient and high-quality medical care through a series of simple and effective perioperative management measures that reduce surgical traumatic stress reactions and postoperative complications, alleviate patient suffering, and shorten hospital stay [23,24]. Numerous studies have confirmed that ERAS is effective in reducing postoperative complications of joint arthroplasty, reducing the length of hospital stay, and improving patient satisfaction [25]. Moreover, one of the key points of ERAS application in joint arthroplasty is to reduce postoperative stress and inflammation [26]. The inflammatory response is one of the important mechanisms underlying the stress response after surgical trauma. It is closely related to postoperative pain, postoperative nausea and vomiting (PONV), fatigue, muscle strength loss, sleep disturbance, and so on. Therefore, alleviating the postoperative inflammatory response can help reduce postoperative complications, shorten hospital stay, and achieve the goal of accelerated rehabilitation after joint replacement [27]. Glucocorticoids, as commonly used and effective agents for inhibiting inflammatory responses, have been widely used in various types of surgery and are proven to have good effects [28,29]. In recent years, domestic and foreign scholars have conducted relevant studies on the systemic application of glucocorticoids in TKA, but high-quality relevant studies are few, the level of evidence is low, and the results are still controversial [30]. It is believed that systemic administration of glucocorticoids can effectively relieve pain, reduce PONV symptoms, and accelerate recovery in the perioperative period of total knee arthroplasty (TKA) [31]. There have also been studies with reservations regarding the efficacy of systemic glucocorticoid use in TKA [32].
THA has become a mature surgical technique in the clinic. However, it is a traumatic treatment, and postoperative fever, infection, deep vein thrombosis, prosthesis loosening, fracture, dislocation, limb malalignment, and heterotopic ossification often occur, especially fever, infection, and deep vein thrombosis [33]. In the early postoperative period, appropriate rehabilitation training is necessary to promote venous return of the affected limb, reduce swelling, prevent deep vein thrombosis of the lower extremity, reduce adhesion to surrounding tissues, increase the strength of surrounding muscle groups, enhance joint stability and the weight-bearing capacity of bone, shorten rehabilitation time, improve the functional status and quality of life of the limbs, and reduce the incidence of various complications [34]. However, patients often experience pain that affects normal rest, sleep, and diet, which can impair wound healing and postoperative functional exercises, severely affecting hip functional recovery and patient satisfaction [35]. Currently, adjuvant therapy with glucocorticoids (GC) is applied clinically to patients undergoing surgical procedures in the perioperative period to reduce pain and complications [36]. GC has many physiological and pharmacological effects, which mainly include the following: (1) markedly suppressing the inflammatory response induced by various factors, inhibiting the movement of inflammatory cells to the site of inflammation, stabilizing the lysosomal membrane, and reducing the release of inflammatory mediators such as serotonin, IL-1β, IL-6, TNF-α, and bradykinin, thereby alleviating symptoms such as redness, heat, and pain [37]; (2) moderating the body's response to endotoxin, inhibiting the hypothalamic response to pyrogens, inhibiting the production and release of leukocyte pyrogens, and decreasing the sensitivity of thermoregulatory centers to pyrogens, thus having an antipyretic effect in hyperthermia [38]; (3) an antishock effect and an increase in the body's stress capacity; and (4) inhibiting the proliferation of fibroblasts and reducing the production of collagen fibers. The results of the meta-analysis showed that topical glucocorticoids reduced the blood transfusion rate, total blood loss, and invisible blood loss; increased postoperative Hb; decreased the HB drop value; and enhanced hip function, without increasing the incidence of postoperative DVT or wound infection or changing the OHS score, and had no statistically significant effect on intraoperative blood loss. THA is associated with significant blood loss due to procedures such as intraoperative muscle transection and reaming, which can cause substantial overt blood loss. In contrast, invisible blood loss has been a focus of attention among orthopedic surgeons in recent years, with studies indicating that invisible blood loss accounts for 60% of total blood loss; the mechanisms proposed for invisible blood loss include hemolysis caused by intraoperative RBC rupture and the interstitial exudation theory [39], and studies suggest that a fibrinolytic activation period after 24 h of surgery further increases postoperative blood loss [40]. Glucocorticoids, as commonly used and effective agents for inhibiting inflammatory responses, have been widely used in various types of surgery and are proven to have good effects [41].
Although glucocorticoids theoretically do not increase the amount of fibrin, Li et al. [42,43] suggested that intravenous glucocorticoids inhibit fibrinolysis throughout the body, which, combined with the fact that hip replacement patients are mostly elderly, would increase the incidence of postoperative DVT and PE. Topical application concentrates the glucocorticoid around the joint cavity via various routes [44]. In this study, the transfusion rate with topical glucocorticoids was 8.43% compared with 30.05% in the control group (RR = 0.28, 95% CI: 0.20-0.39, P < 0.001). Wang [45] previously concluded, from a meta-analysis of randomized controlled trials and retrospective comparative studies, that topical glucocorticoid administration could reduce the transfusion rate without increasing the rates of DVT and PE; however, that analysis did not search the Chinese databases, included only 4 RCTs, and did not compare invisible blood loss or postoperative hip function scores. The strengths of this study are that only RCTs were included, non-RCTs were excluded, and invisible blood loss, the 24 h postoperative HB value, and hip function scores were compared, which strengthened the grade of evidence and further demonstrated that topical glucocorticoids can reduce perioperative total blood loss and invisible blood loss, increase the postoperative HB value, and enhance the Harris hip score.
Postoperative DVT and PE are among the more dangerous postoperative complications and threaten the life of patients once a PE forms [46]. The results of the eight included articles showed no significant difference in the incidence of DVT and PE after topical glucocorticoids, which suggests that topical glucocorticoids do not increase the incidence of postoperative thrombosis while reducing postoperative blood loss. These results are consistent with the study of [47], which compared seven series of hip replacement patients receiving topical glucocorticoids. Topical glucocorticoids can reduce postoperative blood loss without increasing the chance of postoperative thrombosis in patients with total hip replacement. Compared with intravenous application, topical application of glucocorticoids can reduce the systemic blood concentration and avoid the occurrence of thrombosis [48]. In conclusion, topical glucocorticoids can reduce postoperative blood loss and the rate of blood transfusion without increasing postoperative thrombosis in THA patients. However, because the various studies used different doses and different methods of topical application, future research should focus on the optimal dose of glucocorticoids and on the efficacy of topical versus intravenous glucocorticoids, to explore the most suitable volume and optimal application route.
Although long-term administration of glucocorticoids carries an increased risk of infection, gastrointestinal bleeding, and so on, short perioperative glucocorticoid administration is not associated with such risks [49]. A meta-analysis involving 71 studies confirmed that short perioperative glucocorticoid administration does not increase the risk of postoperative complications in patients [50]. Currently, although relevant studies on the systemic application of glucocorticoids in THA all agree that the glucocorticoid group has the same postoperative complication rate as the control group, some studies have found that local application of glucocorticoids in the hip joint cavity may increase the risk of postoperative infection [51]. Therefore, more attention is still needed before definitive conclusions can be drawn. This study has 3 main deficiencies: (1) the included literature has small sample sizes, which may bias the results and lead to uncertain conclusions [52]; (2) because statistical results for some indicators, such as hip function and thrombotic markers, were few, a meta-analysis could not be performed for them and only descriptive statistics were possible; and (3) the limited number of included studies was insufficient to perform subgroup analyses targeting high or low glucocorticoid doses. However, this systematic review and meta-analysis is the first to synthesize the relevant, high-quality literature to date, perform a normative meta-analysis according to PRISMA principles and processes, and comprehensively analyze the value of systemic glucocorticoids in THA accelerated rehabilitation. We believe that the results of this study will have important guiding value for clinical work [53].
The results of the present study suggest that topical glucocorticoids can reduce the transfusion rate and blood loss in THA patients without increasing their risk of thrombosis.
Data Availability
The original data could be obtained from the corresponding author.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2021-12-19T16:05:40.911Z | 2021-12-17T00:00:00.000 | {
"year": 2021,
"sha1": "4f36562057090992dfb1607c50d5883e1364db13",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jhe/2021/9681129.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42832c42c422430ac691a75cb4e2ca87899c299d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53155608 | pes2o/s2orc | v3-fos-license | Three-Dimensional Transport Model for Intravitreal and Suprachoroidal Drug Injection
Purpose Quantitative understanding of the transport of therapeutic macromolecules following intraocular injections is critical for the design of efficient strategies in treating eye diseases, such as neovascular (wet) age-related macular degeneration (AMD) and macular edema (ME). Antiangiogenic treatments, such as neutralizing antibodies against VEGF or recently characterized antiangiogenic peptides, have shown promise in slowing disease progression. Methods We developed a comprehensive three-dimensional (3D) transport model for intraocular injections using published data on drug distribution in rabbit eyes following intravitreal and suprachoroidal (SC) injection of sodium fluorescein (SF), bevacizumab, and ranibizumab. The model then was applied to evaluate the distribution of small molecules and antiangiogenic proteins following intravitreal and SC injections in human eyes. Results The model predicts that intravitreally administered molecules are substantially mixed within the vitreous following injection, and that the long-term behavior of the injected drug does not depend on the initial mixing. Ocular pharmacokinetics of different drugs is sensitive to different clearance mechanisms. Effective retinal drug delivery is impacted by RPE permeability. For VEGF antibody, intravitreal injection provides sustained delivery to the retina, whereas SC injection provides more efficient, but short-lived, retinal delivery for smaller-sized molecules. Long-term suppression of neovascularization through SC administration of antiangiogenic drugs necessitates frequent injection or sustained delivery, such as microparticle-based delivery of antiangiogenic peptides. Conclusions A comprehensive 3D model for intravitreal and SC drug injection is developed to provide a framework and platform for testing drug delivery routes and sustained delivery devices for new and existing drugs.
Neovascular or wet age-related macular degeneration (AMD) is one of the leading causes of irreversible blindness, affecting over 1.75 million individuals in the United States alone. 1 The hallmark of AMD is the degeneration of the retinal macula and retinal neovascularization. The fragile and leaking new vessels cause a buildup of blood and fluid in the retinal macula, scarring the macular tissues, resulting in a loss of central vision, and eventually leading to irreversible vision loss if left untreated. Studies of the pathophysiology of AMD have suggested that choroidal neovascularization is dynamically controlled by the balance of proangiogenic factors, such as vascular endothelial growth factor (VEGF), and antiangiogenic factors, such as pigment epithelium derived factor (PEDF), in the eye. 2 The upregulation of VEGF promotes a pathologic state of the retinal pigmented epithelium (RPE) that causes choroidal neovascularization. 3 Macular edema (ME), characterized by the buildup of fluid, usually is caused by increased vascular permeability and vascular leakage. A particular type of ME related to diabetes, called diabetic macular edema (DME), results from an increased VEGF level caused by the hypoxia response of cells. The current standard of care for AMD and ME mostly involves intravitreal injection of anti-VEGF and anti-permeability drugs that reduce vascular leakage and neovascularization. Therapeutic agents delivered to the eye through intravitreal injection normally require monthly or bimonthly treatment. Given that these frequent direct injections into the eye may cause discomfort and adverse effects, alternative strategies for delivering therapeutic drugs to the posterior segment of the eye have been studied and developed. The suprachoroidal space is a potential space between the sclera and choroid in the posterior segment of the eye. When injected, the suprachoroidal (SC) space becomes filled by the injected solution and opens up to approximately 200 to 300 μm in thickness. This technique is less invasive, better targeted to the layers in the back of the eye, and does not cause the injected solution to be diluted by vitreous humor. It provides a promising way of targeted delivery of antiangiogenic therapeutics. 4 The detailed mechanisms and characteristics of this technique, including the thickness and closure kinetics of the SC space following injection, and clearance routes have been studied thoroughly by Chiang et al. 5,6 This minimally-invasive method uses a microneedle to inject drugs into the SC space, from which the active agents diffuse into the surrounding tissues. Recently, SC injection has been studied extensively as a method of sustained delivery of drugs, such as triamcinolone acetonide (TA). 7,8 Anti-VEGF therapy for AMD and ME uses intraocular injection of the VEGF-neutralizing antibodies ranibizumab and bevacizumab, or the fusion VEGF receptor aflibercept, which have been shown to provide substantial benefits to patients; however, in a significant percentage of patients these therapeutic interventions are unable to eliminate neovascularization and edema, suggesting that other pathways and factors might be involved. 9,10 Antiangiogenic short peptides are emerging as new promising agents for the treatment of AMD and ME. Classes of antiangiogenic peptides have been discovered; 11 among them, a serpin-derived peptide has been shown to be a promising potential therapeutic along with its biodegradable polymeric microparticle-based delivery system.
12 Recently, a collagen IV-derived peptide has been demonstrated in several animal models to significantly reduce neovascularization and vascular leakage. 13 To reduce the frequency of injections and to prolong the action of the antiangiogenic peptides, peptide-containing microparticles that slowly release peptide over an extended period of time can be injected to provide sustained inhibition of neovascularization and vascular leakage. Effective antiangiogenic agents including small molecule tyrosine kinase inhibitors, peptides, antibodies, siRNAs, or genes can be intraocularly delivered as treatment. Considering they can vary significantly in transport properties, characterization of ocular transport of different molecules is especially important to evaluate the efficacy of drug delivery strategies.
Several computational models have been developed for different drug delivery techniques to the posterior segment of the eye, including systemic delivery, intravitreal injection, and ocular implants. Balachandran and Barocas 14 used a finite element method (FEM) to simulate the diffusion, convection, and active transport through the diffusion barriers of drugs delivered from a systemic source. Jooybar et al. 15 developed a similar model with detailed geometry using FEM-based COMSOL Multiphysics for ocular drug transport following intravitreal injection and ocular implants. Other models focused on in silico investigations of the effectiveness of different kinds of ocular implants. Kavousanakis et al. 16 simulated the delivery of an anti-VEGF fragment antibody to the posterior segment of the eye using a polymer gel implant. The pharmacokinetic model developed by Kotha et al. 17 studied a polymer patch-like implant placed on the sclera, and the three-dimensional (3D) model developed by Park et al. 18 simulated an implant drug release profile, both using FEM similar to the previous models. Other studies focused on using computational models in combination with experiments to determine the transport properties of individual components of the eye. For example, Haghjou et al. 19 investigated the outward permeability of different ophthalmic drugs in the retina-choroid-sclera region of the eye, and Ranta et al. 20 analyzed the effect of the diffusion barriers on the pharmacokinetics of these ophthalmic drugs. A more recent model developed by Hutton-Smith et al. 21 used a three-compartment model to assess the distribution of intravitreally delivered drugs across the retina, vitreous chamber, and anterior chamber. Despite a plethora of computational models dedicated to studying the ocular pharmacokinetics of drugs of different classes, there remains a lack of broad comparisons between computational models and experimental data. 22 Recent developments in novel drug delivery techniques, such as SC injection, also warrant anatomically-detailed models of ocular pharmacokinetics validated by experimental data.
We developed a physiologically-based computational 3D model for ocular diffusion of therapeutic drugs following injection into the vitreous cavity and SC space of the eye, and performed simulations using COMSOL Multiphysics software. The model was validated with previously published experimental fluorophotometry data from rabbit eyes following intraocular delivery of molecules of different molecular size, including sodium fluorescein (SF), bevacizumab, 4 and ranibizumab. 23 The model was then used to predict the transport of the small molecule SF, as well as of larger antiangiogenic VEGF antibodies, following intraocular injection into the human eye. Our model was able to characterize the ocular transport of different therapeutic agents across diffusion and permeability barriers in the posterior segment of the eye, and to compute the local concentration in each layer of the eye to predict the efficacy of different ocular drug delivery strategies for a variety of therapeutic agents.
Model Geometry
A 3D model detailing anatomical structures of the vitreous and posterior segment of the eye was developed using COMSOL Multiphysics Modeling software (version 5.3; COMSOL, Inc., Burlington, MA, USA). The geometry of the model is shown in Figure 1a. The shape of the eye is assumed to be approximately spherical. This study focused on the transport of the injected species in the posterior segment of the eye only. In this model, the posterior segment was divided into sclera (S), SC space, choroid (C), retina (R), and vitreous (V). The RPE is modeled as a thin layer located between the C and R layers of the eye. The inner limiting membrane (ILM) as a thin layer between R and V separates V from the rest of the posterior segment. Each layer of the posterior segment is represented as a spherical shell. The model geometry and finite-element meshing of the geometry are shown in Figure 2.
Governing Equations
In the model, the five regions of the posterior segment of the eye (S, SC, C, R, and V) have different transport properties, as illustrated in Figure 1b. The RPE between C and R, and the ILM between R and V, also are modeled as thin layers that have transport properties different from those of their adjacent layers. SC is a fluid-filled space created by the injection in the potential space between the sclera and choroid. An aqueous solution containing an experimental or therapeutic molecule, or biodegradable particles containing therapeutic molecules, is injected into the vitreous or SC space. The general 3D concentration distribution of the molecule is described by the convection-diffusion equation with first-order clearance, which in its standard form reads φ_j ∂c_j/∂t = ∇·(φ_j D_j ∇c_j) − ∇·(v c_j) − K_cl,j φ_j c_j (Equation 1), where j denotes the region in the eye, c_j is the interstitial concentration, φ_j is the void fraction (the fraction of the volume containing the interstitial fluid where the molecules can diffuse freely, as introduced previously 24), D_j is the diffusivity, v is the convective velocity field, and K_cl,j is the clearance rate. Convection in the back of the eye is driven by the difference in pressure between the hyaloid membrane, anterior to the vitreous humor, and the episcleral vein, posterior to the sclera. Convective flow driven by the pressure gradient is modeled as flow through a porous, incompressible medium using Darcy's law, v = −K_l ∇P (Equation 2), as in the computational models developed by Balachandran and Barocas 14 and Missel, 25 where K_l is the hydraulic permeability of the material and −∇P is the pressure gradient; the velocity field v is proportional to the pressure gradient. Assuming the fluid is incompressible, ∇·v = 0, the pressure can then be computed by solving the partial differential equation ∇·(K_l ∇P) = 0 (Equation 3), and the velocity field is then calculated from Equation 2. The RPE is known to actively transport molecules such as fluorescein. 26 Active transport is modeled by a constant, radially outward convective field in the RPE layer. The rate of active transport of fluorescein is adapted from the model developed by Balachandran and Barocas. 14 No active transport is assumed for antiangiogenic proteins.
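As an illustration of how such a layered transport model can be discretized, the following sketch advances a one-dimensional diffusion-clearance equation across a stack of tissue layers with a flux loss at the outer scleral surface. This is a simplified Python sketch with placeholder layer properties, not the 3D COMSOL finite-element model used in the study; convection and active transport are omitted.

import numpy as np

# One-dimensional sketch of the layered diffusion/clearance model. Depth x runs
# from the middle of the vitreous (x = 0) to the outer scleral surface.
# All parameter values below are illustrative placeholders.
layers = [  # (name, thickness_cm, void_fraction, diffusivity_cm2_s, clearance_1_s)
    ("vitreous", 0.700, 1.00, 6.0e-6, 1e-6),
    ("retina",   0.025, 0.50, 2.0e-6, 0.0),
    ("choroid",  0.025, 0.30, 2.0e-6, 5e-5),
    ("sclera",   0.050, 0.40, 1.0e-6, 0.0),
]
dx, k_s = 0.005, 1.0e-5          # grid spacing (cm); episcleral loss rate (cm/s)

# Build per-node property arrays from the layer table.
phi, D, k_cl = [], [], []
for _, thickness, p, d, k in layers:
    n = int(round(thickness / dx))
    phi += [p] * n; D += [d] * n; k_cl += [k] * n
phi, D, k_cl = map(np.array, (phi, D, k_cl))
N = phi.size

c = np.zeros(N)
c[:40] = 1.0                      # initial condition: drug mixed into part of the vitreous

dt = 0.2 * dx**2 / D.max()        # explicit Euler step well inside the stability limit
for _ in range(int(6 * 3600 / dt)):          # simulate 6 hours
    flux = np.zeros(N + 1)        # diffusive flux at cell faces
    face_D = 0.5 * (phi[:-1] * D[:-1] + phi[1:] * D[1:])   # arithmetic face average of phi*D
    flux[1:N] = -face_D * (c[1:] - c[:-1]) / dx
    flux[0] = 0.0                 # zero flux at the inner boundary
    flux[N] = k_s * c[-1]         # outward loss -k_s*c at the outer scleral surface
    c += dt * (-(flux[1:] - flux[:-1]) / (dx * phi) - k_cl * c)

print(f"fraction of initial mass remaining after 6 h: {np.sum(phi * c) / 40.0:.2f}")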
Clearance Mechanisms
Intraocularly delivered drug clears from the eye through anterior and posterior routes. In anterior clearance, drug is cleared from the vitreous humor through permeation into the anterior chamber across the hyaloid membrane. The existence of certain enzymes also suggests that a small amount of enzymatic degradation can take place within the vitreous. 22 In posterior clearance, drug is cleared through the choroidal vasculature and the episcleral vein. Anterior clearance and loss to the choroidal vasculature are modeled as first-order clearance according to the pharmacokinetic model developed by Hutton-Smith et al. 21 Clearance through the episcleral vein is modeled with a constant-flux boundary condition at the outer surface of the sclera, following the anatomically-detailed finite element models developed by Balachandran and Barocas 14 and Missel. 25
Boundary Conditions and Initial Conditions
Flux balances and concentration continuities are applied at all internal boundaries separating adjacent layers, ensuring that mass balance is maintained for transport across these boundaries. At the outer boundary of the sclera, a flux of −k_s·c is applied to model the loss of drug to the episcleral vein. Zero-flux conditions are applied at all other exterior boundaries.
The injection into the SC space is assumed to be instantaneously mixed within the SC region and is modeled by specifying an initial concentration c_0,SC in the SC region. Intravitreal injection is modeled with the assumption that, immediately after injection, the injected solution is partially mixed into a subvolume of vitreal fluid and settles at the bottom of the eye due to its higher specific gravity (Campochiaro PA, unpublished observations). Sensitivity to the value of the mixed subvolume is presented below, and this parameter is shown not to be important except for short times after injection.
Parameter Estimation
All parameters used in the model for rabbit and human eyes are presented in Supplementary Table S1. Scleral permeability in rabbit eyes has been shown in in vitro experiments to follow an exponential dependence on the molecular radius of the permeating molecule. 27 Least-squares regression was performed on the values reported by that study and used to predict the permeability for SF, 40 kDa Dextran, 250 kDa Dextran, ranibizumab, and bevacizumab. Diffusion coefficients for these molecules are then estimated by multiplying the predicted permeability values by the scleral thickness.
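As a minimal sketch of this fitting step, the code below performs a least-squares fit of ln(permeability) against molecular radius and converts the predicted permeability into an effective scleral diffusion coefficient. The calibration values, molecular radii, and scleral thickness are illustrative assumptions, not the measurements of the cited study.

import numpy as np

# Hypothetical calibration data: molecular radius (nm) vs. measured scleral
# permeability (cm/s); these stand in for the in vitro values of the cited study.
radius_nm = np.array([0.5, 1.4, 2.3, 4.5, 6.4])
perm_cm_s = np.array([2e-5, 8e-6, 3e-6, 6e-7, 1e-7])

# Exponential model P = A * exp(-b * r)  =>  ln P = ln A - b * r,
# fitted by ordinary least squares on the log-transformed permeabilities.
slope, lnA = np.polyfit(radius_nm, np.log(perm_cm_s), 1)

def predict_permeability(r_nm: float) -> float:
    """Predict scleral permeability (cm/s) for a molecule of radius r_nm."""
    return float(np.exp(lnA + slope * r_nm))

# Assumed Stokes-Einstein radii (nm) for the molecules of interest.
molecules = {"SF": 0.5, "40 kDa Dextran": 4.5, "250 kDa Dextran": 9.0,
             "ranibizumab": 2.6, "bevacizumab": 4.6}

scleral_thickness_cm = 0.04  # assumed rabbit scleral thickness (~400 um)

for name, r in molecules.items():
    P = predict_permeability(r)
    D = P * scleral_thickness_cm  # effective diffusivity, cm^2/s
    print(f"{name}: P = {P:.2e} cm/s, D = {D:.2e} cm^2/s")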
Diffusion coefficients for the other layers of the eye are then estimated from an empirical relation between diffusivity and void fraction, 28 in which φ_j and D_j are the void (interstitial volume) fraction and diffusion coefficient of layer j, and φ_s and D_s are the void fraction and diffusion coefficient of the sclera. Transport through the RPE is characterized by the permeability across the thin RPE layer. It has been shown that the RPE has lower permeability for molecules of larger size, and that RPE permeability decreases exponentially with respect to the molecular radius. 29 Literature values for the in vitro permeabilities of carboxyfluorescein, 4 kDa FITC-Dextran, 10 kDa FITC-Dextran, 20 kDa FITC-Dextran, 40 kDa FITC-Dextran, and 80 kDa FITC-Dextran are fitted exponentially to their Stokes-Einstein radii. The baseline RPE permeability of SF is estimated using this exponential fit and the molecular radius. It has been shown that the ILM permits molecules smaller than 70 kDa to diffuse across it. 30 In the present model, the ILM is not considered a resistive barrier for small molecules such as SF; for large molecules, permeability across the ILM is assumed to be lower than that of its adjacent layers. Estimates of the permeabilities of ranibizumab and bevacizumab across the ILM and RPE are adapted from an experimentally-validated compartmental model developed by Hutton-Smith et al. 21 Geometric parameters, including the thicknesses and void fractions of layers in human eyes, are adapted from Mac Gabhann et al. 24 Dimensions of rabbit eyes, the thickness of the RPE, and the thickness of the SC space after injection into the SC space are obtained from experimental studies and optical images of the posterior segment of rabbit eyes. 4,31 The thickness of the ILM is adapted from experimental studies on ILM morphometry. 32 Parameters characterizing convective flow within the eye, including hydraulic resistivities and the pressures at the hyaloid membrane and sclera, are adapted from the study of Balachandran and Barocas. 14 The pressure at the hyaloid membrane, the intraocular pressure (IOP), is assumed to rise to 30 mm Hg immediately after injection and to decrease stepwise to 15 mm Hg by 30 minutes after injection. 33
Initial Distribution of Drug Immediately Following Intraocular Injection
The initial condition is given by specifying the initial distribution of the drug immediately following intraocular injection. For intravitreal injection, the injected fluid is assumed to partially mix within the vitreous and to be significantly diluted. Intravitreally administered solutions of fluorescently-labeled ranibizumab have a higher specific gravity than the mostly aqueous vitreous humor, which has a density similar to that of water. 34 Therefore, owing to the higher specific gravity of the injected fluid, it is further assumed that the mixture settles at the bottom of the vitreous humor; however, we showed that this assumption only affects the concentration distribution at early times, with the long-term behavior depending only on the amount of injected species. Note that the hydrodynamics of mixing for fluids with different viscosities and specific gravities relevant to the problem of intravitreal injection is a complex phenomenon and, to our knowledge, has not been investigated; however, relevant studies of a fluid jet impinging on a fluid reservoir suggest that the mixing effect is significant. 35,36 The degree of initial mixing that occurs following intravitreal injection is explored in more detail in the Results section. We assume a similar initial distribution for SF solution. For SC delivery, the injected fluid is assumed to immediately and homogeneously fill the SC space following injection.
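As a simple worked illustration of this initial condition (the specific mixing volume is examined in the Results; the numbers here are only for orientation): if a 50 μL injection of drug at concentration c_inj is assumed to mix into a subvolume of 400 μL at the bottom of the vitreous, conservation of the injected mass gives an initial interstitial concentration in that subvolume of c_0 = c_inj × (50/400) ≈ 0.13 c_inj, with zero initial concentration elsewhere in the vitreous.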
Details of the numerical solution of the above transport equations are presented in Supplementary Materials.
Intravitreally Administered Solution is Mixed Significantly in the Vitreous
The degree to which an intravitreally administered solution containing SF or fluorescently-labeled ranibizumab is mixed within the vitreous, as well as the long-term impact of the initial distribution, was explored in silico by varying the volume that the solution initially occupies immediately after injection.
For intravitreal injection of 50 μL of aqueous solution containing ranibizumab, the solution was assumed to mix with the vitreous humor to form a mixture that settles at the bottom of the vitreous, as discussed above. To investigate the degree of mixing, the mixture was assumed to occupy 50 μL (the same as the injected volume), 250 μL (5 times the injected volume), or 400 μL (8 times the injected volume) in the initial condition. As shown in Figure 3, the concentration distributions at different time points following injection computed with these different initial conditions were compared to previously reported experimental fluorophotometry data. 23 The simulation results showed that the assumption of the mixture occupying 400 μL was most consistent with the experimental data. In Figure 3C, the concentration along the visual axis shows the greatest difference due to mixing 1 day after injection. Four days after injection and later, the concentration profiles from different degrees of mixing were within 5% of their average, showing that the concentration distribution of the injected species was affected only at early times and that the long-term behavior depended only on the amount of drug injected. As shown in Figure 3D, the pharmacokinetic profile showing the average concentration of ranibizumab in the vitreous following injection was also independent of the mixing.
Similar results were obtained from the simulation of intravitreal injection of SF. Simulation results with the assumption of the injected mixture occupying 400 μL were most consistent with experimental data (Supplementary Fig. S1).
Fitting of Clearance Rates to Experimental Data
The clearance rates of intraocularly-administered SF, ranibizumab, and bevacizumab were fitted to experimentally measured pharmacokinetic profiles. 4,23 To fit the model to the pharmacokinetic profiles, model sensitivity to the parameters characterizing drug clearance (the clearance rates in choroid and vitreous, k_cl,c and k_cl,v, as well as the scleral loss rate k_s) following intraocular injection was assessed. For intravitreally delivered sodium fluorescein and ranibizumab, the vitreous concentration following injection was sensitive only to the clearance rate in the vitreous (Supplementary Figs. S2A, S2C). The concentration of suprachoroidally injected sodium fluorescein in the SC space was sensitive to the scleral loss rate k_s (Supplementary Fig. S2B), while the concentration of suprachoroidally injected bevacizumab in the SC space was sensitive to the choroidal clearance rate k_cl,c (Supplementary Fig. S2D). The sensitive parameters were then fitted to the experimental data. Representative pharmacokinetic profiles using different values of the clearance parameters for SF transport following intravitreal and SC injection, ranibizumab transport following intravitreal injection, and bevacizumab transport following SC injection, along with experimental data, are shown in Figures 4e to 4h. The fitted clearance rates were then used for 3D simulation of transport following intraocular injection in human eyes.
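The fitting procedure itself can be illustrated with a much simpler model than the 3D simulation: the sketch below fits a clearance rate to an illustrative vitreous concentration time course using a mono-exponential washout. The data values and the exponential model are assumptions for illustration only, not the study's measurements or its transport model.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative (not measured) pharmacokinetic data: time (days) and mean
# vitreous concentration (ug/mL) after an intravitreal injection.
t_days = np.array([0.5, 1, 2, 4, 7, 10, 14])
c_obs  = np.array([95.0, 80.0, 58.0, 31.0, 12.0, 5.0, 1.8])

def vitreous_pk(t, c0, k_cl_v):
    """Mono-exponential washout: c(t) = c0 * exp(-k_cl_v * t)."""
    return c0 * np.exp(-k_cl_v * t)

(c0_fit, k_fit), cov = curve_fit(vitreous_pk, t_days, c_obs, p0=(100.0, 0.3))
half_life = np.log(2) / k_fit
print(f"fitted clearance rate: {k_fit:.3f} 1/day (half-life {half_life:.1f} days)")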
Intraocular Administration of Sodium Fluorescein in Rabbit Eyes: Comparison With Experimental Data
Concentration of SF along the visual axis as predicted by the model was compared to the published fluorophotometry measurements for validation 4 (Figs. 4a, 4b). 3D concentration distributions of SF at different time points following injection are shown in Supplementary Figure S3. The experiment and simulation show that SF cleared from the eye faster following SC injection. Following intravitreal injection, concentration of SF reached 10% of peak concentration approximately 7 hours after injection, and 90% cleared out 6 hours after injection, whereas following SC injection, SF reached 20% of the peak concentration approximately an hour after injection and 95% cleared out approximately 6 hours after injection, much faster than intravitreally administered SF.
Intraocular Administration of Antiangiogenic Proteins
Simulations of ranibizumab transport following intravitreal injection and of bevacizumab transport following SC injection in rabbit eyes were performed using the model. Concentration profiles along the visual axis at different time points following injection were compared to fluorophotometry scans for validation (Figs. 4c, 4d). 23 3D concentration distributions of ranibizumab and bevacizumab at different time points following injection are shown in Figure 5. The simulation and experiment showed that ranibizumab was distributed homogeneously within the vitreous approximately two days following injection and that 90% of ranibizumab cleared from the vitreous 14 days following injection. For SC injection of bevacizumab (Figs. 4d, 4h), bevacizumab was concentrated mostly near the SC space, and 90% of bevacizumab cleared out 12 hours after injection.
Simulation of Intraocular Administration of Sodium Fluorescein and Antiangiogenic Proteins Into Human Eyes
The model was adjusted to human eye dimensions to predict the transport of SF in human eyes following intravitreal and SC administration. Transport of molecules of similar size to SF, such as small molecules or short therapeutic peptides, following intraocular injection should be similar to that of SF, unless their distributions are affected by binding to their target receptors; simulations for therapeutic peptides using similar models will be presented elsewhere. Predicted concentration distributions of SF in human eyes following intravitreal injection and SC injection are presented in Figure 6. For intravitreally administered SF (Figs. 6a-6c), the injected solution was significantly diluted in the vitreous. Retinal concentration was similar to the vitreal concentration. Choroidal and scleral concentrations were lower than the concentration in the retina. Vitreal concentration of SF was at 20% of its peak value approximately 6 hours after injection, and 90% of the injected SF had cleared from human eyes approximately 10 hours after injection. For SC injection of SF (Figs. 6d-6f), the injected solution was held in the SC space between sclera and choroid. The fluid was not diluted, and the concentration of SF in the SC space was similar to the concentration of the injected fluid. The choroidal concentration was similar to the concentration in the SC space. Retinal concentration closely followed SC concentration starting from approximately 2 hours after injection. Significantly less SF diffused into the vitreous. Scleral concentration of SF was at approximately 20% of its maximum value 24 hours after injection.
The model also was used to predict the transport of anti-VEGF antibodies following intraocular administration, including intravitreally-administered ranibizumab and suprachoroidally-administered bevacizumab in human eyes. Concentration distributions in human eyes following intraocular injection are shown in Figure 7. Transport of ranibizumab and bevacizumab represented the transport of similarly-sized antiangiogenic proteins following intraocular administration. As with SF, intravitreally administered ranibizumab was significantly diluted in the vitreous following intravitreal injection. As shown in Figure 7c, retinal concentration was similar to vitreal concentration. The scleral and choroidal concentrations were significantly lower. Suprachoroidally-injected bevacizumab was not diluted and directly targeted the chorioretinal tissues. Following SC administration (Fig. 7d), bevacizumab concentrations in the SC space and choroid were at the same level, and retinal concentration was significantly lower due to the permeation-limiting RPE. It should be noted that, while the level of drug persisted for over 2 weeks after intravitreal administration, after SC administration the drug persisted on the order of a day, which would necessitate either frequent injections, which are not practical, or the use of sustained delivery, such as microparticle-based delivery.
Parameter Sensitivity for RPE and ILM Permeability
Retinal delivery of antiangiogenic proteins is affected by the RPE, the diffusion barrier between choroid and retina, and by the ILM, the diffusion barrier between retina and vitreous. Permeability of molecules across these barriers can vary between individuals, and some diseases cause structural changes of the RPE and ILM, thereby varying their transport properties, as discussed below. Due to the nature of the fluorophotometry experiments, the data used for model validation were not spatially resolved enough to accurately assess the permeability across these thin diffusion barriers. Therefore, the uncertainty due to RPE and ILM permeability was explored by comparing the results obtained from varying the RPE and ILM permeabilities from their baseline values (Fig. 8).
For intravitreal injection, ILM resists drug entrance into the retina from the vitreous, while RPE keeps drug from leaving the retina to the choroid where it is lost to circulation. Vitreal concentration is not sensitive to RPE and ILM permeability (Figs. 8a, 8c). Retinal concentration is very sensitive to RPE permeability, but not to ILM permeability (Figs. 8b, 8d). A 5-fold increase in RPE permeability causes peak retinal concentration to decrease by more than 60%.
For SC delivery, RPE is the diffusion barrier that keeps injected molecules from entering the retina. Figures 8e to 8h show the SC and retinal concentrations following SC administration of bevacizumab using different RPE and ILM permeability values. Retinal concentration of bevacizumab showed high sensitivity to RPE permeability (Fig. 8f) and low sensitivity to ILM permeability (Fig. 8h). A 5-fold increase in RPE permeability resulted in an approximately 6-fold increase in peak retinal concentration.
Intravitreally Administered Solutions are Partially Mixed Within the Vitreous Humor Immediately After Injection
For intravitreal injection of a solution containing a given substance, assuming the injected fluid is less viscous than the vitreous humor (the relative viscosity of vitreous humor has been reported as 1.59) 37 and that the injection of approximately 50 μL of liquid is performed using a 28-gauge (G) or similar needle within one second, the Reynolds number is estimated to be approximately 350. Based on experimental studies of a submerged fluid jet injected into a reservoir of the same or different viscosity, 35,36 significant mixing would occur during the injection process. The viscosity of the human vitreous depends on many factors, such as age and disease. The relative viscosity of human vitreous varies from approximately 1 at birth to over 2 in later age. 37 Vitreous viscosity also can be affected by conditions such as myopia, and by ocular procedures such as cataract extraction. 37,38 Additionally, a different injection speed or needle size would affect the Reynolds number of the injected solution and result in a different initial distribution of the drug immediately after injection. Therefore, significant mixing of the injected solution and vitreous humor would occur immediately following intravitreal injection. Mixing causes the injected solution to be diluted and to occupy 2 to 10 times the injected volume within the vitreous. The degree to which the mixing occurs is likely to vary among individuals for all of the aforementioned reasons.
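For orientation, this estimate can be reproduced with a short calculation. The needle inner diameter (about 0.18 mm for a 28G needle) and the water-like fluid properties assumed below are not stated in the text and are only assumptions for this rough check.

import math

# Rough check of the quoted Re ~ 350 for a 50 uL injection over 1 second.
volume_m3  = 50e-9        # 50 uL injected
duration_s = 1.0          # injection time
d_m        = 0.18e-3      # assumed inner diameter of a 28G needle
rho        = 1000.0       # kg/m^3, water-like density of the injected solution
mu         = 1.0e-3       # Pa s, water-like viscosity of the injected solution

area = math.pi * (d_m / 2) ** 2           # needle cross-sectional area
velocity = (volume_m3 / duration_s) / area
reynolds = rho * velocity * d_m / mu
print(f"jet velocity ~ {velocity:.2f} m/s, Re ~ {reynolds:.0f}")  # ~ 350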
However, as demonstrated in the Results, the long-term distribution of drug concentration after the administered drug has reached homogeneity within the vitreous is less impacted by the initial distribution, and therefore, would less likely be affected by this individual variability and would show more consistency among individuals.
Routes of Clearance Following Intraocular Injection Depend on Method of Delivery and Molecule Size
The sensitivity to clearance parameters demonstrated in Supplementary Figure S2 showed that for intravitreally delivered drugs, the pharmacokinetic profiles of the drug in vitreous and retina are most sensitive to the clearance rate in the vitreous. For suprachoroidally-delivered drugs, however, choroidal clearance and episcleral clearance can both play a role. For small molecules, such as fluorescein, the concentration profile is most sensitive to clearance from the episcleral vein. For large molecules, such as bevacizumab, the concentration profile in the SC space and retina is insensitive to the episcleral clearance rate but sensitive to the choroidal clearance rate. This suggests that molecular size and delivery method can affect the clearance routes. Small molecules that diffuse more readily to the episcleral surface will be cleared into the circulation through the episcleral vein. Large molecules will be cleared primarily through the choroidal vasculature. Design of an optimal drug delivery strategy must consider the contributions of different clearance routes and any conditions that might affect them (for example, high IOP can limit clearance through the anterior routes and high vein pressure can limit posterior clearance).
Effectiveness of Retinal Delivery of Drugs can be Affected by RPE and ILM Permeability
The model demonstrated that the effectiveness of retinal delivery of large molecules is sensitive to RPE and ILM permeability. Retinal delivery via the intravitreal route was affected mostly by RPE permeability, and retinal delivery via the SC route was affected by the RPE and ILM. RPE permeability is known to have high individual variability due to age and disease. Thus, diseases that cause structural changes to the RPE, such as diabetic retinopathy, would alter the effectiveness of retinal delivery through SC or intravitreal injection. For individuals with diseases that cause RPE barrier permeability to be significantly higher, for example, targeted delivery of drug to the retina would be more easily achieved through SC injection because the RPE no longer limits the permeation of drug from choroid to retina.
In vivo fluorophotometry is a powerful tool for studying the pharmacokinetics of intraocularly administered drugs. However, its limited spatial resolution (0.25-0.5 mm) and the fact that fluorescence signals are spatially convoluted 39 limit its accuracy in evaluating the concentration gradient along the visual axis; fluorophotometry data are therefore not sufficient for quantitative studies of the transport properties of the RPE as a diffusion barrier.
Drug Delivery to the Retina Following SC Injection is Sensitive to Permeability Across the RPE
Although the concentration in the SC space following SC injection of bevacizumab is sensitive to the choroidal clearance rate and largely unaffected by permeability across the diffusion barriers (Supplementary Fig. S4A), the retinal concentration is far more sensitive to permeability across the RPE than to the clearance parameters. The model predicted that a 5-fold decrease in RPE permeability prevents most of the drug from entering the retina following SC injection of bevacizumab (Supplementary Fig. S4B).
Molecules of Smaller Size Can Permeate Through the Diffusion Barrier and Reach Retina More Easily
In the simulations and experiments on intraocular drug delivery in human and rabbit eyes, SF and antiangiogenic proteins represent molecules of small and large hydraulic radii, respectively. The effectiveness of delivery of antiangiogenic agents to the retina through intraocular injection is limited by the existence of permeation-limiting barriers, such as the RPE and ILM. For transscleral administration of antiangiogenic drugs, the molecules must pass through an additional barrier, the episcleral boundary (ESB). Although the transport properties of the ESB as a diffusion barrier are not as well understood and characterized, all the diffusion barriers, ESB, RPE, and ILM, are likely to limit the diffusion of larger molecules. The model predicts that SF reaches the retina more easily than large antiangiogenic proteins, such as ranibizumab, because the permeation-limiting effect is significantly lower for smaller compared to larger molecules. Thus, for effective retinal delivery of antiangiogenic agents for suppression of angiogenesis, drugs of smaller molecular radius, such as short peptides with antiangiogenic properties, are more desirable due to their better permeation through the diffusion barriers. Drugs of smaller molecular radius, however, have a shorter half-life compared to large antiangiogenic proteins like ranibizumab and bevacizumab. Therefore, achieving sustained suppression of neovascularization would necessitate some form of sustained delivery, such as a microparticle-based delivery system that encapsulates nanoparticles formulated with a therapeutic peptide; 12 in addition, a therapeutic peptide has been shown to form a natural depot upon injection that is then slowly released. 12,13
Intravitreal Injection of Antiangiogenic Proteins Provides More Sustained Suppression of Angiogenesis
Intravitreally administered molecules remain largely inside the vitreous humor following injection. The benefit of intravitreal compared to SC injection for delivery of antiangiogenic proteins is more sustained suppression of angiogenesis. For intravitreal injection of ranibizumab in rabbit eyes, for example, ranibizumab is present for as long as 14 days following injection. Drugs administered through the SC space show faster clearance following injection; 90% of suprachoroidally administered bevacizumab cleared within 6 hours of injection. This limitation can be overcome by using some form of sustained delivery, such as ocular implants, microparticle-based drug delivery systems, or depot formation by some antiangiogenic peptides.
SC Administration Provides More Efficient Retinal Delivery for Smaller Molecules and Clearance is Faster Following Injection
Intravitreally administered drug formulations are significantly mixed and diluted inside the vitreous humor. This means that, to achieve a similar concentration, a larger dose would be needed for intravitreal administration compared to SC injection. Moreover, the invasiveness of intravitreal injection makes this intraocular drug delivery route less desirable; intravitreal injection is associated with rare complications, such as retinal detachment and cataract. SC injection, as a less invasive alternative to intravitreal injection, uses a microneedle to inject directly between the sclera and choroid in the back of the eye, avoiding injection directly into the eye altogether. As discussed previously, during SC injection the SC space opens up gradually while becoming filled with the injected solution, which means that almost no dilution occurs during SC administration of drugs. The SC space simply holds the injected fluid and serves as a reservoir for diffusion of drug into the choroid and retina. Following SC delivery, as predicted by the model, the drug concentrates near the SC space and stays mostly near the scleral, choroidal, and retinal tissues. For large molecules, such as many antiangiogenic proteins, including bevacizumab and ranibizumab, the effectiveness of retinal delivery through SC injection is limited by the permeability-limiting RPE between choroid and retina, which keeps large molecules from entering the retina from the choroid. This could be overcome by using smaller molecules, such as short antiangiogenic peptides, as small molecules can move much more freely across the RPE. We propose that a combination of the SC injection technique to reduce invasiveness, drugs of small molecular size to better permeate the RPE, and sustained delivery systems, such as natural depot formation or biodegradable microparticles, would achieve sustained suppression of neovascularization with benefits to patients with wet AMD and ME.
Limitations of the Model
The model does not consider binding of the drugs to tissues within the eye. Although the permeability values used in the model were measured in ocular tissue, drugs can have different binding kinetics in vivo, especially in pathologic conditions, where binding can have a major effect on diffusion of the drug and permeation of drug across barriers. In addition, the closure kinetics of the SC space following SC injection, and the associated clearance kinetics and clearance routes, have been studied; the SC space is observed to close within an hour after injection. 5,6 If the injected fluid in the SC space were squeezed out to the periphery as the SC space closes, the transit of drug concentration in the retina would be further accelerated as a result.
CONCLUSIONS
We developed a physiologically-based, anatomically-correct 3D transport model for intraocular drug delivery. To the best of our knowledge, our model is the first to use spatial concentration distributions from published in vivo fluorophotometry data for validation for a small molecule and a protein drug following intravitreal and SC injections. This provides more confidence in predicting local concentration distributions, which is relevant in many applications, including drug delivery to the SC space, placement of ocular implants, and design of optimal strategies for sustained delivery. Our model was fitted to experimental pharmacokinetic data for intravitreally- and suprachoroidally-delivered sodium fluorescein and therapeutic antibodies, and validated with in vivo fluorophotometry data. The model was able to predict, in rabbit and human eyes, the 3D distribution of drugs of different size following intraocular administration.
The model suggested that the initial mixing of the intravitreally-injected drug caused by the injection has little effect on long-term drug distribution. Distinct clearance mechanisms differentially impact the clearance of drugs of various sizes administered through different routes, and the permeabilities of different diffusion barriers will differentially impact the effectiveness of intravitreal and SC drug delivery to the retina. The model provides a framework and platform for testing new drugs and sustained delivery devices, as well as regimens for existing ones.
2025054 | pes2o/s2orc | v3-fos-license | Metal-Macrofauna Interactions Determine Microbial Community Structure and Function in Copper Contaminated Sediments
Copper is essential for healthy cellular functioning, but this heavy metal quickly becomes toxic when supply exceeds demand. Marine sediments receive widespread and increasing levels of copper contamination from antifouling paints owing to the 2008 global ban of organotin-based products. The toxicity of copper will increase in the coming years as seawater pH decreases and temperature increases. We used a factorial mesocosm experiment to investigate how increasing sediment copper concentrations and the presence of a cosmopolitan bioturbating amphipod, Corophium volutator, affected a range of ecosystem functions in a soft sediment microbial community. The effects of copper on benthic nutrient release, bacterial biomass, microbial community structure and the isotopic composition of individual microbial membrane [phospholipid] fatty acids (PLFAs) all differed in the presence of C. volutator. Our data consistently demonstrate that copper contamination of global waterways will have pervasive effects on the metabolic functioning of benthic communities that cannot be predicted from copper concentrations alone; impacts will depend upon the resident macrofauna and their capacity for bioturbation. This finding poses a major challenge for those attempting to manage the impacts of copper contamination on ecosystem services, e.g. carbon and nutrient cycling, across different habitats. Our work also highlights the paucity of information on the processes that result in isotopic fractionation in natural marine microbial communities. We conclude that the assimilative capacity of benthic microbes will become progressively impaired as copper concentrations increase. These effects will, to an extent, be mitigated by the presence of bioturbating animals and possibly other processes that increase the influx of oxygenated seawater into the sediments. Our findings support the move towards an ecosystem approach for environmental management.
Introduction
Trace levels of copper are essential for the healthy functioning of organisms owing to its central role in a range of enzymes [1]. However, this heavy metal is well known for its toxicity and the biocidal properties of copper have been exploited by mankind for centuries. Copper in the form of Cu₂O is now the dominant active ingredient found in antifouling paints applied to marine vessels and other permanently submerged structures such as fish farm cages [2,3] due to the global ban of organotin-based compounds in 2008. Cu²⁺ ions slowly leach from the paint and particulate copper is further released to the environment in flakes of paint produced during the periodic cleaning and maintenance of antifoulant-coated structures [2,4,5]. Moving ships are estimated to leach ∼100 tonnes of copper into the Greater North Sea each year, a value that does not include the considerable losses occurring in harbours and marinas [6]. Copper ions have a strong affinity for binding with particulate matter [7], which ultimately carries this heavy metal to the seafloor. Relatively low biological demands for copper and the typically reducing nature of marine sediments result in the accumulation of this element at the seabed [8], with concentrations in ship recycling zones, beneath fish farms and near boat yards reaching up to 703, 805 and 2230 mg Cu [kg dry sediment]⁻¹ respectively [9,10,11].
The global growth in demand for sea freight, which increased from 28,723 to 40,891 billion ton-miles between 2000 and 2010 [12], is causing concentrations of copper entering the marine environment to rise. Marine aquaculture activities are also increasing rapidly and represent another growing input of copper into coastal benthic ecosystems, through the use of copper-enriched feeds and antifouling products [9,13]. Increased inputs of copper to marine ecosystems are occurring in concert with ocean warming and acidification. These processes are both expected to increase the bioavailability and hence toxicity of copper, potentially by ≥100% over the next 100 years [14,15].
Copper is known to affect the biomass and metabolic activities of sediment-associated bacteria [3,16], driving changes in their community structure [17,18]. It also affects the activity and survival of marine metazoan fauna [19], impacting upon their capacity for bioturbation [20]. Bioturbation is the process by which faunal movements increase the flow of oxygenated water into the seabed, and is well known to influence sediment microbial community structure and nutrient effluxes [21,22,23]. Macrofaunal movements also increase the transport of heavy metals into sediments [24] and mobilize copper owing to the enhanced supply of oxidizing solutes [8] and organic copper-complexing compounds [25]. Previous mesocosm studies investigating the effects of copper on marine benthic communities have reported direct effects [26] and interactions with trophic complexity and nutrient availability [27,28,29]. An equally complex variety of responses to copper contamination are reported in multi-species aquatic microcosm experiments [30]. We hypothesized that copper contamination and bioturbation interactively affect the composition and metabolic functioning of soft sediment microbial communities. Our factorial mesocosm experiment focussed on the cosmopolitan bioturbating amphipod Corophium volutator because a) its ventilatory and sediment-reworking activities can significantly affect carbon and nitrogen cycling in marine sediments [31,32] and b) long-term exposure to even low concentrations of copper is expected to negatively affect their population density [33]. We employed a combination of phospholipid fatty acid (PLFA) analyses and compound-specific isotope ratio mass spectrometry (IRMS) to examine the microbial response in our experiments. This combination of techniques enables the relative structure and metabolic functioning of extant microbial groups to be examined. The δ¹³C signature of individual fatty acids can provide information on the balance between catabolism and anabolism of particular PLFAs, carbon isotope fractionation, and also the identity of substrates used for biosynthesis [34,35,36].
Study Location and Sediment Preparation
Experimental animals and sediments were collected at low tide from the mudflats in the lower reach of the Ythan Estuary, Aberdeenshire, NE Scotland, UK (57° 20.085′ N, 02° 0.206′ W) on 12/10/2009. All necessary permissions for work on the Ythan and Forvie National Nature Reserve were obtained from Scottish Natural Heritage. No protected or endangered species were involved in our experiments. The marine amphipod, C. volutator, was removed from the upper 3 cm of sediment by gentle sieving (1 mm) and acclimated to laboratory conditions in fresh, aerated seawater for 24 hrs prior to experimentation. Bulk sediments from the upper 3 cm were gathered by hand and subsequently sieved (0.5 mm) to remove macrofauna and large organic debris. The resulting homogenised sediments contained 1.37±0.01% w/w organic carbon, δ¹³C −22.28±0.04‰, where errors are ±1 SE and n = 5. Copper concentrations in these sediments range between 1.9-4.5 mg Cu [kg dry sediment]⁻¹ [37]. Copper treatment levels (Tables 1 and S1), chosen to span the range of concentrations present in the natural environment [9,10,11], were established by thoroughly homogenizing a saline solution of copper (II) sulphate pentahydrate into known quantities of pre-sieved sediment [19]. This compound was chosen because the majority of antifouling paints use Cu₂O and the main biocidal species obtained from this compound in the presence of O₂ is Cu²⁺ [2]. All of the seawater used was pumped from the estuary at high tide (33 ppt), UV-sterilized and 10 μm filtered prior to use.
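The spiking calculation behind such treatments is simple stoichiometry. The sketch below (illustrative only; the actual treatment levels are listed in Tables 1 and S1) converts target copper concentrations into the mass of copper(II) sulphate pentahydrate to homogenize into each kilogram of wet sediment.

# Molar masses (g/mol)
M_CU = 63.55            # copper
M_CUSO4_5H2O = 249.69   # copper(II) sulphate pentahydrate

def cuso4_needed(target_mg_cu_per_kg: float, sediment_kg: float) -> float:
    """Mass (mg) of CuSO4.5H2O to add to reach the target Cu concentration."""
    return target_mg_cu_per_kg * sediment_kg * (M_CUSO4_5H2O / M_CU)

# Example: nominal levels spanning those reported in the results
for target in (30.2, 91, 302):  # mg Cu per kg wet sediment
    print(f"{target} mg Cu/kg -> {cuso4_needed(target, 1.0):.0f} mg CuSO4.5H2O per kg sediment")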
Mesocosm Experiments
A total of 60 mesocosms were assembled to examine how increasing concentrations of copper and the presence of C. volutator affected nutrient release from the sediments and benthic microbial community structure. Individual mesocosms consisted of clear cylindrical cores (300 mm high, 100 mm internal diameter) fitted with removable acetyl baseplates. An 8 cm thick layer of sieved sediment at the required copper concentration was carefully introduced into each core and subsequently submerged beneath a 20 cm column (∼1.5 L) of UV-sterilized and 10 μm filtered seawater. Thirty healthy C. volutator (≥4 mm body length) were introduced to replicate (n = 5) mesocosms at each treatment level. This is equivalent to 3820 individuals m⁻², a density chosen to be equivalent to or lower than that found at the sampling location. C. volutator are deposit- and episammic feeders in the absence of suspended particulate matter [38]. The organic carbon content of the sediments was greatly in excess of the respiratory demands of these animals over the experimental duration (Text S1). The remaining mesocosms at each concentration of copper (n = 5) were incubated without C. volutator. All experimental units were incubated at 15°C with a 12 h light:dark cycle and were continuously aerated throughout the 10 day experimental period to ensure that the water was saturated with oxygen. Cores were examined daily and any dead animals on the sediment surface were removed via a glass tube. However, in many cases dead animals were lost from the experiment through decomposition beneath the sediment surface. Water samples to determine nutrient concentrations were collected at the end of the experiment. The remaining overlying water was then carefully removed from each core and sediment samples from the upper 1 cm were collected for subsequent analysis of the phospholipid fatty acid (PLFA) content. All nutrient and PLFA samples were stored frozen (−20°C) prior to analysis. Surviving animals were retrieved by sieving the experimental sediments.
Analytical Techniques
Concentrations of dissolved NH₄⁺-N, NOₓ-N and PO₄³⁻-P (collectively 'nutrients' hereafter) were determined with a modular flow injection auto-analyser (FIA Star 5010 series) using an artificial seawater carrier solution. Sediment organic carbon isotopic composition was determined on pre-acidified samples using a Flash EA 1112 Series Elemental Analyser connected via a Conflo III to a Delta Plus XP IRMS (Thermo Finnigan, Bremen, Germany). Purified PLFAs were extracted from freeze-dried sediment samples and derivatized to yield fatty acid methyl esters (FAMEs) [39,40]. The concentrations and carbon isotope ratios of individual FAMEs were measured using a GC Trace Ultra coupled to an isotope ratio mass spectrometer [41]. Bacterial biomass was calculated from concentrations of the biomarker PLFAs i15:0, ai15:0 and i16:0, assuming these represent 10% of total bacterial PLFAs and 0.056 gC PLFA/gC biomass [42]. Carbon isotope ratios of individual PLFAs were calculated with respect to Vienna-PDB (δ¹³C V-PDB) through the use of a CO₂ reference gas injected before every sample and traceable to International Atomic Energy Agency reference material NBS 19 TS-Limestone. Repeated analysis over a two month period of the δ¹³C value of a C19 FAME internal standard gave a standard error of 0.26‰ (n = 18). PLFA-derived data relate only to the extant microbial community, as PLFAs in non-living biomass undergo rapid environmental degradation.
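To make the biomass conversion concrete, the following sketch applies the two stated conversion factors to illustrative biomarker PLFA concentrations; the input values are placeholders, not measurements from this study.

# Illustrative conversion of biomarker PLFA concentrations to bacterial biomass
# carbon, following the stated assumptions: i15:0, ai15:0 and i16:0 together
# represent 10% of total bacterial PLFA carbon, and PLFAs account for 0.056 gC
# PLFA per gC of bacterial biomass.
biomarker_ugC_per_g = {"i15:0": 0.40, "ai15:0": 0.35, "i16:0": 0.25}  # ugC PLFA / g dry sediment

total_biomarker_c = sum(biomarker_ugC_per_g.values())     # ugC biomarker PLFA / g
total_bacterial_plfa_c = total_biomarker_c / 0.10         # biomarkers = 10% of bacterial PLFA C
bacterial_biomass_c = total_bacterial_plfa_c / 0.056      # 0.056 gC PLFA per gC biomass

print(f"bacterial biomass: {bacterial_biomass_c:.1f} ugC per g dry sediment")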
Data Analysis
Data exploration was undertaken to identify outliers and instances of collinearity [43]. The effect of copper concentration on the survival of C. volutator was examined using linear regression. Proportional survival data are bounded by 0 and 1 and were therefore arcsine square-root transformed prior to analysis. The median concentration of copper that caused 50% mortality of C. volutator was estimated using the trimmed Spearman-Karber method, using software supplied by the U.S. Environmental Protection Agency [44]. Partial linear regression analysis of each nutrient dataset was undertaken to examine the relative importance of copper concentration and the transformed proportion of surviving C. volutator. In all 3 models, ≥38.4% of the explained variance was attributable to copper concentration and ≤1.1% was solely attributable to C. volutator (Table S2). Subsequent analysis of the three nutrient datasets using generalized least squares (GLS) regression included copper concentration as a nominal variable, to allow for non-linear effects, and C. volutator as a binary variable (present/absent), to avoid collinearity issues with copper level. Bacterial biomass data were analysed similarly. All GLS regression models were subjected to a hierarchical backwards selection procedure using likelihood ratio (L. Ratio) tests to remove non-significant terms. Full details of this procedure and subsequent model validation are presented elsewhere [13,45,46]. Redundancy analysis (RDA) was used to investigate how the proportional abundance and delta values of individual PLFAs were influenced by nominal copper concentration, the presence/absence of C. volutator and the interaction between these variables. The significance of individual model terms was determined using a permuted (n = 9999) forwards selection procedure analogous to that employed in the CANOCO software [47]. All statistical analyses were conducted in the 'R v2.11.1' programming environment [48] using the 'nlme' [49] and 'vegan' [50] packages.
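The survival analysis can be illustrated with a minimal sketch of the transformation and regression steps. The data below are placeholders, and an ordinary least-squares fit stands in for the GLS and trimmed Spearman-Karber analyses actually run in R.

import numpy as np
from scipy import stats

# Placeholder data: nominal copper level (mg Cu per kg wet sediment) and the mean
# proportion of C. volutator surviving after 10 days; values are illustrative only.
copper = np.array([0.0, 10.0, 30.2, 91.0, 170.0, 302.0])
survival = np.array([0.93, 0.90, 0.88, 0.55, 0.30, 0.10])

# Arcsine square-root transformation for proportion data bounded by 0 and 1.
transformed = np.arcsin(np.sqrt(survival))

# Simple linear regression of transformed survival on copper concentration.
fit = stats.linregress(copper, transformed)
print(f"slope = {fit.slope:.4f} per mg Cu, p = {fit.pvalue:.3g}, r^2 = {fit.rvalue**2:.2f}")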
Copper Effects on Survival of C. volutator
Copper concentration had a significant, negative effect on the survival of C. volutator (F = 22.01, df = 5,24, p < 0.001; Figure 1a; Table 1), although differences between copper concentrations of 0 (control) and 30.2 mg Cu [kg wet sediment]⁻¹ were not significant. This dose-response relationship was similar to that reported previously [19], demonstrating that the concentrations of copper in the experimental sediments and overlying waters were effectively the same as those measured previously (Table S1).
Copper and C. volutator Effects on Nutrient Concentrations, Bacterial Biomass and Sediment δ¹³C
It was necessary to account for the different levels of variance (heteroscedasticity) observed across the copper treatments by including copper concentration as a variance covariate in all of the analyses (p ≤ 0.027 in all cases; Table 2). Bacterial biomass and all nutrient concentrations at the end of the 10-day experiment were affected by significant copper × C. volutator interactions (p ≤ 0.011 in all cases; Table 2; Figure 1b-e). Bacterial biomass clearly declined in response to the copper additions, but remained higher in the presence of C. volutator at concentrations between 30 and 302 mg Cu [kg wet sediment]⁻¹ (Figure 1b). Concentrations of NH₄⁺-N in all copper-spiked mesocosms were above those in the controls. They reached a maximum at 91 mg Cu [kg wet sed]⁻¹ (Figure 1c) and declined thereafter. The net release of NH₄⁺-N after 10 days was greater in the presence of C. volutator at all but the highest concentration of copper. All levels of copper contamination, excluding the lowest treatment, resulted in concentrations of NOₓ-N being lower than those in the controls (Figure 1d). The presence of C. volutator resulted in lower mean concentrations of NOₓ-N in all copper treatments, with the relative difference being greatest at 302 mg Cu [kg wet sed]⁻¹.
In the absence of C. volutator, there was a positive association between copper concentrations and the net accumulation of PO₄³⁻-P in the overlying water (Figure 1e). When C. volutator was present, concentrations of PO₄³⁻-P also increased across the control and two lowest concentrations of copper and then declined rapidly as copper contamination increased further.
Copper and C. volutator Effects on Microbial PLFAs
Absolute concentrations of individual PLFAs, their relative composition (mol %) and isotopic signatures (δ¹³C) are presented in Figures S1, S2 and S3, respectively. In order of importance, copper concentration, the presence of C. volutator and an interaction between these variables all significantly (p < 0.001) increased the amount of variance explained in the percentage (mol %) PLFA data (Table S4). The amount of variation purely attributable to copper and C. volutator was 42% and 12%, respectively. A total of 67% of the variation in the data was explained by all the explanatory variables and 49% of all variation was explained by the first two axes (Table S5). The resulting RDA triplot (Figure 2) visualises the additive and interactive effects of copper and C. volutator on the relative abundance of PLFAs in the sediments. The PLFA signature in the control treatments (black symbols) was distinct from all others and did not differ due to the addition of C. volutator. Increasing copper concentration resulted in a progressive shift in the composition of PLFAs (imagine a vertical line originating at x = −0.75, y = 0.5 rotating anticlockwise as copper concentration increases); the relative change in the PLFA signature decreased as copper concentration increased. For a given concentration of copper, the addition of C. volutator resulted in a distinct shift in the relative abundance of the PLFAs (compare filled circles and triangles for any given colour but black). The control and lowest copper treatment were largely discriminated on the first axis, which had strong negative loadings of the PLFAs 20:4(n−5,8,11,14), 17:0cy, 15:0ai, 20:5(n−3) and 17:0ai. Mesocosms containing C. volutator at copper concentrations ≥91 mg Cu [kg wet weight]⁻¹ were also mainly discriminated on the first axis, with strong positive loadings of 15:0, 17:1(n−8), 17:0, 18:1(n−7) and 16:1(n−7). The amount of explained variance in the δ¹³C PLFA data increased significantly (p < 0.001) by incorporating C. volutator, nominal copper concentrations and their interaction (Table S4). Copper and C. volutator individually explained 19% and 7% of the variance in the data. The total amount of variance explained by the explanatory variables was 45%; 29% of the variance in the δ¹³C PLFA data was explained by the first two axes (Table S5).
Discussion
Our study demonstrates that the effects of copper contamination on the structure and metabolic functioning of a soft sediment benthic microbial community are different in the presence of macrofauna. Copper × C. volutator interactions affected bacterial biomass, nutrient concentrations, microbial community structure and their isotopic signatures at the end of our experiment.
Copper and Macrofaunal Effects on Sediment Nutrient Exchange
C. volutator are known to stimulate biogeochemical cycling in marine sediments through their ventilatory activities [31,32], and can up-regulate their metabolic, and hence excretion, rates in response to heavy metal contamination [31,51,52]. Elevated levels of NH₄⁺-N and PO₄³⁻-P in the presence of C. volutator were therefore expected. However, our experimental sediments also contained bacteria and microalgae, both of which also contribute significantly to benthic elemental cycling.
Copper has direct, adverse effects on bacteria [16,3] and microalgae [53,54]. C. volutator affects these two groups of organisms directly through their feeding [55,56,57]. They also affect them indirectly via bioturbation [31,32] and their capacity to bioaccumulate copper and hence detoxify the surrounding environment [58,59]. In addition, the movement and burrowing activity of C. volutator change in response to increasing copper concentrations [19,59], further affecting the supply and distribution of oxygenated seawater and bioavailable copper to benthic organisms [8]. We suggest that the typically lower concentrations of NOₓ-N observed in the mesocosms containing C. volutator indicate that the presence and activity of these animals provided some, albeit variable, alleviation of copper-inhibited denitrification for the resident microbial community [29,31]. This interpretation is consistent with the understanding that denitrification is the dominant loss process for nitrate in intertidal sediment ecosystems [60,61,62] and with the known sensitivity of denitrifying bacteria to copper [29,63].
Looking beyond the interactive effects of copper and C. volutator, broad similarities exist in the patterns of net nutrient fluxes across the different levels of copper either with or without C. volutator. This observation is consistent with the understanding that the direct effects of copper on microbe-mediated nutrient cycling are greater than the indirect effects caused by impaired or lost macrofaunal functionality and species identity [20]. Concentrations of NH₄⁺-N and NOₓ-N were elevated and depressed, respectively, relative to the controls at all but the lowest copper treatment (Figures 1c and 1d). The high concentrations of NOₓ-N observed in the lowest copper treatment likely reflect the stimulatory effect that low concentrations of this metal have on nitrification [63,64]. Lower concentrations of NOₓ-N at levels of copper beyond this do not reflect increased uptake by benthic microalgae, as NH₄⁺-N concentrations, which phytoplankton preferentially utilise over NO₃⁻ [65], remained above control values at all levels of copper contamination. It is also unlikely that this effect reflects an increase in denitrification, as this process is negatively affected by copper [29,63]. Rather, the positive effect of copper on the net accumulation of NH₄⁺-N indicates an increase in heterotrophy of the benthic community [66] and reduced nutrient uptake by benthic microalgae. The reduced accumulation of NOₓ-N was likely due to copper inhibition of nitrification [31]. The negative relationship between copper concentration and the algal biomarker polyunsaturated fatty acids (PUFAs) 18:2(n−6,9), 20:4(n−5,8,11,14) and 20:5(n−3) (Figures S1 and S2) demonstrates that copper contamination had a strong, negative effect on the microphytobenthos in our mesocosms [53]. This observation is consistent with the aforementioned mechanism to explain the observed accumulation of NH₄⁺-N in the overlying waters. Nevertheless, further work across a variety of scales is required to confirm the mechanisms suggested above. Highly targeted laboratory experiments using mono- and multi-species mixtures of benthic microbes and macrofauna are necessary to provide mechanistic insights into the interrelationships between copper contamination and the ways in which organisms interact with elemental cycles. Equally, field-scale observations across a variety of locations will be necessary to 'ground-truth' the existence and relevance of such effects in the natural world.
Copper and Macrofaunal Effects on the Sediment Microbial Community
Copper has been shown to increase maintenance energy demands in estuarine microbial communities, requiring an increasing proportion of the available substrates to be channelled towards catabolic processes as contamination levels increase [3,66,67]. The reduction in bacterial biomass with increasing copper concentrations was therefore expected. However, the typically positive effect of C. volutator on bacterial biomass is in contrast to the reported negative effects of this animal on microbial biomass in estuarine sediments [55]. We attribute this discrepancy to the increased availability of metabolic substrates for bacterial growth in the form of dead C. volutator in our experiments (discussed below).
The significant copper × C. volutator interactions observed in the PLFA relative abundance (Figure 2) and isotopic (Figure 3) data demonstrate that the effects of copper on the structure and metabolic functioning of the sediment microbial community differ in the presence of C. volutator. This result is consistent with the known effects of bioturbating organisms on microbial community structure [22,23] and the capacity of C. volutator to influence copper bioavailability [58,59]. Considering the effects of copper on bacteria and microphytobenthos discussed above, it is not surprising that sediments from the control and lowest copper treatments were discriminated from the others by generic PLFA biomarkers for bacteria (17:0cy, 15:0ai and 17:0ai) and diatoms (20:4(n−5,8,11,14), 20:5(n−3)) [68]; the highest quantities of these two groups of organisms were present in the control cores.
Discerning the mechanisms causing the observed changes in the isotopic signatures of individual PLFAs is somewhat more complex. Carbon isotope signatures reflect a variety of factors, including the signatures of basal resources and specific metabolic pathways that result in isotopic fractionation [34,36]. Indeed, recent work in an undisturbed soil ecosystem has highlighted the paucity of knowledge on the turnover rates of individual groups of microorganisms and the isotopic fractionations that result from their specific metabolic pathways [69]. Even less is known about these issues in natural marine microbial communities. Potential modifications in sediment oxygen concentrations, driven by the interactive effects of bioturbation [31,32] and copper [70], could have influenced the δ13C signatures of PLFAs both directly and indirectly. Discrimination during bacterial lipid biosynthesis can depend upon respiratory conditions [71]. Any change in the relative abundance of the bacteria responsible for anaerobic ammonia oxidization, a process that is widespread in soft sediment habitats [72], will influence the observed changes in δ13C of individual PLFAs. Pure culture experiments with these organisms demonstrate that they strongly fractionate against 13C [74,75,76]. It therefore seems likely that the observed changes in δ13C were at least partially attributable to this group of organisms, which predominate in shallow water sediments [77], including those used in the present study [78]. Cultured sulfate-reducing bacteria typically produce 13C-depleted PLFAs, with the extent of isotopic discrimination depending upon whether they are undergoing auto-, mixo- or heterotrophic growth [76]. This explanation is incomplete, however, as the generic bacterial PLFAs 15:0, 15:0ai and 17:1(n−8) became progressively 13C enriched as copper concentrations increased (Figure S3). We suggest that this phenomenon can be explained by the increased bacterial utilisation of dead C. volutator biomass as copper concentrations increased. The δ13C signature of C. volutator collected from the same location as our experimental animals, approximately −15.8‰ [79], is considerably greater than the value of −22.3‰ observed for the bulk sedimentary organic material. There is a need for a more detailed understanding of the processes that influence the isotopic signatures of organisms, particularly as the application of compound-specific techniques such as those used herein are likely to become more commonplace in the future.
Copper and Macrofaunal Effects in Natural Sediment Ecosystems
Our data were derived from a 10-day experiment conducted on defaunated sediments retrieved from an intertidal mudflat. We made no attempt to allow the microbial community to adapt or shift towards copper tolerance, although this process was almost certainly taking place over the experimental duration [63]. We also added only a single invertebrate species, which is clearly an oversimplification of the natural world. Such limitations are typical of mesocosm-type experiments, and must be carefully considered when attempting to place the resulting data into a wider ecological and biogeochemical context [80]. It is conceivable that the strong, interactive effects between copper concentration and the presence of a bioturbating organism reported herein are only applicable to our experimental system. However, we suggest that our findings are more widely applicable because a) our original hypothesis was developed from the understanding that copper and bioturbation both affect the structure and functioning of microbial communities across a range of habitats and ecosystems [18,21,22,30,32]; b) the reported effects of copper and C. volutator are consistent with previous research conducted over different timescales, in different locations using different organisms and techniques. We therefore contend that the assimilative capacity of any marine soft sediment benthic community will become progressively impaired by any process that causes the concentrations and bioavailability of copper to increase. Reductions of benthic bacterial and microalgal biomass will decrease the capacity of these organisms to process elements such as carbon and nitrogen. This is particularly important in the global context of aquaculture activities, which must double by 2050 if current per capita consumption rates are to be sustained [81]. Marine fish farming can result in the accumulation of both organic matter and copper in the underlying sediments [9,13,46]. It follows that copper contamination will serve as a positive feedback mechanism for organic enrichment in such environments. Decreased availability of benthic bacteria and microalgae will also negatively impact upon the energetic and nutritional value of contaminated sediments, particularly as PUFAs such as 20:5(n−3) are widely considered to be essential for many marine organisms.
Our data demonstrate that the effects of copper contamination on the structure and functioning of soft sediment habitats cannot be predicted solely from ambient concentrations of this heavy metal. The macrofaunal community at any particular location will influence, and in certain cases alleviate, the negative effects of copper contamination on the assimilative capacity of the local environment. Other processes that influence the flushing of sediments with fresh, aerated seawater, such as storm surge events, may also be expected to have similar effects. These findings have serious implications for environmental managers and marine policy makers; they indicate that a concentration-based approach to environmental management will yield unsatisfactory results across multiple benthic habitats. Indeed, they support the move towards an ecosystem approach to environmental management that places increased emphasis on the biological and ecological characteristics of each given location [30,82]. The successful and widespread application of the ecosystem approach will require increased efforts to investigate if and how pollutants influence the structure and functioning of marine communities across a range of environments and seasons.
Supporting Information
Table S1 Nominal and measured concentrations of copper in the sediments and overlying waters of a previous, identical experiment with C. volutator after 10 days of incubation. (DOC)
Text S1 Estimated metabolic demands of C. volutator over the 10-day experimental duration. (DOC) | 2016-05-12T22:15:10.714Z | 2013-05-31T00:00:00.000 | {
"year": 2013,
"sha1": "778c4923304dfd8e41abb52db259cb93c63cd442",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0064940&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "778c4923304dfd8e41abb52db259cb93c63cd442",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
244687417 | pes2o/s2orc | v3-fos-license | FRONTIER RESEARCH IN ASTROPHYSICS IN THE GRAVITATIONAL WAVE ERA-I
In this paper we will provide several examples that marked the continuous evolution of the knowledge of the physics of our Universe, updating our recent review (Giovannelli & Sabau-Graziati, 2019a). We want to emphasize that all the objects in our Universe are interdependent on each other, and that the classifications – that are usually made to simplify problems – are artificial, since nature evolves in all its manifestations continuously.
INTRODUCTION
The Bridge Between the Big Bang and Biology undoubtedly exists (Giovannelli, 2001). Indeed, we are present here regardless of the origin of our Universe. So we must understand how to cross this bridge and what the tools are that allow us to make out the structure of the pillars supporting the bridge.
In order to cross this bridge, as always when we cross a bridge, we must advance slowly, step by step, with continuity, because everything is smoothly linked in the magma of the Universe: from the infinitely small to the infinitely big, as discussed by Rees (1988).
In nature, nothing is isolated. Everything is related to the surrounding environment in a more or less strong way. However, the link exists. Fig. 1 shows from left to right: i) a section of the metabolic network of a "simple" bacterium. Note that each point (each chemical compound) is connected to any other point through the complexity of the network (Luisi & Capra, 2014); ii) the cosmic network: each point is connected to any other point through the complexity of the network (Credit: Andrew Pontzen/Fabio Governato, 2014; see also https://it.wikipedia.org/wiki/Cosmologiadelplasma). The large-scale structure of the Universe, as traced by the distribution of galaxies, is now being revealed by large-volume cosmological surveys. The structure is characterized by galaxies distributed along filaments, the filaments connecting in turn to form a percolating network. The objective of Shandarin, Habib & Heitmann (2010) was to quantitatively specify the underlying mechanisms that drive the formation of the cosmic network. By combining percolation-based analyses with N-body simulations of gravitational structure formation, they elucidate how the network has its origin in the properties of the initial density field (nature) and how its contrast is then amplified by the nonlinear mapping induced by the gravitational instability (nurture); iii) the human body network: each point (organ) is connected to any other point (organ) through the complexity of the network; iv) the human society network: each point is connected to any other point through the complexity of the network (Luisi & Capra, 2014). The human population follows the cycle: birth, growth, aging, death. This is a general rule of nature. Indeed, all the components of the Universe follow the same cycle. Therefore for a complete understanding of the history of the Universe it is necessary to search along that cycle.
2019a) -we will jump most of the arguments discussed there, while we emphasize a few topics we consider important especially after the detection of the gravitational waves (GWs).
Example of continuity in nature
In the systems named cataclysmic variables (CVs), the accretion structure depends on the magnetic field of the white dwarf (B) and on the mass transfer rate. Depending on B it is possible to classify CVs in three groups:
• Non Magnetic CVs (NMCVs): B ∼ 10⁴-10⁶ G;
• Intermediate Polars (IPs): B ∼ 10⁶-10⁷ G;
• Polars (MCVs): B ∼ 10⁷-10⁸ G.
However we have a smooth continuity among these classes, as discussed by Giovannelli & Sabau-Graziati (2015).
Indeed, taking into account the average values of magnetic field intensity and orbital periods for polars and IPs, and the minimum and maximum value for both parameters (B and P orb ), it is possible to construct a very interesting plot (Fig. 2) that shows the evident continuity between the two classes of MCVs.
Nature in all its manifestations shows continuity. We therefore have to abandon the "convenient method" of thinking of everything in watertight compartments and move toward a general model for compact accreting stars, as was done by Vladimir Lipunov and collaborators when they developed the "Scenario Machine" (Lipunov, 1987; Lipunov & Postnov, 1988).
THE PRESENT SITUATION ABOUT THE KNOWLEDGE OF THE PHYSICS OF OUR UNIVERSE
[Fig. 2 caption: Magnetic field intensity versus orbital period for MCVs. Polars and IPs are contained in the light blue and light green rectangles, respectively. The violet rectangle indicates the so-called "period gap". The cyan rectangle represents the intersection between the Polars and IPs (adopted from Giovannelli & Sabau-Graziati, 2015).]
Undoubtedly the advent of new-generation ground- and space-based experiments has given a strong impulse for verifying current theories, and for providing new experimental inputs for developing a new physics going, probably, beyond the standard model (SM). Recent results coming from Active Physics Experiments (APEs) and Passive Physics Experiments (PPEs) have opened such a new path.
The composition of the Universe is poorly known: only ∼4.4% is ordinary matter, ∼0.6% neutrinos, ∼22% Dark Matter (DM), and ∼73% Dark Energy (DE). With the detection of GWEs a new window to the Universe has been opened.
An extensive review on the situation about the knowledge of the physics of our Universe has been recently published by Giovannelli & Sabau-Graziati (2016; 2019a,b). The interested reader is invited to look at those papers. However, we are obliged to update a few topics that, in our opinion, could be useful.
Confirmation of the Theory of General Relativity
In the last few years two further experimental results confirmed the validity of the theory of General Relativity (GR theory).
2.1.1. Gravitational lenses
Renn, Sauer & Stachel (1997) published a historical reconstruction of some of Einstein's research notes dating back to 1912. These notes reveal that he explored the possibility of gravitational lensing 3 years before completing his general theory of relativity. On the basis of preliminary insights into this theory, Einstein had already derived the basic features of the lensing effect. When he finally published the very same results 24 years later, it was only in response to prodding by an amateur scientist. Kochanek (2003) discussed "The whys and hows of finding 10,000 lenses", mentioning the first radio lens survey -the MIT-Green Bank survey (MG) -that found lenses by obtaining Very Large Array (VLA) snapshot images of flux-limited samples of 5 GHz radio sources. The Hubble Space Telescope (HST) and Chandra observations (e.g. Dai & Kochanek, 2005) showed without any doubt that gravitational lensing is operating.
Gravitational lensing is widely and successfully used to study a range of astronomical phenomena, from individual objects, like galaxies and clusters, to the mass distribution on various scales, to the overall geometry of the Universe (Williams & Schechter, 1997). They describe and assess the use of gravitational lensing as "gold standards" in addressing one of the fundamental problems in astronomy, the determination of the absolute distance scale to extragalactic objects, namely the Hubble constant.
Several papers have been published about strong gravitational lensing (e.g. Tyson, Kochanski & Dell'Antonio, 1998; Tyson, 2000, and references therein), and weak gravitational lensing (Wittman et al., 2000). A review on "Gravitational Lenses" has been published by Blandford & Kochanek (2004). A book on "Gravitational Lensing: Strong, Weak and Micro" was published by Meylan et al. (2006). Winn, Rusin & Kochanek (2004) reported the most secure identification of a central image, based on radio observations of PMN J1632-0033.
Therefore, a further dowel supports the GR theory.
Considering that some divergent conclusions about cosmic acceleration were obtained using Type Ia supernovae (SNe Ia), with opposite assumptions on the intrinsic luminosity evolution, Tu, Hu & Wang (2019) use strong gravitational lensing systems to probe the cosmic acceleration. They found that the flat ΛCDM is strongly supported by the combination of the data sets from 152 strong gravitational lensing systems.
Gravitational waves
The Universe, that contains by definition all the matter and all the energy available, showed one important event that was possible to detect on the Earth. This event was a further direct experimental demonstration of the validity of the GR theory. Indeed, on September 14, 2015 at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) simultaneously observed a transient gravitational-wave signal. It matches the waveform predicted by GR theory for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole. The signal was observed with a significance ≥ 5.1σ. The source lies at a luminosity distance of 410 +160/−180 Mpc corresponding to a redshift z = 0.09 +0.03/−0.04. In the source frame, the initial black hole masses are 36 +5/−4 M⊙ and 29 ± 4 M⊙, and the final black hole mass is 62 ± 4 M⊙, with 3.0 ± 0.5 M⊙c² radiated in gravitational waves. All uncertainties define 90% credible intervals. These observations demonstrate the existence of binary stellar-mass black hole systems. This was the first direct detection of gravitational waves and the first observation of a binary black hole merger (Abbott et al., 2016a). Abbott et al. (2016b) reported the second observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of LIGO on December 26, 2015 at 03:38:53 UTC. The signal was detected at significance ≥ 5σ. The inferred source-frame initial black hole masses are 14.2 +8.3/−3.7 M⊙ and 7.5 ± 2.3 M⊙, and the final black hole mass is 20.8 +6.1/−1.7 M⊙. One finds that at least one of the component black holes has spin greater than 0.2. This source is located at a luminosity distance of 440 +180/−190 Mpc corresponding to a redshift z = 0.09 +0.03/−0.04. All uncertainties define a 90% credible interval. This second gravitational-wave observation provides improved constraints on stellar populations and on deviations from the GR theory.
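As a quick check of the GW150914 energy budget quoted above, the radiated mass is simply the difference between the initial and final masses, and the corresponding energy follows from E = mc². The short sketch below only illustrates this arithmetic; the constants are standard approximate values and the masses are those quoted in the text.

```python
# Illustrative arithmetic only: masses are those quoted above for GW150914.
M_SUN = 1.989e30   # solar mass in kg (approximate)
C = 2.998e8        # speed of light in m/s

m1, m2, m_final = 36.0, 29.0, 62.0           # source-frame masses in solar masses
m_radiated = m1 + m2 - m_final               # ~3 solar masses, as quoted
energy_joules = m_radiated * M_SUN * C**2    # E = m c^2

print(f"radiated mass   ~ {m_radiated:.1f} M_sun")
print(f"radiated energy ~ {energy_joules:.2e} J")   # ~5.4e47 J
```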
For these detections of gravitational waves -first predicted by Einstein 100 years ago -Rainer Weiss, Barry Barish & Kip Thorne have been awarded the 2017 Nobel prize in physics. Abbott et al. (2016c) present a possible observing scenario for the Advanced LIGO (aLIGO) and Advanced Virgo gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multimessenger astronomy with gravitational waves.
Gravitational waves provide a revolutionary tool to investigate yet unobserved astrophysical objects. Especially the first stars, which are believed to be more massive than present-day stars, might be indirectly observable via the merger of their compact remnants. An interesting paper by Hartwig et al. (2016) developed a self-consistent, cosmologically representative, semi-analytical model to simulate the formation of the first stars. They estimated the contribution of primordial stars to the merger rate density and to the detection rate of the aLIGO. Owing to their higher masses, the remnants of primordial stars produce strong GW signals, even if their contribution in number is relatively small. They found a probability of ≥ 1% that the current detection GW150914 is of primordial origin. The higher masses of the first stars boost their GW signal, and therefore their detection rate. Up to five detections per year with aLIGO at final design sensitivity originate from Pop III BH-BH mergers. Approximately once per decade, we should detect a BH-BH merger that can unambiguously be identified as a Pop III remnant.
On 2017 August 17 the merger of two compact objects with masses consistent with two neutron stars was discovered through gravitational-wave (GW170817), gamma-ray (GRB 170817A), and optical (SSS17a/AT2017gfo) observations. The optical source was associated with the early-type galaxy NGC 4993 at a distance of just ∼ 40 Mpc, consistent with the gravitational-wave measurement, and the merger was localized to be at a projected distance of ∼ 2 kpc away from the galaxy's center (Abbott et al., 2017a,b). Lipunov et al. (1995) predicted the NS-NS merger at a distance of ≤ 50 Mpc and the possibility of detecting GWs! This prediction was born by the "Scenario Machine" that describes the evolution of gravimagnetic rotators (Lipunov, 1987;Lipunov, & Postnov, 1988), and recently commented by Giovannelli (2016).
On August 17, 2017 Multimessenger Astrophysics was born! As pioneers of Multifrequency Astrophysics, we are particularly happy! Poggiani (2018) published an extensive review about the GW170817 event, in which she discussed also the related multimessenger observations. The LIGO and Virgo interferometers have now confidently detected gravitational waves from a total of 10 stellar-mass binary black hole mergers and one merger of neutron stars, which are the dense, spherical remains of stellar explosions. Table 1 shows the eleven events (adapted from Abbott et al., 2019). Barone et al. (1992) analyzed the class of CVs as sources of Gravitational Radiation, basing their analysis only on known objects at that time (168 CVs) taken from the Catalog of Ritter (1990).
From the analysis of GW emission from CVs, they derived that the emission frequencies are in the range 10⁻³-10⁻⁵ Hz and that the GW flux at Earth is in the range 10⁻¹⁰-10⁻¹³ erg s⁻¹ cm⁻², while the dimensionless amplitude is in the range 10⁻²¹-10⁻²³. These results constituted a solid basis for planning the construction of GW detectors (especially space-borne GW antennas). Moreover, these results provided the possibility of experimentally proving the effectiveness of the mechanism of gravitational radiation on CV evolution. This important work was not sufficiently taken into account by the international community. However, now, after the detection of GWs coming from the fusion of black holes and neutron stars, the interest in that work has been rekindled in order to test the possibility of detecting GWs from CVs. Poggiani (2017, and the references therein) discussed this possibility, reaching the conclusion that AM CVn systems and generally short-period systems are candidates for GW emission. Amaro-Seoane et al. (2017), in response to the ESA call for L3 mission concepts, presented the Laser Interferometer Space Antenna (LISA), which from 2030 will allow the observation of gravitational waves from cosmic sources, and thus the exploration of a Universe otherwise inaccessible, a Universe where gravity takes on new and extreme manifestations.
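A rough way to see why the short-period CVs discussed above emit in the quoted frequency band: for a circular binary the dominant gravitational-wave frequency is twice the orbital frequency, f_GW = 2/P_orb. The sketch below simply evaluates this relation for a few illustrative orbital periods; it does not attempt to reproduce the flux or strain estimates of Barone et al. (1992).

```python
# For a circular binary the dominant GW frequency is twice the orbital frequency.
def gw_frequency_hz(p_orb_seconds):
    return 2.0 / p_orb_seconds

# Orbital periods from roughly half an hour to a couple of days:
for p_orb in (2.0e3, 2.0e4, 2.0e5):
    print(f"P_orb = {p_orb:.0e} s  ->  f_GW ~ {gw_frequency_hz(p_orb):.0e} Hz")
# prints 1e-03, 1e-04 and 1e-05 Hz, spanning the band quoted in the text
```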
Hubble Constant
The Hubble constant (H0) is one of the most important numbers in cosmology because it is needed to estimate the size and age of the universe. The determination of the value of H0 is one of the most exciting problems. Indeed, in the literature it is possible to find many determinations coming from different experiments using different methods. However, it is very complicated to obtain a true value for H0. It is necessary to have two measurements: i) spectroscopic observations that reveal the galaxy's redshift, indicating its radial velocity; ii) the galaxy's precise distance from Earth (and this is the most difficult value to determine). A large summary about the methods used for the determination of H0, and the derived values, can be found in the Proceedings of the Fall 2004 Astronomy 233 Symposium on "Measurements of the Hubble constant" (Damon et al., 2004). In this book, Teymourian (2004), after a comparison of many constraints on Hubble constant determinations, reports a value H0 = 68 ± 6 km s⁻¹ Mpc⁻¹.
A discussion about the Hubble constant has been published by Giovannelli & Sabau-Graziati (2014, 2019b), where it is also possible to find a large number of references reporting the many controversial evaluations of H0. Figure 6 shows the determinations of H0 since 1970 (adapted from John Huchra, 2008). Practically all the determinations lie in the range 40-100 km s⁻¹ Mpc⁻¹ (marked with a light-blue rectangle), and most of them converge in the range 55-70 km s⁻¹ Mpc⁻¹ (marked with a light-red rectangle). The CMB is used to predict the current expansion rate of the universe via the best-fitting cosmological model. At low redshift, baryon acoustic oscillation (BAO) measurements have been used -although they cannot independently determine H0 -for constraining possible solutions and as checks on cosmic consistency. Comparing these measurements they found H0 = 69.6 ± 0.7 km s⁻¹ Mpc⁻¹ (Bennett et al., 2014). Does this determination, finally, close the history of the search for the "true" value of H0? Independent estimation of the Hubble constant from the luminosity distance of the GW signal (GW170817) and the event's association with NGC 4993 (Abbott et al., 2017c) gives a value H0 = 70.0 +12.0/−8.0 km s⁻¹ Mpc⁻¹. However, due to the large errors, this value of the Hubble constant does not add any significant information; but, being obtained with independent methods, it provides good support for the value of H0 = 69.6 ± 0.7 km s⁻¹ Mpc⁻¹ determined by Bennett et al. (2014).
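A minimal sketch of the two-measurement logic described above: at low redshift the recession velocity is approximately cz, and dividing it by an independently measured distance gives H0. The numbers below are invented purely for illustration and are not taken from any of the cited determinations.

```python
C_KM_S = 2.998e5   # speed of light in km/s

def hubble_constant(z, distance_mpc):
    """H0 in km/s/Mpc from a low-redshift galaxy's redshift and distance."""
    velocity_km_s = C_KM_S * z          # low-z approximation v ~ c * z
    return velocity_km_s / distance_mpc

# Hypothetical galaxy: z = 0.01 at 43 Mpc  ->  ~69.7 km/s/Mpc
print(f"H0 ~ {hubble_constant(0.01, 43.0):.1f} km/s/Mpc")
```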
Reionization Epoch
The formation of the first stars and quasars marks the transformation of the universe from its smooth initial state to its clumpy current state. In current cosmological models, the first sources of light began to form at a redshift z ∼ 30 and reionized most of the hydrogen in the universe by z ∼ 7 (see review by Loeb & Barkana, 2001). Figure 4 shows schematically the updated experimental situation about cosmic sources (galaxies, GRBs, QSOs, SNe) detected at high redshifts. The light-red rectangle marks the possible range of z during which the reionization occurred.
However, although there is rather good agreement about the epoch of reionization, how really reionization occurs is still object of debate. Indeed, Dopita et al. (2011), considering that observations show that the measured rates of star formation in the early universe are insufficient to produce reionization, suggest the presence of another source of ionizing photons. This source could be the fast accretion shocks formed around the cores of the most massive haloes.
A deep discussion about the reionization epoch has been reported in the review paper by Giovannelli & Sabau-Graziati (2019b, and the references therein).
An interesting review about The epoch of reionization was published by Zaroubi (2013). Recently, An Introductory Review on Cosmic Reionization has been published by Wise (2019).
Recently, Yang et al. (2019) announced the discovery of six new z ≳ 6.5 quasars.
This work opens a glimmer of light on the possibility of revealing in the future, with the advent of JWST, the presence of quasars immediately after the formation of the first Pop. III stars at z ≈ 25, as well as the possibility of detecting GRBs up to that redshift (Lamb & Reichart, 2000;Ciardi & Loeb, 2000;Bromm & Loeb, 2002). Indeed, the detection of the GRB 090429B at z ≃ 9.4 (Cucchiara et al., 2011) is a good omen to think that future experiments can reveal GRBs up to the fateful threshold of z ≈ 25.
Gamma Ray Bursts
Long discussions about Gamma-ray bursts (GRBs) can be found in numerous publications. A list of these can be found in GSG2004 and in Giovannelli & Sabau-Graziati (2016, 2019a).
Although big progress has been obtained in the last few years, GRBs theory needs further investigation in the light of the experimental data coming from old and new satellites, often coordinated, such as BeppoSAX or BATSE/RXTE or ASM/RXTE or IPN or HETE or INTEGRAL or SWIFT or AGILE or FERMI or MAXI.
The idea that GRBs could be associated with gravitational wave (GW) emission is now popular. Indeed, short GRBs are believed to be produced by the mergers of either double NSs or NS-BH binaries (Nakar, 2007), and the observation of a kilonova associated with GRB130603B (Tanvir et al., 2013; Berger, Fong & Chornock, 2013) lends support to this hypothesis. Thanks to NASA's Swift satellite we have witnessed ten years of amazing discoveries in time domain astronomy. Its primary mission is to chase GRBs. The list of major discoveries in GRBs and other transients includes the long-lived X-ray afterglows and flares from GRBs, the first accurate localization of short GRBs, and the discovery of GRBs at high redshift (z > 8) (Gehrels & Cannizzo, 2015). And essentially thanks to these discoveries we are now closer to understanding the real nature of GRBs.
Anomalous X-ray Pulsars and Soft Gamma Repeaters: Magnetars
Since their discovery, neutron stars (NSs) have excited a broad range of interests not only in the astrophysical context, but also in terms of fundamental physics.
Because NSs are characterized by extreme conditions, such as dense matter, rapid rotation, and high magnetic fields, they have proved to be ideal laboratories to test fundamental physics in regimes which cannot be achieved by ground-based experiments.
Multi-wavelength observations from radio to the highest energy gamma-rays have revealed a remarkable diversity of NSs (Kaspi, 2010).
In the last two decades a new class of X-ray binaries has been recognized. They are X-ray pulsars with properties clearly different from those of the common HMXBs. This new group of pulsars constitutes a subclass of the LMXBs, characterized by lower luminosities, higher magnetic fields and smaller ages than non-pulsating LMXBs. These objects have been called Anomalous X-ray Pulsars (AXPs) (e.g. GSG2004, and the references therein) and this is now the currently accepted name. Soon after their discovery, this new class of objects, whose nature was recognized to be that of neutron stars, was characterized by spin periods ranging between 5.5 and 11.8 s -and Ṗ in the range 0.05-10 × 10⁻¹¹ s s⁻¹ -contrary to the larger spread of those of HMXBs (0.069 to a few ×10³ s). Spin periods of AXPs are monotonically increasing on timescales of ∼10⁴-4 × 10⁵ yr.
Measurements of the spin down rates of SGRs and AXPs have been interpreted as evidence of very strong magnetic fields at the collapsed object poles, roughly two orders of magnitude greater than those of the 'normal' X-ray pulsars. For this reason they are now known as 'magnetars'. Their derived magnetic field intensity is ∼10¹⁴-10¹⁵ G.
The problem of the nature of magnetars is one of the hottest in modern astrophysics. Indeed, for instance, Dar (2003) argued that, instead, the observations support the hypothesis that SGRs and AXPs are neutron stars that have suffered a transition into a denser form of nuclear matter to become, presumably, strange stars or quark stars. Internal heat and slow gravitational contraction long after this transition can power both their quiescent X-ray emission and their star quakes, which produce 'soft' gamma ray bursts. Dar (2006) discussed once more this idea by using results from short-duration hard-spectrum GRBs, such as 050509B, 050709, 050724, and 050813, which could have been the narrowly beamed initial spike of hyperflares of SGRs in galaxies at cosmological distances. Such bursts are expected if SGRs are young hyperstars, i.e. neutron stars where a considerable fraction of their neutrons have converted to hyperons and/or strange quark matter. Ghosh (2009) discussed some of the developments in quark star physics, along with the consequences of a possible hadron-to-quark phase transition in the high-density scenario of neutron stars and their implications for astroparticle physics.
However, the nature of magnetars is not yet definitively proved. Giovannelli & Sabau-Graziati (2006) speculated as follows: if magnetic fields of ∼10¹⁵ G are to be expected in order to explain the behaviour of magnetars, an almost 'obvious' consequence can be derived from the diagram of magnetic field intensity versus the dimension of the corresponding cosmic source. They extrapolated the value of B up to 10¹⁵ G; the corresponding dimension of the source is ∼10 m. This could be the dimension of the acceleration zone in a supercompact star, probably a quark star. If you construct a trap, the rat falls into it! Table 2 shows the pulse timing properties of magnetars, the derived magnetic field intensity, the age (after Olausen & Kaspi, 2014, and Kaspi & Beloborodov, 2017), and their association with SNRs (after Giovannelli & Sabau-Graziati, 2006).
The open questions about magnetars are numerous, namely: i) What are the distances of the Galactic magnetars? And what, then, is their energetics? ii) What is the number-intensity relation for giant magnetar flares? iii) What are the SGR and AXP birth rates? What are their lifetimes? How many SGRs and AXPs are in the Milky Way? iv) What kind of supernova produces a SGR or an AXP? v) What is the relation between SGRs and AXPs? Does one evolve into the other, or are they separate manifestations?
In order to answer these open questions, more sensitive instruments, more detailed theories, and more data (probably in the next 30 years) are necessary.
In the extensive and excellent reviews by Kitamoto et al. (2014) and by Kaspi & Beloborodov (2017) most of the critical points about magnetars have been deeply discussed.
A large diversity of neutron stars has been discovered by multifrequency observations from the radio band to the X-ray and gamma-ray energy ranges. Among the different manifestations of neutron stars -which include SGRs, AXPs, high-B pulsars (HBPs), high-E binaries (HEBs), rotating radio transients (RRATs), central compact objects (CCOs), rotation-powered radio pulsars (RPPs), and X-ray isolated neutron stars (XINSs) (Harding, 2013) -magnetars are the most strongly magnetized objects.
These various manifestations of neutron stars show different characteristics of rotation period P and its derivative Ṗ. Measurements of P and Ṗ allow one to estimate the dipole magnetic field strength B_d ∝ (PṖ)^1/2 and the characteristic age τ_c = P/2Ṗ. Figure 5 shows the P-Ṗ diagram (Enoto, 2018), where SGRs and AXPs are collectively called "magnetars" since their slow rotation (P ∼ 2-12 s) -with the exception of PSR J1119-6127 (P = 0.41 s) and PSR J1846-0258 (P = 0.33 s) -and high period derivatives (Ṗ ∼ 10⁻¹³-10⁻⁹ s s⁻¹) indicate high magnetic fields B = 10¹⁴⁻¹⁵ G and young characteristic ages τ_c ≲ 10-100 kyr. To date, there are 29 known magnetars in the Milky Way and the local universe (see Table 2). It seems almost natural to think about a continuity among different classes of neutron star systems. However, radio pulsations, which have been observed from about 2000 neutron stars with weaker magnetic fields, had never been detected from any of the known magnetars until the paper by Camilo et al. (2006), which showed that XTE J1810-197 -the first transient magnetar discovered (Ibrahim et al., 2004) -emits bright, narrow, highly linearly polarized radio pulses, observed at every rotation, thereby establishing that magnetars can be radio pulsars. Thus, these observations, which link magnetars to ordinary radio pulsars, rule out alternative accretion models for AXPs, and provide a new window into the coronae of magnetars.
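As a worked illustration of the spin-inferred quantities discussed above, the snippet below evaluates the characteristic age τ_c = P/(2Ṗ) and a dipole-field estimate for a magnetar-like rotator. The 3.2 × 10¹⁹ G prefactor is the commonly used vacuum-dipole value and is an assumption here, not a number taken from the text.

```python
import math

SECONDS_PER_YEAR = 3.156e7

def characteristic_age_yr(p, p_dot):
    """Characteristic age tau_c = P / (2 Pdot), in years (P in s, Pdot in s/s)."""
    return p / (2.0 * p_dot) / SECONDS_PER_YEAR

def dipole_field_gauss(p, p_dot, prefactor=3.2e19):
    """Spin-inferred dipole field ~ prefactor * sqrt(P * Pdot), in gauss.
    The 3.2e19 prefactor is the usual vacuum-dipole estimate (assumed here)."""
    return prefactor * math.sqrt(p * p_dot)

# A magnetar-like rotator from the ranges quoted above: P ~ 6 s, Pdot ~ 1e-11 s/s
p, p_dot = 6.0, 1.0e-11
print(f"B     ~ {dipole_field_gauss(p, p_dot):.1e} G")      # ~2.5e14 G
print(f"tau_c ~ {characteristic_age_yr(p, p_dot):.1e} yr")  # ~9.5e3 yr
```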
In the excellent review paper by Kaspi & Beloborodov (2017) most of the critical points about magnetars have been deeply discussed. They concluded that: "The magnetar model has now been used to predict, naturally and uniquely, a wide variety of remarkable phenomena and behaviors in sources that once seemed highly anomalous. The now seamless chain of phenomenology from otherwise conventional radio pulsars through sources previously known for radically different behavior makes clear that these objects are one continuous family, with activity correlated with spin-inferred magnetic field strength. Recent advances in the physics of these objects, from the core through the crust and to the outer magnetosphere, hold significant promise".
CONCLUSIONS
It is difficult to summarize the conclusions in a few words. We simply want to emphasize that all the objects in our Universe are interdependent on each other (as shown in Fig. 1), and that the classifications -that are usually made to simplify problems -are artificial, since nature evolves in all its manifestations continuously, as demonstrated with the examples of CVs and neutron star systems. | 2021-11-28T16:32:03.028Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "717790de29e6e96934ef7a4ad84cd8defdd30d1e",
"oa_license": null,
"oa_url": "https://doi.org/10.22201/ia.14052059p.2021.53.42",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e07e1bf178391bff44737004811c957334b4757b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
241950665 | pes2o/s2orc | v3-fos-license | Prevalence and Risk Factors of Chronic Kidney Disease among Palestinian Diabetic Patients: a Cross Sectional Study
Chronic kidney disease (CKD) is a worldwide public health problem and diabetes is one of the major risk factors for its development and progression. The aim of this study is to assess the prevalence of chronic kidney disease in a cross-sectional population of patients with type 2 diabetes treated in primary health care centers in the North West Bank. Patients' data, including patient characteristics, creatinine level, blood pressure, HbA1c, and hypertension and diabetes duration, were collected from primary health care centers. eGFR was calculated using the CKD-EPI equation. CKD was staged according to the Kidney Disease Improving Global Outcomes System (KDIGO) 2012 guidelines. Both univariate and multivariate statistical analyses were conducted using SPSS.
Background
Non-communicable diseases, such as diabetes and kidney disease, are the leading cause of mortality and morbidity worldwide [1]. Diabetes Mellitus (DM) is recognized as the world's fastest growing chronic condition. Worldwide, 1 in every 11 adults has DM, 90% of whom have Type 2 Diabetes Mellitus (T2DM). This number has increased tremendously in the last three decades owing to the increasing rates of sedentary lifestyle, unhealthy diet, smoking and alcohol consumption [2]. Unfortunately, all Arab states in the Middle East and North Africa region are burdened with the second highest diabetes prevalence rate [3]. Among the Palestinians living in the West Bank, the prevalence of T2DM was 15.3% in 2010 and it is predicted to increase up to 20.8% and 23.4% in the years 2020 and 2030, respectively [4].
The chronic hyperglycemia of DM is known to be, besides hypertension (HTN), one of the leading causes of Chronic Kidney Disease (CKD), which is defined as a progressive loss in renal function over a period of more than three months, in which the kidney's ability to filter blood or perform other activities is impaired. This is usually associated with a reduction in glomerular filtration rate (GFR) and proteinuria [5].
CKD is a worldwide public health problem, both for its high morbidity and cost of treatment. Outcomes of CKD include not only progression to kidney failure but also complications of reduced kidney function and increased risk of cardiovascular disease and all-cause mortality overall [6]. The Global Burden of Disease 2015 study estimated that, in 2015, 1.2 million people died from kidney failure, an increase of 32% since 2005 [7].
Unfortunately, renal disease in its mild form is commonly under-diagnosed and undertreated, especially in the Arab world, resulting in lost opportunities for prevention [8].
Many of the complications of CKD can be prevented or delayed by early detection and treatment. As well, early diagnosis of kidney disease can slow down or avert worsening of kidney functions by inexpensive interventions, several of which are on the WHO's so-called best buys list for noncommunicable disease management [9].
In the Middle East, there is a lack of data about the prevalence of CKD among diabetic patients. However, the prevalence of CKD among the general population was 6.8% in Jordan, 5.7% in Saudi Arabia [10] and 14.9% in Iran [11].
Common risk factors including greater duration of DM, HTN, poor metabolic control, smoking, obesity and hyperlipidemia have been suggested to increase the risk of developing diabetic complications. Poor blood pressure (BP) and glycemic control have been shown to cause more kidney damage and a subsequent decrease in kidney function. Similarly, some background variables have proved to be positively associated with CKD, such as age, smoking, and BMI [12][13][14].
In the North West Bank, a study conducted in 2008 on diabetic hypertensive patients reported that 35.5% of DM and HTN patients had reduced renal function that was significantly associated with patient age, duration of DM and the number of chronic diseases [15].
GFR is the best measure of kidney function since it accounts for age, race and sex.
Currently the two most common methods for determining GFR are creatinine clearance and estimated GFR (eGFR) [16]. Formula-derived eGFR results have become widely used in clinical practice and have been recommended by the National Service Framework for Renal Services in the U.K. for the annual evaluation of all patients with diabetes (Department of Health Renal Team) [17].
It is important to identify the risk factors of renal function deterioration in order to develop preventive interventions in diabetic patients' management and to prevent complications. The aim of this study is to estimate the prevalence of CKD among Palestinian diabetic patients, and to assess the associated risk factors.
Study design and population
This is a cross-sectional study targeting Palestinian adult diabetic patients from the West Bank. In Palestine, all patients diagnosed with DM are referred to primary health care (PHC) directorates in their cities, where they receive their diabetes care, treatment and follow-up on a regular basis. Patients with T2DM and age > 18 were included in the study.
However, patients with type 1 diabetes mellitus or gestational diabetes mellitus, pregnant women, and patients who did not have at least 2 serum creatinine readings at least 3 months apart were excluded from the study.
Cochran's sample size formula was used to calculate the sample size [Necessary Sample Size = (Z-score)² × Std Dev × (1 − Std Dev) / (margin of error)²]. A sample size of 385 patients was calculated assuming a 95% confidence level, 0.5 standard deviation and a margin of error of ± 5%.
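For illustration, the short sketch below reproduces the reported sample size from Cochran's formula. Note that the 0.5 the authors call a "standard deviation" plays the role of the assumed proportion p in the usual form of the formula.

```python
import math

def cochran_sample_size(z=1.96, p=0.5, margin=0.05):
    """Cochran's formula n0 = z^2 * p * (1 - p) / e^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(cochran_sample_size())   # 385, matching the sample size reported above
```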
Patients were randomly selected as they attended their PHC clinics. The data were collected between September 2018 and December 2018 by personal interviews and from the electronic records of the patients.
Serial serum creatinine data were collected and eGFR was determined using the Chronic Kidney Disease Epidemiology Collaboration formula based on serum creatinine (CKD-EPICr). CKD was defined as having a decreased eGFR (< 60 ml/min per 1.73 m²) for 3 months. CKD stage was defined in accordance with the guideline of the National Kidney Foundation [5], in which stages 1, 2, 3, 4, and 5 had an eGFR of ≥ 90, 60 to 90, 30 to 59, 15-29, and < 15 ml/min per 1.73 m² or commencement of dialysis therapy, respectively.
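A sketch of how this staging can be computed is given below. It uses the 2009 CKD-EPI creatinine equation in its commonly published form (the constants should be checked against the original reference) together with the eGFR thresholds quoted in the text; the example patient values are hypothetical.

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation (ml/min per 1.73 m^2), commonly published form."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def stage_by_egfr(egfr):
    """CKD stage by eGFR alone, using the thresholds given in the text."""
    if egfr >= 90:
        return 1
    if egfr >= 60:
        return 2
    if egfr >= 30:
        return 3
    if egfr >= 15:
        return 4
    return 5

# Hypothetical patient: serum creatinine 1.4 mg/dl, 65-year-old male
egfr = ckd_epi_egfr(scr_mg_dl=1.4, age=65, female=False)
print(round(egfr, 1), stage_by_egfr(egfr))   # ~52 ml/min/1.73 m^2, stage 3
```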
Prevalence of CKD was determined using the National Kidney Foundation Kidney Disease Outcomes Quality Initiative classification of CKD based on eGFR alone. Albuminuria marks CKD stages 1 and 2; however, it was missing for most of the enrolled patients and thus it was not considered in determining CKD stages.
Participants were considered diabetics if they already had a diagnosis of T2DM or were taking insulin or oral hypoglycemic agents. We defined patients as hypertensive if they were previously diagnosed with HTN or taking antihypertensive medications. Obesity was defined as BMI ≥ 30.
Measures
Blood pressure measurements were taken by a trained nurse using an electronic sphygmomanometer. The height and weight were taken at the time of interview and were used to calculate the BMI. Creatinine readings were obtained from the medical records using at least 2 readings at least 3 months apart and the CKD-Epi formula was used to calculate eGFR. Patients with eGFR < 60 ml/min/1.73m² were considered to have CKD. The last available HbA1c reading was used.
Approvals from Al-Najah National University Institutional Review Board (IRB) and the Palestinian Ministry of Health were obtained. All subjects approached for the study were invited to participate voluntarily after the study aim and the risks and benefits of participation were explained to them. Informed consent was obtained from all individual participants included in the study.
Statistical analysis
Statistical analysis was conducted using SPSS version 20.0. Categorical data were expressed as number (percentage) and continuous data as means ± standard deviation unless otherwise stated. Differences in patients' characteristics and risk factors for CKD were studied using the chi-square test and t-test, as appropriate. Statistical significance was set at a p value of < 0.05. Additionally, multivariate logistic regression was conducted to control for possible confounders.
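The snippet below is an illustrative Python analogue of the analysis described (the authors used SPSS): a chi-square test and a t-test for the univariate comparisons and a logistic regression for the multivariate model. The data frame and variable names are hypothetical, generated only to make the example runnable.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Hypothetical patient table mimicking the variables described in the text.
age = rng.normal(60, 10, n)
htn = rng.integers(0, 2, n)
dm_years = rng.normal(12, 4, n)
# Simulated CKD status loosely driven by age and hypertension (toy data only).
p_ckd = 1.0 / (1.0 + np.exp(-(-12.0 + 0.15 * age + 1.0 * htn)))
ckd = rng.binomial(1, p_ckd)
df = pd.DataFrame({"ckd": ckd, "htn": htn, "age": age, "dm_years": dm_years})

# Univariate comparisons: chi-square for categorical factors, t-test for continuous ones
_, p_htn, _, _ = chi2_contingency(pd.crosstab(df["ckd"], df["htn"]))
_, p_age = ttest_ind(df.loc[df.ckd == 1, "age"], df.loc[df.ckd == 0, "age"])
print(f"chi-square p (HTN): {p_htn:.3f};  t-test p (age): {p_age:.3g}")

# Multivariate logistic regression to control for possible confounders
X = sm.add_constant(df[["htn", "age", "dm_years"]])
fit = sm.Logit(df["ckd"], X).fit(disp=0)
print(fit.params)
print(fit.pvalues)
```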
Study population:
The study recruited 386 patients with T2DM from PHC clinics in North West Bank. Almost half the participants were male (49.7%) and their mean age was 60.6 ± 10.4 years. HTN was reported in 278 (75.3%) participants, with a mean duration of 6.78 ± 7.7 years. The mean duration of diabetes was 12.4 years (3.9 years to 20.9 years) and their HbA1c level ranged from 6.39-10.47% with an average of 8.4% ± 2.0. The majority of the participants was obese and 30.4% were smokers. Table 1 presents the clinical and background data of the patients.
Frequency Of Chronic Kidney Disease
Using the CKD-EPI equation, the mean (± SD) eGFR for all participants was 75.3 ± 24 ml/min per 1.73 m².
CKD stages 3, 4, and 5 were present in 19.7%, 2.6%, and 1.3% of the participants, respectively. In total, the prevalence of impaired renal function (CKD stages 3-5) among T2DM patients was 23.6% (95% CI: 19.4%-28.1%) (Fig. 1). Table 2 shows the average eGFR in relation to clinical and background variables. A significant decrease in eGFR was noted as age, SBP, duration of HTN, and duration of DM increased (p value < 0.001).
[Table 2. Clinical and background variables of each eGFR category.]
Univariate analysis was conducted to explore the factors associated with the development of CKD. The results showed that CKD is significantly associated with HTN (p < 0.001), smoking (p = 0.022), age (p < 0.001), DM duration (p < 0.001) and HTN duration (p < 0.001). However, HbA1c and BMI showed no significant relation with CKD, as shown in Table 3. [18]. ESRD represents the tip of the iceberg and the actual number of patients with CKD is far greater. Studying the prevalence of CKD in Palestine is important as it helps in early detection and thus prevention and control of diabetic nephropathy.
To the best of our knowledge, this study was the first epidemiological investigation on [21]. Unfortunately, similar data from the surrounding countries are lacking. This variation in the prevalence of CKD among diabetic patients is attributed to differences in the definitions adopted and the characteristics of the studied populations.
Studying the risk factors associated with CKD, especially the modifiable factors, is important for developing prevention and control interventions. The prevalence rate of HTN reported in this study among patients with type 2 diabetes (75%) was high. It is more than what has been reported in the neighboring countries: Jordan (72.4%) [22], Qatar (64.5%) [23] and Saudi Arabia (53%) [24]. This relatively higher rate of HTN could be related to the fact that most diabetic patients included in the study were obese and aged > 60 years.
This study showed a significant relation (P value < 0.001) between BP and kidney damage, reflected by decreased eGFR as systolic BP increases (Table 2). Diabetic patients with HTN are 4.4 times more prone to develop CKD compared to diabetic patients with normal BP. These findings are consistent with literature from different countries [14,19,21]. This risk of CKD and kidney damage can be further increased as age increases.
There is a great overlap between HTN and impaired renal function. The patient goes into a vicious cycle where decreased kidney function causes an elevation in BP and this elevation will cause further kidney damage and subsequent decrease in the kidney function.
The high prevalence of HTN among our patients is alarming and should be taken into consideration, as many studies have reported the relation between high BP and the development of CKD [13]. The not-well-understood nephrotoxic effect of smoking, which includes endothelial cell dysfunction and increased insulin resistance regardless of diabetic status, can explain this finding [13].
Regarding age, it was found to be a significant risk factor (P value < 0.001). This can be explained by the steady decline in GFR with normal aging, a process that is accelerated by superimposed factors like diabetes. Many studies have reported age as a risk factor for CKD among diabetic patients [11,14,19,20]. Additionally, as noted in Table 2, a significant decrease in eGFR was reported as age increases. These results indicate the need for robust screening of diabetic patients, with a focus on elderly patients.
The average BMI of diabetic patients in this study was 32.5 kg/m² (± 5.8), without a significant correlation with renal function (P value = 0.508). There are inconsistent results regarding the relationship between obesity and CKD: many studies, like the Framingham study [27], show a positive association between BMI and CKD. Another study in the UK showed that there is an increasing risk of CKD with increasing weight [18]. On the other hand, a study in Thailand found a negative association between BMI and CKD, which was attributed to reverse causality, where patients with advanced CKD may have a reduced BMI due to their disease [14]. These variations in results question the reliability of BMI for predicting CKD among diabetic patients. However, it should be noted that the BMI in the whole sample was high, which means that patients should be advised and counseled to lose more weight in the primary health care clinics.
HbA1c is a recommended standard of care for monitoring diabetes. In this study, the mean HbA1c was 8.31%, but it was not significantly related to the presence of CKD (P value = 0.527).
Increasing evidence shows a link between the glycemic environment and renal damage. As with obesity, there are conflicting data regarding this association. A study in Spain showed that HbA1c levels were significantly higher among CKD diabetic patients (OR = 1.011, 95% CI 1.005-1.017, P < 0.001) [21]. However, other studies showed no significant increase in HbA1c level among CKD diabetic patients [20]. This result can be partially explained by the physiologic improvement of HbA1c due to the decreased insulin excretion by the kidneys in patients with impaired renal function [29].
In this study there was no association between gender and CKD (P value = 0.384). The relation between gender and CKD among diabetic patients is inconsistent in the literature.
Many studies showed the female gender as a risk factor [11,14,19]; however, others reported the male gender as a risk factor [21]. This may be due to the distribution of risk factors, like obesity and T2DM control status, between genders.
There were some limitations in this study. First, being a cross-sectional study, not a longitudinal one, it precludes any causal relationship between impaired renal function and its risk factors. Secondly, due to the low resources in the primary care settings, there is a lack of data regarding proteinuria and renal biopsy, which made it difficult to diagnose stage 1 CKD.
Including the diagnosis of CKD based on multiple eGFR measures to establish chronicity, and conducting the study in PHC centers, where almost all diabetic patients in Palestine receive their preventive and curative services free of charge, are the main strengths of our study.
Conclusions
This study demonstrates a high prevalence (23.6%) of CKD among diabetic patients in Palestine. The rate is higher among hypertensive patients and increases with age. We recommend intensive screening of diabetic patients to detect CKD at earlier stages and the implementation of more aggressive treatment modalities for diabetes as well as for other important risk factors, especially HTN and smoking. We also recommend studying the effect of antidiabetic and anti-hypertensive medications on the rate of renal function deterioration and checking treatment compliance in these patients, in addition to assessing the mortality rate and progression to ESRD and dialysis in each eGFR category.
"year": 2020,
"sha1": "e7f3662de6d48290bb47e353ffafc4266be1ddfd",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-020-02138-4",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8e90f7b5112c129311e50c09e55001ab355d1ebd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
143736422 | pes2o/s2orc | v3-fos-license | What is in a name ? A Short Survey on the Sources and the Factors Affecting the Act of Name-Giving
Today, nations are not only transferring goods, services, knowledge, and technologies but also cultures and values which may be points of conflict especially in multicultural societies. This entails creating understanding among nations, people, and ethnic groups through intercultural communication studies and research. An area of interest for many people, lay or expert, is the practice of name-giving; it is more than just an act of designation or an official registration. Names are expressions of cultural identity deeply imbedded in sociocultural contexts showing the underlying values and practices of people in an area. This paper aims at surveying the factors, motives, and reasons of name-giving patterns and at classifying the sources based on which people choose names for their children. On the whole, fourteen common classes of sources and a short description of their subdivisions will be presented in the paper.
Introduction
Rapid development of international trade, the improvement of communication technologies, the ease of access to means of transportation and the progress in globalization have increased the capacity for local as well as international interaction among people (Wang and Le, 2011). These changes and the outcomes of globalization add new concepts to the literature of various disciplines, one of which is intercultural communication competence (Penbek et al., 2009). Today, the world is considered as a village in which coping with differences both at home and abroad is an important issue to be considered. Differences in values, attitudes, ethnicity, social practices, religion, etc. must be fully respected and integrated into our lives. Intercultural communication failure both at the local and the international levels can lead to intercultural maladjustment, misunderstanding, and even cultural shock. One of the causes of this failure is the lack of adequate knowledge and skills to communicate with others from other cultures (Wang and Le, 2011). Intercultural communication competence, considered to be a sub-skill of communication competence (Deardorrff, 2004), gives people the ability to change their attitude and behavior toward others and to be open and flexible to other cultures. It is a critical issue for individuals to survive in the globalized societies of this century. Language is an intrinsic part of our everyday reality and we put our linguistic knowledge to use to give shape to our internal thoughts (Widdowson, 2007). People negotiate, realize, or even reject identities through the use of language, a factor for expressing identity stronger than cultural artifacts such as dress, food, housing, etc. (Wardhaugh, 2006). Our knowledge of the world and our experiences of social events and social realities influence our interpretations of individual words; even the use of personal pronouns can be an indicator of identity (Bloor and Bloor, 2007). Human cognition and human experience are closely related to humans' use of language (Johnstone, 2008). An important manifestation of language is naming. Names not only function as common and rigid devices for direct reference but also function as abstract linguistic markers that signal and reinforce the referents' individuality (Jeshion, 2009). Personal names occur in all languages, forming a special group within the vocabulary of languages; like any other word, they follow the phonological, morphological, syntactic, and semantic rules of the language.
Naming is a specific linguistic act which shows values, traditions, hopes, fears, and everyday events in people's lives; names reveal the preferences and concerns of their owners as well as givers in terms of real life objects, actions, and beliefs (Rosenhouse, 2002). A name is people's possession and identification, telling the world who they are (Mayrand, 2011).
A given name (also known as the first name, forename, and in Christian countries as the Christian name) is a personal name that differentiates between members of a family or a society. A given name is purposefully given to a child by parents, grandparents, godfathers, local clergymen, etc. before or after the birth, unlike the family name, which is an inherited and hence predetermined one (Wikipedia, 2011). The given names come before the family name in most European cultures, and after the family name in Hungary, parts of Africa, and most of East Asia (e.g. China, Japan, Korea, and Vietnam). Some people have more than one forename but usually one is the main forename. Parents invest tremendous effort in choosing the name of their children, which shows the importance of names. The name of a child is important for a child's earliest sense of identity and may affect his/her feeling for a lifetime; it helps define a child within the family, to friends, and to outsiders (Wikipedia, 2011; Aden, 2011).
The Significance of Studying Names
Names give us important insights because they can act as an indicator of the patterns of social and cultural organization of societies. They render important information about the sex and social class of the infant and some background information about the name-giver; names may reveal important information about the social and historical circumstances at the time of the birth of a child. They also exercise a great symbolic power because names are chosen consciously and by free will, and even with a lot of consideration and care. Names identify a person and may send a message to others regarding the name-givers' hopes, prayers, cultural traditions, religious background, etc. (Alford, 1987). Naming is not just a simple act of designation or an official registration of a name for paperwork; a name grants identity to and even develops or establishes the personality of a person. Naming is not a simple phenomenon; it is a clear expression of cultural identity deeply embedded in sociocultural contexts (Encyclopedia of Children, 2011). A given name serves as an identity marker both for the child in his/her future activities and for recognition by others (Gerhards and Hans, 2009). Implicitly shared conventions and traditions adhered to by people constitute the identity of a nation or group; these conventions make actions and transactions, and on the whole the internal functioning of a culture, possible.
Knowledge of naming practices can even help us trace our families back to their origins, for example a village, tell us about their jobs and activities, or even tell us what our ancestors looked like physically (Myrand, 2011). A historical study of names can help us access the past because they are reflections of our ancestors' everyday life, their world views, their family relationships, and beyond. Names have an integrative function too; they integrate the new baby into the family organization or the community he will be a member of and grow up in (Encyclopedia of Children, 2011). Gudrun (2002), for example, reports that most Icelandic personal names are of Nordic origin, many of them are found in sagas, and some can be traced back to the Christianization of the country in the year 1000. Many things can act as an identity marker, for example type of housing, clothes, the way we talk, our car, etc., but names are chosen freely and without any financial cost or investment of time and effort; as such, they are pure expressions of parents' preferences and ideas (Liberson, 2009).
Personal names reveal interesting details about people's lives, their origins, professions, traditions, fashion, social rank, etc. (Ghaleb Al-Zumor, 2009). Toppe (2011) reported that seventy percent of Norwegian names are related to farms. Bryner (2010) comments on the present-day spread of unusual names, believing this shows the parents' overall philosophy that their own child is special, that having a unique name helps the child to be salient and to stand out, that fitting into social norms is something to be avoided, and that names can lead the child towards the parents' desired personality traits. For instance, in Christian countries "Jesus" is not used for boys, although "Mary" is used for girls, showing people's belief that the name Jesus is taboo or sacrilegious in parts of the Christian world (Wikipedia, 2011).
There are also interesting cases of the real effect and use of names in people's everyday activities. In Lithuania, people believe that names can determine or even change a child's destiny (Girvilas, 1978). People in some parts of the world even consider names in deciding their future practices, like marriage. In Burma, for instance, the initial of a child's name is consistent with the day on which the child is born; some days are believed to be incompatible with each other, so people born on these days cannot marry each other and, for example, you cannot find a K-husband with an H-wife (Medlej, 2011). In Korea we can see a similar case, as Koreans avoid marrying people who have the same family name. For example, marriage between the Kims and the Parks, which are two very common last names in Korea, is considered to cause problems. There are interesting cases of the relationship between naming and religion reported too. In Judaism it is a common practice to choose sectarian or theophorous names for men, in Christianity for women, and in Islam for both. In France, during the 1830s and 1840s, priests gave the name Philomene (after Saint Philomena, who was associated with virginity) to illegitimate girls, thereby marking their status (Encyclopedia of Children, 2011).
Social considerations are also closely tied to naming practices. Naming a child after godparents also shows the kind of social relations and the social structure in a society; parents create new social relationships, create and reinforce social networks, and seek alliances through godparent patronage (Encyclopedia of Children, 2011). In a study (Gardner, 1988), it was shown that in Arab countries traditional names refer to virtues of piety, justice, abundance, and purity, and enduring names, that is, names which have retained their popularity, encompass concepts of companionship and kindness; in these countries, new names used by people depict concepts of love, happiness, and hope. Sue and Telles (2007) argue that traditional or ethnic names are more frequently used for boys because they are expected to carry on the family name, reputation, and traditions.
Changes in Name-Giving Practices
Another important aspect of studying personal names and name-giving is looking for changes in the naming practices of different nations or ethnic groups. Convergence into or divergence from other ethnic groups, whether they are minority or dominant groups, could arise from spatial, economic, occupational, or social segregation, or conversely because of acculturation, willingness for integration, a kind of instrumental motivation, etc. These changes are reflected in people's everyday life, their musical taste, the colour and design of their dresses, religion and religious practices, recreational activities, and name-giving practices (Gordon, 1964). A study of changes in name-giving among different nations may show forced acculturation, as when, in 1986, the Bulgarian government forced the Turkish minority to adopt Slavic names, or when the Turkish constitution banned the use and registration of Kurdish names from 1983 to 2000; forced segregation, which happened when, in 1938, Jews were forced to use only Jewish names to make their Jewish origin distinguishable, by adding "Israel" and "Sara" to the names of men and women respectively; voluntary acculturation, which happens, for example, when ethnic groups voluntarily give up their traditional first names and adopt names of the dominant ethnic group without any force by law or other social forces; and voluntary ethnic segregation or maintenance, for which we have the case of the French Revolution of 1789, after which people turned to names derived from the ancient Roman tradition (Gerhards and Hans, 2009).
There are some studies reporting interesting cases of the reasons for changes in names around the world. These days it is becoming a common practice to name children after real or even fictional figures and characters from novels, opera heroes, actors and actresses, musicians, singers, sportsmen, famous businessmen, etc. In 2004, the names "Keira" and "Kiera" gained popularity in the United Kingdom because the British actress Keira Knightley was at the peak of her fame at that time. The song "Hey Jude", released by the Beatles, increased the frequency of the name "Jude" chosen for children. Likewise, after the popularity of the film "Home Alone", there appeared a great interest in adopting the name "Kevin", the main character of the film (Wikipedia, 2011; Encyclopedia of Children, 2011). When the civil war finished, African Americans returned to names like Moses and Abraham, which they had been banned from choosing; they also started to use complete names like Thomas instead of shortened names like Tom and freely adopted names which showed their African origin (Conrad, 2001). Gardner (1992, 1993), considering the traditional role of boys in Arab societies, believes the reason that girls outnumber boys in adopting new names is that boys are supposed to continue the traditions and the good family name; on the other hand, girls are given new names to enhance their attractiveness, with this change suggesting a better future. Regarding the effect of religion on name-giving, we can mention certain Hadiths (words of wisdom by the Prophet Mohammad and Imams) and Quranic verses which include recommendations regarding the characteristics of names chosen for boys. In Egypt, soap opera singers' popularity caused the emergence of new names in the Arab world.
Personal names exist, change, develop, and die; this is what happens in all languages. The study of name-giving and of changes in the name-giving practices of different nations and ethnic groups shows us interesting patterns in a community's culture; sometimes changes in these patterns reveal the presence of a certain spirit among certain people during a certain time (Bryner, 2010; Ghaleb Al-Zumor, 2009).
Sources of Names
Personal names may be derived from different sources and because of different factors and motives. The following list includes the most common sources reported in the literature (Myrand, 2011; Wikipedia, 2011; Encyclopedia of Children, 2011; Conrad, 2011; Ghaleb Al-Zumor, 2009; Djcnonyomaye, 2002):
Names Of Motherland
These names can be the name of the city, province, or the country the child belongs to, generic names meaning homeland, names of races, or names of tribes.
Personal Features
These features may concern the external/physical or internal/spiritual features, which in turn can be positive or negative.
Occupations
This category includes the jobs or even the tools necessary for doing a job.
The Time Of Birth
This can be the name of the day of the birth, the moment or part of the day when the child is born, or the circumstance of the birth, for example, the festivals or celebrations at the time of the birth of the child.
Objects Around
People usually choose the name of objects which are of high importance in their lives.
Traditions Or Local Customs
Children may be given the name of everyday chores, festive customs and activities, objects used in ceremonies, cultural or ethnic concepts, musical traditions, and traditional literary works which are honoured nationwide.
Variations Of Another Name
People, for instance, may change a male name to a female name by adding a suffix to the male name or vice versa.
Geographical Places
These can be both common generic names and proper names like the name of specific cities, villages, mountains, rivers, and plains.
Names Of Important People
These names could include real names of religious or divine, political, artistic, or historical characters or even fictional names in history, myth, and literature.
Names Of Ancestors
Children may be given the names of their ancestors, patronymic names, or even their fathers' names, especially if the father has died.
Names Based On Nature
This category includes various subdivisions such as rain or snow, natural phenomena, seasons, natural objects, weather conditions, common names of animals, the agricultural environment, common names of plants, and the sounds present in nature.
Brands
These may be the names of famous commercial brands and goods, especially luxurious ones.
Harmony Of Names
Sometimes parents who have more than one child try to choose names which are sound-harmonious.
Innovation Names
Sometimes parents devise unique names to make their children salient. These may be names which are not chosen by other people at all or, for example, names derived from abbreviations of another name or even from the combination of other names.
Conclusion
A study of personal names and changes in name-giving patterns leads us to a better understanding of the culture and the socio-cultural values of different societies and ethnic groups. By studying people's choice of names we can understand what is important for them, and this helps us appreciate other people's rights, preferences, feelings, ideas, and practices, which we believe is the purpose of "intercultural studies".
"year": 2012,
"sha1": "63e16b560ee464704e8e5333ee0626c80631985a",
"oa_license": "CCBY",
"oa_url": "https://www.macrothink.org/journal/index.php/jsr/article/download/2127/1885",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "63e16b560ee464704e8e5333ee0626c80631985a",
"s2fieldsofstudy": [
"Sociology",
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Letter to the editor: health professionals' attitudes toward individuals with eating disorders: who do we think they are?
Health professionals are not immune to stigmatizing attitudes and stereotypes found in society-at-large. Along with patients and their loved ones, treatment providers are important stakeholders – and gatekeepers – in the successful delivery of mental healthcare. Prevailing attitudes among professionals can facilitate timely recognition, enable access to care and uptake of evidence-based practices, or undermine help-seeking and therapeutic engagement. At an interactive activity at the 2016 Nordic Eating Disorders Society (NEDS) meeting, we asked health professionals to describe individuals with eating disorders. The most common descriptive term used was “anxiety” followed by “thin”, “sad”, “control”, "female", and "suffering/pain". Further research on professionals’ attitudes toward individuals with eating disorders is necessary to inform education, awareness, and advocacy efforts following the diagnostic revisions in the DSM-5.
Background
Health professionals play a vital role in connecting science to service, and bridging bench-to-bedside gaps in the delivery of care, yet they are not immune to lay stereotypes or stigmatizing beliefs found in the community [1]. Individuals with eating disorders (ED) have been viewed by society-at-large as attention-seeking, blameworthy, or as having a trivial, self-imposed problem [2], and viewed by professionals as vain, manipulative, or difficult [3,4]. These findings are particularly worrying in light of studies of patient perspectives on treatment-seeking and engagement in ED. Individuals with ED highly value clinician attributes such as acceptance, empathy, warmth, and openness, whereas negative clinical encounters are characterized by a judgmental stance, disregard, or prejudice by health professionals [5]. Frequency of stigma exposure is associated with numerous adverse effects on health and well-being for those with ED, including greater ED symptomology, depression, and lower self-esteem [6]. Perceived stigma, or fear thereof, is consistently recognized as a prominent barrier to help-seeking for ED [7], diminishing our ability to identify and effectively treat all who may benefit [8].
Traditional views that ED are afflictions of "thin, affluent, young, white women" [9] render higher-weight individuals, older individuals, males, and ethnic minorities highly susceptible to bias and under-detection. Symptoms may go unrecognized, misinterpreted, or dismissed due to health professionals' expectations about the presentation of an ED. The DSM-5 criteria for ED have recently undergone changes with the removal of female-centric criteria (i.e., amenorrhea) and pejorative terminology (i.e., "refusal" to maintain weight). How these diagnostic changes might affect provider attitudes toward individuals with ED is unclear. More research is also needed to understand professionals' attitudes toward newly added diagnostic labels, including avoidant-restrictive food intake disorder and binge eating disorder, as well as atypical presentations such as muscle dysmorphia [10].
Putting it into words: who do we think they are?
An interactive activity at the 2016 Nordic Eating Disorders Society (NEDS) meeting in Helsinki, Finland offered a recent glimpse into professionals' views toward individuals with ED. The main conference theme of the 2016 meeting was "Information and Misinformation," and 3 days were organized to highlight common myths and misconceptions of ED [9]. At one of the plenaries, the audience was instructed to write down the "first word that comes to mind" to describe someone with an ED. Over 150 professionals attended, with 6 months to 35 years of experience in the field of ED. Limitations notwithstanding, this activity provided a rapid assessment of attitudes and associations at a manifest level and offers an interesting, if not powerful, visual (see Fig. 1). Many words reflected the profound and devastating toll of an ED (e.g., suffering, pain, trapped, struggle). Responses specific to ED pathology (e.g., food) were less common than associated features or comorbidity. Overall, anxiety was the most frequent response, followed by thin, sad, control, female, and suffering/pain.
Conclusions
Whether thinness and female-centric words reflect lingering stereotypes of ED, or simply reflect the clientele treated by this group of professionals is unclear, yet findings deserve further investigation given the implications for potential bias and ascertainment. Encouragingly, and in contrast to some prior indications from the literature [4], little evidence of stigmatizing or pejorative terms was observed; rather, we noted several empathic or humanizing adjectives reflecting strength and individual differences. Research with a variety of professional categories is needed, as this line of investigation would almost certainly prove fruitful to help direct our education, awareness, and advocacy efforts. In particular, targeting primary care professionals is important for early detection, given their likelihood of encountering an undiagnosed eating disorder along the initial pathway-to-care.
"year": 2017,
"sha1": "840160a3cf45f7207ae2190384f5609d4e9c992d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40337-017-0150-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "840160a3cf45f7207ae2190384f5609d4e9c992d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
Microstructural Analysis and Tribological Behavior of AMDRY 1371 (Mo–NiCrFeBSiC) Atmospheric Plasma Spray Deposited Thin Coatings
Water treatment plants include a set of pumping stations, and their mechanical components experience various wear modes. In order to combat wear, the mechanical components of the pumps are coated with various types of wear resistant coatings. In this research, AMDRY 1371 (Mo–NiCrFeBSiC) coatings were deposited with the atmospheric plasma spray (APS) method on parallelepipedal steel samples manufactured from a worn sleeve of a multistage vertical irrigation pump. In order to find an optimum thickness of AMDRY 1371 coatings, the samples were coated with five, seven and nine passes (counted as return passes of the APS gun). Mechanical properties of the coating (microhardness and Young's modulus) were determined by micro-indentation tests. An AMSLER tribometer was used to investigate the wear resistance and wear modes of the coated samples in dry conditions. A mean coefficient of friction (CoF) of around 0.3 was found for all the samples, but its evolution during the one hour of the test and also the final wear volumes and wear rates depended on the thickness of the coating. To estimate the roughness of the surfaces and the wear volumes, measurements were carried out on a Taylor Hobson profilometer. In order to understand the nature and evolution of wear of coatings of various thicknesses, the unworn and worn surfaces of the coated samples were analyzed by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), and X-ray diffraction (XRD). The wear modes of the coatings were studied, emphasizing the coating removal process for each sample. According to our results, for each dry friction application, there is an optimum value of the thickness of the coating, depending on the running conditions.
Introduction
Pumps are the heart of any water treatment plant. Their mechanical components, such as impellers, rolling bearings, seals, bushes, and sleeves, suffer severe damage due to various wear mechanisms: corrosion, abrasion, adhesion, erosion, cavitation, pitting, etc. One of the ways to combat such combined wear is by coating the surfaces with wear-resistant powders, which can be of various types and compositions. Mo-NiCrFeBSiC coatings were proposed by Sampath and Vanderpool [17], and the values of the dry coefficient of friction (CoF) obtained by tests on a ball-on-disc tribometer were very high, the lowest CoF being 0.66 for a 10 N normal load and 0.5 m/s sliding speed (440C ball sliding against a coated disc). The results presented in [17] were not so encouraging, with high values of the dry friction coefficient being reported. Yegunov et al. [18] investigated the features of the coaxial laser gas powder surfacing (CGPS) of the AMDRY 1371 powder alloy (also known as Mo+NiCrBSi alloy) and reported low porosity and low fracture susceptibility of the deposited coatings, with better results at low speed and high power consumption, but no tribological study was carried out on the AMDRY 1371 coating. Preliminary research by Niranatlumpong and Koiprasert [19] reported that the effect of added Mo on the tribological properties of NiCrBSi plasma sprayed coatings is beneficial for wear resistance with a small ratio (25 wt.%) of added Mo. Zhang et al. [20] added 5 wt.%-30 wt.% Mo to NiCrBSi and carried out dry and oil lubricated reciprocating friction tests. It was revealed that 30% of Mo in NiCrBSi provides the best tribological behavior in both dry and lubricated tests. Nevertheless, the lowest value of dry friction CoF was 0.6, which is very high, but the wear rate was drastically reduced by about 96% when compared to pure NiCrBSi. Dilawary et al. [21] added only 10 wt.% of Mo to NiCrBSi and reported that the wear resistance of plasma transfer arc (PTA) deposited hardfacing increased at both room and high temperatures (300-700 °C) due to the formation of Mo oxides. The measured CoF during ball-on-disc tests was between 0.4 and 0.9, the highest CoF being obtained for NiCrBSi + 10 wt.% Mo at room temperature. Liu et al. [22] investigated the effect of heat treatment at 300, 500, and 700 °C of atmospheric plasma sprayed NiCrBSi coatings on their microstructure, phase composition, microhardness and tribological performance. Inter-splat oxidation of the coatings took place at high temperatures, reducing their toughness and increasing the microhardness and wear rate, whereas it did not exert a pronounced effect on the dry friction coefficients. Sang et al. [23] showed that APS deposited NiCrBSi coatings from small diameter particles (50-75 µm) have diminished porosity but also smaller hardness values and corrosion resistance than coatings made from larger diameter particles (75-100 µm).
In this research, coatings made of AMDRY 1371 (Mo-NiCrFeBSiC) powder were deposited by the APS process in multiple successive passes on samples made of AISI 304 (EN 1.4301) steel substrate. The aim of this paper is to investigate the effect of coating thickness on its obtained microstructure and tribological properties, with no studies being published until now on thickness optimization for this coating.
Materials
The chemical composition of this powder, according to Oerlikon-Metco's online catalogue, is presented in Table 1. As can be observed, Mo is in a proportion of about 75 wt.%. The manufacturer recommends this powder with high molybdenum content for coatings with scuffing resistance, high toughness, and a low friction coefficient, as well as for application on pump bushings and sleeves. The morphology of the powder consists of spheroidal particles with particle sizes of 90 + 25 µm. This blending powder has a melting point of 660 °C and a service temperature ≤ 340 °C, with APS or HVOF indicated as the compulsory spray processes. The supplier indicated that the finishing of such coatings typically uses wet grinding with a SiC or diamond wheel. In order to keep the morphology of the deposited coatings unmodified, no grinding was applied, and the coatings were studied as they resulted from the deposition process.
Parallelepiped samples of 100 mm × 10 mm × 5 mm were cut from a worn irrigation pump sleeve made of AISI 304 (EN 1.4301) [24]. Before the APS process, the steel samples were sandblasted and polished. Coated pads were realized by APS deposition of AMDRY 1371 (Mo-NiCrFeBSiC) on the steel substrate, using an Oerlikon-Metco 9MB gun (Pfäffikon, Switzerland) and SPRAYWIZARD 9MCE equipment (Sulzer & Metco, Pfäffikon, Switzerland); the technological parameters of the coating process are indicated in [24]. Three samples were coated with 5, 7, and 9 successive deposition passes of AMDRY 1371, the samples being denoted in this paper as 5L, 7L, and 9L.
Surface Topography Measurement
The roughness of the tested samples and also the profile of the wear traces on samples were measured using a stylus profilometer (made by Taylor Hobson, Leicester, UK), model Form Talysurf 50 and the µltra Intra Form Talysurf interpretation software (Ultra Version 5.5.4.20). For the used standard stylus arm code 112/2009, the range/resolution was 1.0 mm/16 nm (0.04 in/0.64 µin).
Microhardness Measurements
The Rockwell microhardness of the coatings was measured using the indentation module of the CETR UMT-2 micro-tribometer (Luleå, Sweden); the microhardness mean values were obtained as the arithmetic mean of five tests. The flat coated samples were indented using a Rockwell diamond tip indenter (CETR, Luleå, Sweden), with an opening angle of 120° ± 0.35°, a radius of 200 ± 10 µm, and a deviation from profile of ±2 µm. The Young's modulus, E, was also obtained. The indentation method consisted of a progressive increase in the indentation force from 0 to 5 N and then the return to the initial value. The capacitive sensor along with the force sensor allowed us to obtain the typical indentation diagram (force-deformation). The software for micro-indentation and the microscratch test was the CETR-UMT Test Viewer.
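The evaluation formulas used by the CETR software are not reproduced here; one common route from an indentation force-deformation curve to an elastic modulus is the Oliver-Pharr analysis, in which the unloading stiffness S = dF/dh gives a reduced modulus E_r = (sqrt(pi)/(2*beta))*S/sqrt(A) and the coating modulus follows from 1/E_r = (1 - nu_c^2)/E_c + (1 - nu_i^2)/E_i. The Python sketch below illustrates this route only; the stiffness value, the shallow spherical-contact area estimate, and the coating Poisson ratio are illustrative assumptions, not data or methods taken from the study.

```python
import numpy as np

def reduced_modulus(S, A, beta=1.0):
    """Oliver-Pharr reduced modulus from unloading stiffness S [N/m]
    and projected contact area A [m^2]: E_r = sqrt(pi)/(2*beta) * S/sqrt(A)."""
    return (np.sqrt(np.pi) / (2.0 * beta)) * S / np.sqrt(A)

def coating_modulus(E_r, E_i=1141e9, nu_i=0.07, nu_c=0.30):
    """Coating Young's modulus from the reduced modulus, assuming a diamond
    indenter (E_i, nu_i) and an assumed coating Poisson ratio nu_c."""
    inv = 1.0 / E_r - (1.0 - nu_i ** 2) / E_i
    return (1.0 - nu_c ** 2) / inv

# Illustrative numbers only (not taken from the paper):
R, h = 200e-6, 10e-6                 # 200 um tip radius, ~10 um indentation depth
a = np.sqrt(2.0 * R * h)             # contact radius of a shallow spherical contact
A = np.pi * a ** 2                   # projected contact area
S = 6.3e6                            # assumed unloading stiffness, N/m
E_r = reduced_modulus(S, A)
print(f"E_r = {E_r / 1e9:.0f} GPa, E_coating = {coating_modulus(E_r) / 1e9:.0f} GPa")
```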
Morphological and Structural Analyses
Surface and cross-section SEM images of the coatings were taken with the SEM equipment Quanta 200 3D Dual Beam (Waltham, MA, USA). The EDS analysis was realized using the unit model XFlash (Bruker, Billerica, MA, USA). XRD analysis was carried out using an Expert PRO MPD facility from Panalytical (Almelo, The Netherlands), with a Cu X-ray tube (Kα = 1.54051 Å) [25].
Tribological Friction and Wear Tests
The AMSLER machine, known also as a two-disc machine, was first fabricated by Alfred J. AMSLER & Co., Schaffhouse, Switzerland, and used for wear tests of metals under a wide variety of testing conditions [26]. Our AMSLER machine (Type A 135, made by Wolpert Werkstoffprüfmaschinen G.mb.H. in Schaffhausen, Switzerland) was used to test the tribological pairs composed of a fixed upper coated sample and lower rotating disc made of AISI 52100 rolling bearing steel with hardness 60-64 HRc, with an equal radius of 29.5 mm in both radial and axial directions, and a disc thickness of 10 mm. To keep the upper coated sample fixed, the upper gears transmission chain was interrupted [27]. A complete description of the testing machine is provided in [27][28][29][30]. For the sake of completeness, images of the AMSLER machine and the data acquisition system are also provided in this paper ( Figure 1). Tribological tests were repeated thrice and mean values were reported. All the tests were carried out in dry friction conditions, with a constant applied load of 20 N, and constant speed of 100 rpm, the running time of each test being one hour. The load was applied by dead weights. A data acquisition chain based on tensometric measurements and Vishay P3 strain gage bridge (made in Braunschweig, Germany) provided friction torque measurements. The acquired data were post-processed by a LabVIEW virtual instrument (Version: 7.1) [27][28][29][30]. The computation formulas for the mean friction moment, mean coefficient of friction (CoF), and the data acquisition chain calibration procedure are presented in [28].
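The calibration and averaging formulas are given in [28]; as a rough illustration of the idea, for a fixed upper sample pressed against the rotating disc the friction force acts at the disc radius, so every acquired torque sample can be converted into an instantaneous CoF and then averaged. The Python fragment below is a minimal sketch under that assumption; the synthetic torque signal stands in for the acquired data and is not the measured record.

```python
import numpy as np

def cof_from_torque(torque_Nm, normal_load_N=20.0, disc_radius_m=0.0295):
    """Instantaneous coefficient of friction from the measured friction torque,
    assuming the friction force acts at the disc radius:
    F_f = M_f / r  ->  CoF = M_f / (F_N * r)."""
    return np.asarray(torque_Nm, dtype=float) / (normal_load_N * disc_radius_m)

# Placeholder one-hour torque record sampled at 1 Hz (N*m), purely synthetic
t = np.linspace(0.0, 3600.0, 3601)
m_f = 0.18 + 0.01 * np.sin(2.0 * np.pi * t / 600.0)
cof = cof_from_torque(m_f)
print(f"mean CoF = {cof.mean():.2f}, std = {cof.std():.3f}")
```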
In Figure 1, the following notations were adopted: 1-stationary coated sample; 2-AISI 52100 rotating disc; 3-dead weight loading system; 4-tensometric sensor; and 5-data acquisition system (type: Vishay P3 tensometric bridge with four channels, made by Vishay, in Braunschweig, Germany).
Surface Topography
Before the tribological tests, the arithmetic mean roughness, Ra, of each tested sample surface was measured. Profilometry results are presented in Table 2. After each friction test, the rotating disc of AISI 52100 steel used in the friction and wear tests carried out on the AMSLER machine was polished, with the obtained mean roughness also being indicated in Table 2. The polishing was realized using sandpaper of grit 320. The aim of the polishing process was to obtain a similar roughness for all the tests but also to shorten the processing time of the surface. For the roughness line analysis, the following parameters were adopted: Gaussian filter, cutoff (Lc) 0.25 mm, cutoff (Ls) 0.008 mm, bandwidth 30:1. The variation of the roughness of the obtained samples is due to the nature of the APS process, the successive passes erratically leading to splat-on-splat melted powder structures.
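For reference, the arithmetic mean roughness is the mean absolute deviation of the roughness profile from its mean line, the mean line being obtained with the Gaussian profile filter and the cutoffs quoted above. The sketch below reproduces that workflow in a strongly simplified form; the synthetic profile and the filter discretization are illustrative assumptions and not the Talysurf software's algorithm.

```python
import numpy as np

def gaussian_mean_line(z, dx, cutoff):
    """Simplified ISO-style Gaussian profile filter: the mean (waviness) line is
    the convolution of the profile with a Gaussian weighting function of
    cutoff wavelength `cutoff` (same length unit as dx)."""
    alpha = np.sqrt(np.log(2.0) / np.pi)
    x = np.arange(-3.0 * cutoff, 3.0 * cutoff, dx)
    s = np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
    s /= s.sum()
    return np.convolve(z, s, mode="same")

def Ra(z, mean_line):
    """Arithmetic mean roughness of the residual profile, in the units of z."""
    return np.mean(np.abs(z - mean_line))

dx = 0.001                                    # sampling step, mm
x = np.arange(0.0, 4.0, dx)                   # 4 mm evaluation length
rng = np.random.default_rng(0)
z = 2.0 * np.sin(2.0 * np.pi * x / 2.5) + 0.8 * rng.standard_normal(x.size)  # um
mean_line = gaussian_mean_line(z, dx, cutoff=0.25)                           # Lc = 0.25 mm
print(f"Ra ~ {Ra(z, mean_line):.2f} um")
```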
Hardness and Elasticity Modulus
Rockwell microhardness HR, in GPa, was measured. As the adopted test was HR0.5 (preload 0.5 N and maximum applied load of 5 N), the indenter maximum displacement was below 12 µm, far from the thickness of the deposited coatings (over 50 µm). For this reason, global values of the microhardness and elastic modulus were computed as mean values of five measurements ( Figure 2). It can be seen that the reduced indentation modulus, E0, is near to Young's elasticity modulus (Table 3). As the EDS results proved (see Table 4), the composition of the coating is different from point to point. Additionally, the surface roughness may influence the indentation results. Regarding the microhardness, it seems that it depends on the APS spraying distance [31], which was kept constant in our research. As we observed, the measured microhardness of coatings depends on the indentation point, the spread of the results being acceptable for indentations at micro level. A close value of Young's modulus, E = 55 ± 6 GPa, was reported for APS deposited AMDRY 1371 in [32].
SEM Analysis
Images of surface SEM analysis of the base material and coated samples are presented in Figure 3. The base material microstructure is composed of evenly distributed α-ferrite (darker) and perlite (lighter) grains (Figure 3a). The AMDRY 1371 powder coated surface possesses a morphology characteristic of the APS deposition process, with splats, pores, and rare partially melted powder particles (see Figure 3b,c). It must be noticed that the semi-melted light spherical particles are of Mo (Figure 3c), as proved by the EDS results and confirmed by [19]. At a high magnification of 5000× (Figure 3d), the increased molybdenum content (over 75%) formed a flake-like microstructure, which may protect the surface during the dry running-in period of the tribological tests. The thickness of the deposited coatings is much less than that reported in [19], which is about 300-400 µm. As the present friction and wear results will prove, a very large thickness is not always beneficial from the viewpoint of the dry wear rate of sliding contacts. Some pores and micro cracks can be observed at a higher magnification of 2000×. As seen from the SEM images, the microstructure of these coatings is better than that obtained by Zhang et al. [20] for 5 wt.% to 30 wt.% added Mo in NiCrBSi, with fewer pores and voids observed (Figure 4).
EDS Analysis
The EDS results for the unworn surfaces of coated samples were obtained as mean values (wt.%) of four measurements at random points of the coatings (Figure 5). Additionally, computed global mean values of elements (wt.%) for all the AMDRY 1371 samples are presented in Table 4. Comparing the composition of the commercial deposited powder (Table 1) against the mean values from the surface EDS analysis of the deposited coatings (Table 4), it seems that Mo has a tendency to migrate to the surface during the APS deposition process, while Cr remains in the substrate in a larger quantity. The presence of Mo in the surface zones is a gain from the viewpoint of friction reduction capabilities.
In Figure 5, we emphasized the EDS analyzed surfaces and reported the mean values in Table 4. For pertinent mean values of elemental analysis, in Figure 5a we selected a larger area containing some voids. The idea was to compare the obtained general mean values, including the analysis of selected smaller specific areas, with the results of this measurement influenced by the existence of these voids. As observed, the general mean values lie between the results of test 1 and the mean of tests 2 to 4, as they should for a porous structure. Figure 6 presents the results of line scan EDS analysis in cross-sections for all 5L, 7L, and 9L samples. The base material is observed on the left side of each subpicture. The increase in Mo content over the thickness of the deposited coatings is accompanied by a strong decrease in Fe content for all the samples.
Friction Tests on AMSLER Machine
Friction tests were carried out on the AMSLER machine in dry conditions, at constant load and constant speed. The friction moment evolution in time was monitored by a data acquisition system [27,28], and the mean values of the friction coefficients were obtained by post-processing of the acquired data using a developed LabVIEW virtual instrument [27].
At the beginning of the tests (Figure 7a), the friction torque of sample 5L was less than that of 7L and 9L, but it undertook an ascendant trend as the thinner coating was removed by abrasion. Its dynamic evolution in the range of 1000-2000 s is supposed to be due to the direct contact between the AISI 52100 surface of the counterpart test roller and the base material surface. The thicker coatings presented a constant linear evolution of the friction torque (9L) and a descendent evolution for sample 7L. The lowest coefficient of friction (CoF) was obtained for sample 5L, but the CoF values increased slowly for the thicker coatings, 7L and 9L. The evolution of the 7L friction coefficient was as generally expected, with a typical curve being indicated in [33]. Regarding the obtained values of CoF, around 0.3, they are better than those reported by Dilawary et al. [21], who found values of CoF increasing from 0.4 to 0.9 for tests at room temperature on PTA deposited Mo(10%)-NiCrBSi and a sliding distance of 500 m.
The fluctuation of the friction moment during tests affected the mean values of the friction coefficients (see Figure 7b). To interpret the data, friction results must be correlated with the wear results, EDS, and XRD analysis, this aspect being treated later in the Discussion section.
Wear Analysis of Coatings
The wear analysis of the coatings includes SEM analysis of the wear spots and their vicinity (Figure 8), EDS analysis of worn coatings, and XRD analysis.
SEM Analysis
The left side of Figure 8 contains magnifications at 50× of the wear spots of each coating with dimension lines of each wear scar: 5L (Figure 8a), 7L (Figure 8b), and 9L (Figure 8c). The SEM images of sample 5L at higher magnification (Figure 8a, 1000×) expose crushed material and micro cracks, typical of combined abrasive and adhesive wear. It can be seen that 7L has the smallest wear scar area, while 5L has the biggest worn area. The images from the middle column of Figure 8 present the border area between the worn and unworn coated zones of each sample, but at higher magnification (100×). Samples 7L and 9L reveal a glassy microstructure (darker area), indicating the presence of Si and MoO2 oxides and intensified sliding friction between the glossy contact surfaces. Scratches can be seen in the middle of the wear spot of sample 9L, indicating abrasive scratching with hard particles. As the XRD analysis has shown, there are some CrB particles on the wear scar of the 9L coating. An agglomeration of such particles at the bottom of the wear cup, entrapped in the Mo substrate, adhered to the similar metallic component from the counterpart surface of the AISI 52100 testing roller and produced these scratches.
EDS Point and Line Analysis of Worn Coatings
To identify light and dark regions within the wear spots, EDS analysis was also conducted according to Figure 9 and Table 5. Figure 10 presents the results of line scan EDS analysis over the worn area of all the tested samples. It can be seen that the scan line over each sample started from the unworn coating (left side) and passed over the border between the unworn and worn surfaces. In this way, the fluctuation of each element (wt.%) over the scanning line was emphasized (Supplementary Material, Figures S4-S6).
The effects of the various elements on the friction and wear of each sample will be treated later, in the Discussion section.
XRD Analysis of Coatings
A 3D map of the elemental distribution over the base material, the coatings without wear, and the worn 5L, 7L, and 9L samples is provided in Figure 11. The highest peaks correspond to Mo, and in their near vicinity, to the right and left, there are peaks associated with MoO2. The obtained distribution of the elements is similar to [19], a reference that also studied Mo-NiCrFeBSi (75-25 wt.%).
Wear Profiles and Wear Rates
In order to obtain the exact wear volume of each coated sample, the measurements of the worn area from the SEM images (Figure 8) were combined with the information from the worn profiles (Figure 12). The deep scratches in the middle area of sample 9L can be seen in the transversal profile image of Figure 12f, this remark being in close correlation with the results presented in Section 3.5.1. The computed wear rate, W, in mm3/(N·m), expressed by Equation (1), from [33], is represented in Figure 13,
where V is the volume of wear of the coated sample in Equation (2), computed as the volume of the demi-ellipsoid with the semiaxes a, b, and c measured from the raw wear profiles, Q is the applied normal load on the contact (20 N), and L is the total sliding distance given by Equation (3), in this case 1112 m. The AISI 52100 roller diameter is 59 mm, the speed N = 100 rpm, and the time T = 3600 s. The highest wear volume and wear rate were obtained for sample 5L, while the lowest corresponded to sample 7L (Figure 13). An explanation for the increased wear volume of 5L is provided by the existence of a higher proportion of hard compounds based on Cr, B, and C, denoted as peak five in Figure 11. The error bars are in close correlation with the variation of the measured friction torque, the influence of friction on the wear process being evident.
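The numbered formulas can be reconstructed from the definitions just given: the specific wear rate W = V/(Q·L) (Equation (1)), the demi-ellipsoidal wear volume V = (2/3)·π·a·b·c (Equation (2)), and the sliding distance L = π·D·N·T/60 with N in rpm and T in seconds (Equation (3)), which indeed gives 1112 m for the stated test parameters. The Python sketch below evaluates these relations; the wear-scar semiaxes are placeholders for the values measured from Figures 8 and 12.

```python
import math

def sliding_distance(D_m, N_rpm, T_s):
    """Total sliding distance L = pi * D * N * T / 60 (reconstruction of Eq. (3))."""
    return math.pi * D_m * N_rpm * T_s / 60.0

def demi_ellipsoid_volume(a_mm, b_mm, c_mm):
    """Wear volume of a demi-ellipsoid with semiaxes a, b, c, in mm^3 (Eq. (2))."""
    return (2.0 / 3.0) * math.pi * a_mm * b_mm * c_mm

def wear_rate(V_mm3, Q_N, L_m):
    """Specific wear rate W = V / (Q * L), in mm^3/(N*m) (Eq. (1))."""
    return V_mm3 / (Q_N * L_m)

L = sliding_distance(D_m=0.059, N_rpm=100, T_s=3600)        # ~1112 m, as in the text
# Placeholder semiaxes (mm); the real values are read from the wear profiles
V = demi_ellipsoid_volume(a_mm=1.5, b_mm=1.0, c_mm=0.02)
print(f"L = {L:.0f} m, V = {V:.3f} mm^3, W = {wear_rate(V, 20.0, L):.2e} mm^3/(N*m)")
```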
Discussion
In reference to previously published research on similar subjects, only Niranatlumpong and Koiprasert [19] have studied both the microstructure and the tribological properties of APS coatings from Mo-NiCrFeBSi (75-25 wt.%) powder. The results reported in [19] are not positive for the Mo-NiCrFeBSi (75-25 wt.%) coating, that is AMDRY1371. In tribological tests of [19], the tested materials were the same as in this paper, and even the applied load was similar: 25 N in [19], and 20 N in this research. Dry conditions and pure sliding were chosen in both tests, but the most important aspect is the existing Hertz pressure, σH, between the contact bodies, computed according to Equation (4).
where Q is the applied load on the contact, a and b are the semimajor and semiminor axes of the elastic elliptical contact [34]. In fact, [19] warned that this coating is not suited for very high pressures, but a high pressure was obtained by choosing a small diameter ball, even if a reduced load was applied. The ball with a 6.3 mm diameter used as counter-body in [19] creates a Hertz pressure of 1135 MPa, corresponding to high loads and severe exploitation regime, typical for mechanisms with concentrated contacts (e.g., rolling bearings and gears), while the computed Hertz pressure in our contact, created by a disc of 59 mm diameter, was only 237 MPa, as encountered in highly loaded sleeve bearings, this being the aim of this study. This aspect, correlated with the thicker coatings (350-400 µm) deposited in [19], led to the rapid crash of the coating containing 75% Mo and 25% NiCrFeBSi, even if the testing time was reduced (1000 s). As Marquer et al. [4] reported, the frictional properties are more dependent on the contact pressure rather than on the sliding velocity.
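Equation (4) itself is not reproduced above; for an elliptical Hertzian point contact the maximum pressure is commonly written as sigma_H = 3Q/(2*pi*a*b), with the semiaxes a and b following from the contact geometry and elastic constants [34], and it may be this form that the quoted 1135 MPa and 237 MPa values refer to. The snippet below only evaluates that expression; the semiaxis value is an assumption chosen for illustration, not the one computed in the paper.

```python
import math

def max_hertz_pressure(Q_N, a_m, b_m):
    """Maximum pressure of an elliptical Hertzian contact:
    sigma_H = 3 * Q / (2 * pi * a * b)."""
    return 3.0 * Q_N / (2.0 * math.pi * a_m * b_m)

# Assumed circular contact (a = b) of 0.20 mm radius under the 20 N test load
p = max_hertz_pressure(20.0, 0.20e-3, 0.20e-3)
print(f"sigma_H ~ {p / 1e6:.0f} MPa")
```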
Regarding the results on the friction coefficient during our tribological tests (Figure 7), they are closely related to the different roughness of the tested samples (Table 2). This supposition is confirmed by the findings of [19], where reduced friction coefficients of 0.1-0.15 were obtained by polishing the coatings up to Ra = 0.5 µm. The 5L samples had a higher roughness, from here the primordially flake-like Mo particles of the asperities (see Figure 3) were broken from the beginning and entered the contact as solid lubricant, assuring a mild abrasive wear and extended contact area. The evolution of the friction coefficient fluctuated after 1000 s, the time supposed to be necessary to remove the thinner coating of sample 5L. After that, the metal-to-metal contact of the base material and the testing steel counter-body contributed to a continuous increase in the friction torque. However, the EDS and XRD analysis proved the existence of a high quantity of Mo in the contact area of sample 5L at the end of the test, demonstrating the existence of solid lubrication regime over an extended contact area with diminished real contact pressure, the obtained mean friction coefficient being the lowest. The results of XRD analysis (Figure 11) confirmed the findings of Dilawary et al.
Discussion
In reference to previously published research on similar subjects, only Niranatlumpong and Koiprasert [19] have studied both the microstructure and the tribological properties of APS coatings from Mo-NiCrFeBSi (75-25 wt.%) powder. The results reported in [19] are not positive for the Mo-NiCrFeBSi (75-25 wt.%) coating, that is AMDRY1371. In tribological tests of [19], the tested materials were the same as in this paper, and even the applied load was similar: 25 N in [19], and 20 N in this research. Dry conditions and pure sliding were chosen in both tests, but the most important aspect is the existing Hertz pressure, σ H , between the contact bodies, computed according to Equation (4).
where Q is the applied load on the contact, and a and b are the semimajor and semiminor axes of the elastic elliptical contact [34]. In fact, [19] warned that this coating is not suited for very high pressures, yet a high pressure was obtained in that study by choosing a small-diameter ball, even though a reduced load was applied. The ball with a 6.3 mm diameter used as counter-body in [19] creates a Hertz pressure of 1135 MPa, corresponding to high loads and a severe operating regime, typical of mechanisms with concentrated contacts (e.g., rolling bearings and gears), while the computed Hertz pressure in our contact, created by a disc of 59 mm diameter, was only 237 MPa, as encountered in highly loaded sleeve bearings, which is the regime targeted by this study. This aspect, correlated with the thicker coatings (350-400 µm) deposited in [19], led to the rapid failure of the coating containing 75% Mo and 25% NiCrFeBSi, even though the testing time was short (1000 s). As Marquer et al. [4] reported, the frictional properties depend more on the contact pressure than on the sliding velocity.
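To make the order of magnitude of these contact pressures easy to reproduce, a minimal sketch is given below. It assumes the common maximum-pressure form of the Hertz elliptical-contact relation, σH = 3Q/(2πab); since the actual contact semi-axes are not reported here, the values used are hypothetical placeholders chosen only to illustrate how a small ball concentrates the load compared with a large-diameter disc.

```python
import math

def hertz_max_pressure(load_n: float, a_m: float, b_m: float) -> float:
    """Maximum Hertz pressure (Pa) of an elliptical contact, assuming the
    standard form sigma_H = 3*Q / (2*pi*a*b), with a, b the semimajor and
    semiminor axes of the contact ellipse in metres."""
    return 3.0 * load_n / (2.0 * math.pi * a_m * b_m)

# Purely illustrative semi-axes (hypothetical, not measured values from [19]
# or from this work): a small ball gives a tiny contact ellipse and hence a
# high pressure, while a large-diameter disc spreads the load over a wider area.
print(hertz_max_pressure(25.0, 100e-6, 100e-6) / 1e6)   # ~1190 MPa (small ball)
print(hertz_max_pressure(20.0, 280e-6, 145e-6) / 1e6)   # ~235 MPa (large disc)
```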
Regarding the results on the friction coefficient during our tribological tests (Figure 7), they are closely related to the different roughness of the tested samples (Table 2). This supposition is confirmed by the findings of [19], where reduced friction coefficients of 0.1-0.15 were obtained by polishing the coatings down to Ra = 0.5 µm. The 5L samples had a higher roughness, so the primarily flake-like Mo particles forming the asperities (see Figure 3) were broken from the beginning and entered the contact as solid lubricant, assuring a mild abrasive wear and an extended contact area. The evolution of the friction coefficient fluctuated after 1000 s, the time presumably necessary to remove the thinner coating of sample 5L. After that, the metal-to-metal contact between the base material and the testing steel counter-body contributed to a continuous increase in the friction torque. However, the EDS and XRD analyses proved the existence of a high quantity of Mo in the contact area of sample 5L at the end of the test, demonstrating a solid-lubrication regime over an extended contact area with diminished real contact pressure, the obtained mean friction coefficient being the lowest. The results of the XRD analysis (Figure 11) confirmed the findings of Dilawary et al. [21], which indicated the presence of Cr5B3, Ni3B, and Cr7C3 around 83 degrees (peak five, Figure 11). They asserted that Mo refined the microstructure of hardfacing NiCrBSi, introducing a new Mo2(B, C)-type boro-carbide phase. All these compounds containing Cr and B are hard particles, and they were found on the map of Figure 11 in a greater proportion on the worn surface corresponding to sample 5L. This explains the accelerated abrasive wear of 5L, the phenomenon being accompanied by micro-adhesions between similar metallic compounds of the contacting bodies (see Figure 8a, 1000×). Furthermore, as the line-scan EDS analysis shows, the 5L wear scar contains an increased percentage of iron just over the severely worn area (Figure 10d, Fe in magenta). The quantity of Mo (see XRD results, Figure 11), O, and Ni in the worn area is also abundant, hence the reduced friction coefficient, which nevertheless showed an increasing trend over time due to the contact between the bare base-material surface and the counterpart steel roller surface. In comparison, more O was found over the worn area of samples 7L and 9L (Figure 10e,f), and the Mo and MoO2 compounds protected the contact area and slowed down the wear rate. On the scratched area of sample 9L (Figure 8c), there is an increased amount of Fe, Cr and B (Figure 10f), that is, an agglomeration of hard particles, but a reduced quantity of Mo. A high Mo content is found near these scratches, confirming that the agglomeration of hard particles of Fe, Cr and CrB was entrapped in the Mo, producing the scratching in the sliding direction due to the high pressure applied on the sliding contact. The highest distribution of Mo was found in the center of the contact area of sample 7L, explaining the reduced wear. This positive result is confirmed by Dilawary et al. [21], who emphasized the beneficial contribution of 10% Mo added to NiCrBSi powder, with the wear resistance increasing two-fold.
Regarding the wear modes, common to all the coatings deposited by APS, the SEM images confirmed splat delamination and abrasion. Furthermore, by varying the thickness of the coatings, it appears that the thin 5L coating was subjected to both abrasive and adhesive wear over time, while the thicker coating favored agglomeration of Fe, Cr, and B hard particles and the initiation of abrasive ploughing wear. The surface of sample 7L had no scratches and became glassy at the end of the friction test, indicating mild abrasive wear at the coating surface; a high content of O and Mo was detected by EDS line analysis of the worn surface.
Comparing the obtained wear rates with results from the literature, the 7L and 9L samples displayed better behavior than HVOF-sprayed Colferoloy-NiCrFeSiBC coatings (1 × 10^-4 mm^3/(N·m)) tested at room temperature [11]. For a similar coating, [19] only reported results on wear depth (175 µm), whereas our samples presented maximum depths of the wear path of around 95, 50, and 55 µm for samples 5L, 7L, and 9L, respectively, even though the testing time in [19] was only 1000 s and the initial roughness of the samples was reduced by polishing (Ra = 0.5 µm).
Conclusions
Mechanical components of irrigation and slurry pumps experience various kinds of wear during service. In order to combat wear, these components can be coated with diverse types of wear-resistant coatings. AMDRY 1371 (Mo-NiCrFeBSiC) is a high-molybdenum-content (75 wt.%) coating; Oerlikon-Metco's online catalogue recommends it as a coating for pump bushings and sleeves.
In this research, coatings made of AMDRY 1371 (Mo-NiCrFeBSiC) powder were deposited by the APS process in multiple successive passes on parallelepipedal AISI 304 (EN 1.4301) steel samples manufactured from a worn sleeve of a multistage vertical irrigation pump. The aim was to investigate the effect of coating thickness on its obtained microstructure and tribological properties. To find an optimum thickness of AMDRY 1371 coating, the samples were coated by five, seven and nine passes (counted as return passes of APS gun), and denoted as 5L, 7L, and 9L, respectively.
Mechanical properties of the coating (microhardness HR 0.5 and Young's modulus) were determined by micro-indentation tests. The HR 0.5 microhardness and Young's modulus were assessed as mean values of five indentation tests, yielding HR 0.5 = 0.464 GPa and E = 64 GPa. A similar value, E = 55 ± 6 GPa, was reported for the same coating in [33].
Friction behavior, wear resistance, and wear modes of the coated samples were investigated by tests on an AMSLER tribometer at a constant load (20 N), constant speed (100 rpm), and dry conditions, the duration of each test being one hour. The obtained mean value of CoF was found to be around 0.3 for all the samples. Depending on the thickness of the coatings, their initial roughness, and their microstructure, the evolution of the friction moment during the one-hour test, as well as the final wear volumes and wear rates, differed markedly.
Combining the information gathered from all the tests, it was found that high roughness and a thin coating promote metal-to-metal contact between the surfaces of the base material and the counterpart steel roller used in the tribological tests; after about 1000 s the thin coating of sample 5L was partially removed and an accelerated wear process started. The lowest wear volume and wear rate were obtained for the 7L coating, while for the thicker 9L coating scratches were found on the wear surfaces, due to the agglomeration of Fe, Cr, and B hard particles.
Comparing the obtained wear rates with results from the literature, samples 7L and 9L showed similar or better results. According to our results, there is an optimum value of the coating thickness for each application, and improvements to the friction and wear properties are possible by decreasing the surface roughness of the coating through wet polishing, as recommended by the powder manufacturer.
The reported results have both practical and scientific value. In the short term, such a coating can be applied to the investigated mechanical component of irrigation pumps that is subjected to abrasive wear. In addition, our findings have scientific relevance as well, since similar results have previously been reported for a different coating [30].
Future research on the subject should aim at tests under oil-lubricated conditions at various loads and speeds. Additionally, friction and wear results will be correlated with theoretical results on the equivalent stress developed in coatings and substrates, as the thickness of the coating may influence the position of the maximum-stress point relative to the interface between coating and base material. | 2020-12-10T09:07:02.675Z | 2020-12-04T00:00:00.000 | {
"year": 2020,
"sha1": "ff123acbe51913e387808a189f7a6cae992bac18",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6412/10/12/1186/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "da2a598d1208a367da812997eb76274782a92553",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
28847846 | pes2o/s2orc | v3-fos-license | Sexually transmitted infections and substance use disorders: evidence and challenges in Mexico
According to international reports, Mexico has a high prevalence of sexually transmitted infections (STIs), such as human immunodeficiency virus (HIV), hepatitis B virus (HBV) and hepatitis C virus (HCV). Worldwide, there are approximately 34 million people infected with HIV, between 130 and 150 million with HCV, approximately 400 million with HBV, and 12 million infected with syphilis every year.
Epidemiology of STI in substance abusers
According to international reports, Mexico has a high prevalence of sexually transmitted infections (STIs), such as human immunodeficiency virus (HIV), hepatitis B virus (HBV) and hepatitis C virus (HCV). Worldwide, there are approximately 34 million people infected with HIV (National Center for Prevention and Control of HIV-AIDS [CENSIDA], 2011), between 130 and 150 million with HCV (WHO, 2014a), approximately 400 million with HBV (Hepatitis B Foundation, 2014), and 12 million infected with syphilis every year (WHO, 2014b).
Great strides have been made in the understanding, treatment and prevention of STIs. For example, the updating of transfusion protocols and standards in 2000 has significantly reduced the incidence of HCV infections by this means (Mathers et al., 2008). On the scientific and public health policy agendas, however, concern remains about specific populations with a greater risk of infection than the general population, such as substance abusers, who are estimated to have a high prevalence of STI (Scheinmann et al., 2007).
Scientific literature reports that HIV prevalence at addiction treatment centers is approximately 3% among non-injection substance users, as opposed to 27% among injection substance users (Lehman, Allen, Green & Onorato, 1994; Prevots et al., 1996). At the same time, a number of studies show that the prevalence of HCV among injection substance users is above 50% (Aceijas & Rhodes, 2007), with the largest user populations being found in China, the United States and Russia (Nelson et al., 2011). This information is particularly important in view of the fact that the risk of developing chronic diseases after exposure to HCV is higher among substance abusers than non-users (Page et al., 2009; Piasecki et al., 2004; Poustchi et al., 2011), and over ten times higher among those with substance use disorders (SUD) and other psychiatric disorders (OPD) when compared to the general population (Rosenberg et al., 2001).
Studies in Mexico report that the prevalence of HIV in non-injection substance users oscillates between 3.7% and 4% (Deiss et al., 2012; Magis-Rodríguez et al., 2005), while one study on the northern border of Mexico found that 96% of injection drug users tested positive for HCV antibodies, while 2.8% were HIV positive. Several studies have also reported that intravenous or intranasal substance users with HIV have an increased risk of HCV co-infection (Alvarado-Esquivel, Sablon, Martínez-García & Estrada-Martínez, 2005; White et al., 2007).
Another study conducted at outpatient treatment centers for addictions and prisons in the west of Mexico reported a prevalence of 4.1% of HCV, 5.7% of HBV and 1.6% of HIV in the outpatient sample, together with 40% of HCV, 20% of HBV and 6.7% of HIV in the prison sample (Campollo et al., 2012).
Despite progress in this field, there are still very few studies in Mexico on STI in substance abusers. However, available data suggest that substance abusers are a high-risk population in comparison with the general population, which has a much lower prevalence (0.2% of HIV and 2% of HCV) (CENSIDA, 2011; Quer & Mur, 2014).
Risky behaviors in substance abusers
The scientific literature has established a link between substance abuse, risky sexual behavior and STIs. First, substance use can itself constitute risky behavior, since some forms of substance use, such as the use of stimulants like methamphetamines and cocaine, increase the likelihood of having sex with multiple partners and not using a condom (Barta et al., 2008; Brown & Vanable, 2007). Second, it has been estimated that the prevalence of blood-borne infectious diseases is higher among injection substance users than among those who do not inject, since these are easily transmitted by sharing the paraphernalia used to administer drugs. However, some types of infection can also be acquired through the use of contaminated utensils for preparing (cooking) the drug, such as filters, tourniquets and water for rinsing, which may be enough to infect other users (Strathdee et al., 2008). Third, addiction can increase the likelihood of an exchange of substances for sex (Deiss et al., 2012).
Barriers to STI treatment in substance abusers
Among substance abusers, there are individual factors limiting help-seeking for their detection, diagnosis and treatment. These factors include lack of knowledge about medical comorbidities and their impact on health, misperceptions about STIs and their treatment (Grebely & Tyndall, 2011; Treloar, Hull, Dore & Grebely, 2012), absence of symptoms, unemployment, unstable housing, social stigma and barriers to accessing specialized health services (Grebely & Tyndall, 2011; Treloar, Newland, Rance & Hopwood, 2010).
Various studies have shown that other significant factors associated with barriers to accessing treatment for STIs include: comorbidity with other medical diseases (Ho et al., 2008), SUD and OPD (Evon et al., 2013;Lieveld et al., 2013).
Common reasons why people with active substance use are inappropriately denied the treatment are: a) serious side effects resulting from the pharmacological interaction of HIV antiretrovirals with opioid agonist treatments like methadone, b) lower response rates due to liver damage caused by alcohol use, and c) concerns about continuous risk of reinfection due to re-exposure to the virus after treatment (Edlin, 2002;Mathurin, Canva, Dharancy & Paris, 2002;Peters & Terrault, 2002).These are remediable and should therefore not be contra-indications for treatment.
Other factors include treatment costs (Moirand, Bilodeau, Brissette & Bruneau, 2007), lack of infrastructure, limited accessibility and long waiting lists for gaining access to evaluation and treatment services (Grebely & Tyndall, 2011; Swan et al., 2010), as well as the difficulty of ensuring that health professionals adopt screening, assessment and treatment procedures (Pai, Vadnais, Denkinger, Engel & Pai, 2012).
Early detection and initiation of STI treatment in substance abusers
According to available scientific evidence, early detection of STIs is one of the most important preventive strategies, since it is estimated that a large percentage of infections occur through people who are unaware of their HIV status (Hall, Holtgrave & Maulsby, 2012). It is also known that people decrease risky sexual behavior once they are told they have positive serostatus (Marks, Crepaz, Senterfitt & Janssen, 2005). Moreover, there is evidence that people who enter treatment reduce the likelihood of infecting others (Cohen & Gay, 2010).
These benefits of early detection suggest that performing rapid STI tests is highly recommended because: a) it has been shown that they can assist in detection with a diagnostic efficiency level of over 95% (Kyle et al., 2015), b) they are cost-effective and easily transportable (Schackman et al., 2013), c) they are highly accepted by patients (Schackman et al., 2013), and d) they contribute to narrowing the gap between infection and late initiation of treatment (Leber et al., 2015).
STIs in people with SUD: a challenge for Mexico
The situation of STIs in people with SUD poses a challenge for the public health system in Mexico, since, despite efforts made to date to research and address this specific population, there is a gap between the public services for treating STI and those addressing SUD. The body of scientific evidence clearly indicates that patients with SUD have a high prevalence of psychiatric comorbidity. This co-occurrence of SUD and OPD also significantly increases risky sexual behavior and risky use behaviors, both of which are directly associated with a high risk of transmission of STIs (Marín-Navarrete et al., 2016).
Moreover, Mexico has limited coverage of public residential services for people with severe SUD. Accordingly, the past 20 years have seen a growing offer from non-profit organizations, with more than 2,000 self-help-based residential centers. | 2017-10-28T08:15:28.175Z | 2017-01-31T00:00:00.000 | {
"year": 2017,
"sha1": "4ebb1ff6b58218ef42d44fc76c47325158517f33",
"oa_license": "CCBYNC",
"oa_url": "http://revistasaludmental.mx/index.php/salud_mental/article/download/SM.0185-3325.2017.001/3045",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "550dc5ceaa699d5031d1861db982ac2478259462",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52301275 | pes2o/s2orc | v3-fos-license | Exploring and Experimenting with Shaping Designs for Next-Generation Optical Communications
A class of circular 64-QAM that combines 'geometric' and 'probabilistic' shaping aspects is presented. It is compared to square 64-QAM in back-to-back, single-channel, and WDM transmission experiments. First, for the linear AWGN channel model, it permits operation close to the Shannon limits for a wide range of signal-to-noise ratios. Second, WDM simulations over several hundreds of kilometers show that the obtained signal-to-noise ratios are equivalent to - or slightly exceed - those of probabilistically shaped 64-QAM. Third, for real-life validation purposes, an experimental comparison with unshaped 64-QAM is performed, where 28% distance gains are recorded when using 19 channels at 54.2 GBd. This again is in line with - or slightly exceeds - the gains generally obtained with probabilistic shaping. Depending upon implementation requirements (core forward-error correcting scheme, for example), the investigated modulation schemes may be key alternatives for next-generation optical systems.
I. INTRODUCTION
A. Historical Notes
In communication theory, shaping is the art of adapting a mismatched input signaling to a channel model by modifying the per-channel-use distribution of its modulation points. Efficient information transmission schemes may use various shaping methods in order to increase spectral efficiency. Many of them have been investigated over the years, from nonlinear mapping over asymmetric channel models or many-to-one mapping [2] to optical experiments involving non-uniformly shaped QAM signaling. In particular, research efforts from the 70s towards the 90s derive conceptual methods to achieve shaping gains in communication systems. Following the advent of trellis coded modulation [3], a sequence of works [4]- [10] present operational methods and achieve a large fraction of the ultimate shaping gain associated with square lattices. Trellis shaping or shell mapping are implemented in applications such as the ITU V.34 modem. Non-uniform input signaling for the Gaussian channel is further investigated in [11], [12]. While several shaping schemes are based on the structural properties of lattices [13]- [17], the interest in randomized schemes rose in the late 90s after the rediscovery of probabilistic decoding [18], [19]. Multilevel schemes such as bit-interleaved coded modulation [20] offer flexible and low-complexity solutions [21], [22]. In the 2000s, several schemes have been investigated or proved to achieve the fundamental communication limits in different scenarios as discussed in [23]- [27].
In recent years, practice-oriented works related to optical transmissions have successfully implemented different shaping methods, from many-to-one and geometrically-shaped formats to non-uniform signaling. The latter, more often called probabilistic shaping in the optical community [28]-[32], has perhaps received the most attention. Various transmission demonstrations and record experiments using shaped modulation formats have indeed been reported, e.g., in [33]-[40], [42]. For illustration purposes, 65 Tb/s of operational achievable rate using state-of-the-art dual-band WDM technologies, partial nonlinear interference cancellation, and non-uniform signaling is reported in [37].
B. Implementation Constraints and Future Optical Systems
This work is motivated in part by the use of advanced QAM formats and in part by non-binary information processing. The investigated formats are neither restricted to non-binary architecture, nor specific to any information representation, nor even constrained by any coding/modulation method. Depending upon the application, different design criteria might be considered. In particular, despite the induced complexity, several advanced channel models envisioned for next-generation optical systems require the use of circular and possibly high-dimensional constellations. In one example, nonlinear particularities of the optical fiber channel should be addressed. Due to the third-order nonlinear Kerr effect, the fiber channel becomes nonlinear at optimum launch power for WDM transmission [44]. The perturbation-based model [45]- [47], [49], [50] shows that specific characteristics such as the 4-th or 6-th order moments of the random input may be taken into consideration. In another important example, non-unitary and multidimensional channel characteristics may be addressed. In particular, the work in [51]- [53] shows that rotation-invariant formats are instrumental whenever polarization-dependent loss happens. It indeed permits to attenuate or even eliminate the angle dependency when dimensional imbalance occurs, hence removing capacity loss due to angle fluctuation. In addition, spherical constellations may facilitate implementations of MMA-type (multi-modulus algorithm) of MIMO blind equalization. Various other system criteria may also enter the picture. A matching between channel physical model and transceiver architecture (in particular, receiver algorithms) is key to enable the ultimate transmission performance. A conventional receiver chain (comprising sampling, chromatic dispersion post-processing, MIMO equalization, phase and channel estimation, channel decoding and demodulation) that operates in a sequential manner is quite often sub-optimal. Joint processing may be required to preserve the sufficient statistics and improve the receiver performance. An implementation solution consists of using conventional non-binary information processing associated with matching signaling.
As various digital communication schemes requiring non-square-QAM-based constellations are candidates for next-generation optical applications, this paper aims at providing design guidelines for modulation formats.
C. Outline of the Paper
This paper presents results originally reported in [42]. It deals with an experimental study on the use of specific modulation formats with high spectral efficiency for long-haul communications. The WDM fiber channel has been historically approximated in the linear regime, or in the limit of short reach communications with short to mid-size constellations, by the standard additive white Gaussian noise (AWGN) channel model encountered in communications theory [1], [2]. This paper investigates efficient modulation formats defined on the complex plane that operate very close to the fundamental communication limits of the Gaussian model. They are further tested in more complete scenarios, including the simulation of long reach cases, and, finally, experiments that validate the modulation proposals. Note that, because this work deals with first guidelines for advanced signaling and multi-dimensional optical systems, it does not, at first, consider system-dependent optical models such as the enhanced Gaussian noise (EGN) model [47].
II. SHAPING AND OPTICAL COMMUNICATIONS
A. Setup and Notations
1) Channel Model: A crude approximation of the fiber channel under current coherent WDM technologies (involving PDM and mismatched architecture) is represented by the complex-valued AWGN channel model. This model is valid in ideal back-to-back scenarios and short-range transmissions. For characterizing future optical systems, the performance in the linear regime remains central at the first order. In most real-life scenarios however, long range communications create different types of (intra, extra, noise) nonlinear interference for which perturbations on the solution of the Manakov equation [45], [46], [49], [50] may provide some insight into the design and analysis of efficient constellations. See also [44], [47], [48]. In this paper, shaping tradeoffs are first addressed in the idealized linear regime. They are later tested in the nonlinear regime by simulations and experiments. Formally, the receiver is assumed to see, independently at each channel use, an overall additive white noise equivalent to a complex-valued random noise Z = Z_1 + sqrt(-1) Z_2, where the independent Z_1, Z_2 obey a real-valued zero-mean half-unit-variance Gaussian distribution. We model the random channel output by Y = sqrt(SNR) X + Z, whereby X ∈ X is the random input with probability p_X and SNR the signal-to-noise ratio. In the case of a continuous, power-constrained input alphabet, the capacity of the model is achieved by the Gaussian distribution and equals log(1+SNR).
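As a minimal, hedged illustration of this model (our own sketch, not code from the paper), the snippet below generates the complex AWGN channel Y = sqrt(SNR) X + Z with half-unit-variance noise components and prints the corresponding Shannon limit log2(1 + SNR) reached by a Gaussian input.

```python
import numpy as np

def awgn_channel(x, snr_linear, rng):
    """Y = sqrt(SNR) * X + Z, with Z = Z1 + j*Z2 and Var(Z1) = Var(Z2) = 1/2."""
    z = (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)) * np.sqrt(0.5)
    return np.sqrt(snr_linear) * x + z

rng = np.random.default_rng(0)
snr_db = 10.0
snr = 10 ** (snr_db / 10)
# Unit-power complex Gaussian input (capacity-achieving for this model).
x = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) * np.sqrt(0.5)
y = awgn_channel(x, snr, rng)
print("average input power:", np.mean(np.abs(x) ** 2))        # ~1
print("Shannon limit:", np.log2(1 + snr), "bit/channel use")  # ~3.46 bit at 10 dB
```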
2) Coding and Modulation: This paper investigates simple but efficient time-invariant modulation formats. A format is defined by the pair (X, p_X) composed of the input alphabet (constellation of points in the complex plane) X and the input distribution p_X. The input alphabet is a codebook with indexes formed by letters (denoted by B or S) of the original information alphabet. Shaping in this paper is seen as the art of optimizing the transmission performance of a format with bounded entropy. Recall that, if the resulting constellations asymptotically sample a Gaussian density that achieves the capacity log(1+SNR), then the spectral efficiency gets optimized. Non-uniform signaling is obtained in [12] by letting p_X follow the Maxwell-Boltzmann envelope (or any other distribution). It is called probabilistic shaping and sometimes probabilistic constellation shaping in the optical literature, which leads to distinguishing between geometric and probabilistic shaping aspects of a format (X, p_X). Optimal system performance is measured in terms of achievable rates. The mutual information between X and Y is denoted by I(X; Y). This quantity operationally corresponds to coded modulation: it is termed the CM information rate. For practical (often mismatched) systems, we may operationally refer to the achievable rate associated with conventional estimation of the representation letter (bit or symbol). This quantity, corresponding to bit (or symbol) MAP estimation, is termed the B-CM (or S-CM, respectively) information rate. In many instances, it coincides with the classical bit-interleaved (or symbol-interleaved) coded-modulation BICM (or SICM, respectively) framework of [20], [22], [58]-[60] and is a particular case of the generalized mutual information (GMI) [61], [62]. Unless stated otherwise, the information source is represented by the random binary variable B. Random binary vectors can be equivalently represented as random symbols S_i = S_i(B_1, ..., B_m). Random symbol vectors can be equivalently represented as random channel inputs X = X(S_1, ..., S_m). The S-CM information rate is then given as H(S_1, ..., S_m) − Σ_{i=1}^m H(S_i|Y) (and similarly for the B-CM rate). In practice, simple Riemann-based integration methods are used to compute the different information rates. More details on achievable rates are given in Appendix A.
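To make the CM information rate concrete, here is a small Monte-Carlo sketch (our own illustration; the paper itself relies on Riemann-based integration) that estimates I(X; Y) = E[log2 p(Y|X)/p(Y)] for an arbitrary discrete format (X, p_X) over the complex AWGN model above.

```python
import numpy as np

def cm_rate_mc(points, probs, snr_linear, n=200_000, seed=1):
    """Monte-Carlo estimate of I(X;Y) in bits per channel use for
    Y = sqrt(SNR)*X + Z with complex unit-variance Gaussian noise Z."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=complex)
    probs = np.asarray(probs, dtype=float)
    idx = rng.choice(len(points), size=n, p=probs)
    x = points[idx]
    z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(0.5)
    y = np.sqrt(snr_linear) * x + z
    # Conditional densities p(y | x_k) for every constellation point x_k.
    d2 = np.abs(y[:, None] - np.sqrt(snr_linear) * points[None, :]) ** 2
    cond = np.exp(-d2) / np.pi                      # p(y | x_k)
    p_y = cond @ probs                              # p(y)
    p_y_given_x = cond[np.arange(n), idx]           # p(y | transmitted x)
    return np.mean(np.log2(p_y_given_x / p_y))

# Example: uniform QPSK at 10 dB approaches its 2 bit/symbol ceiling.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
print(cm_rate_mc(qpsk, np.full(4, 0.25), 10 ** (10 / 10)))
```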
B. Square Quadrature Amplitude Modulation
1) Definition: Popular modulation formats are based on Pulse Amplitude Modulation (PAM) per quadrature, for which P = 2^m real-valued points x_1, ..., x_P are equally spaced and centered around 0. The alphabet set is denoted by P-PAM. In current practice, the P points are associated with equal probabilities p_X = 1/2^m. The choice P = 2^m enables simple bit labeling. For notational convenience, we write X = 2^m-PAM, where a constant normalizes the total power to one. The Cartesian product of two P-PAM alphabets is called (square) Quadrature Amplitude Modulation and denoted by P^2-QAM.
2) Properties: QAM formats are the constellations of choice in various communication systems. PAM enables a natural Gray labeling of the information bits, which increases performance at mid-to-large signal-to-noise ratios (SNR). Because I/Q QAM components remain independent in the presence of standard Gaussian noise, the statistical separation leads to individualized demodulation schemes. Practical individual demodulation in this regime is enabled by the max-log approximation. Despite such important practical aspects, square QAM formats suffer from a noticeable drawback when associated with uniformly distributed codebooks. Geometric arguments on square lattices show that the overall transmission rate is generally bounded away from the channel capacity [5]. In the case of additive Gaussian noise, shaping permits to reduce this gap and asymptotically achieve up to πe/6 ≈ 1.53 dB of signal-to-noise ratio (SNR) gain. This is shown in Fig. 1 for the example of 64-QAM. It can be observed that, when associated with an example of Gray mapping, the B-CM capacity deviates from the CM capacity at low SNR. As summarized in Section I-A, various shaping methods involving multi-dimensional geometric considerations have been devised in the past, e.g., shell mapping and trellis coded modulation for wire-line communications. In this paper, we focus on time-invariant non-uniform signaling [6], [12] as recently investigated in optical research (in particular in combination with Probabilistic Amplitude Shaping (PAS) [29], [36], [37]). This is exemplified for QAM in Fig. 1, where non-uniform signaling is obtained using the Maxwell-Boltzmann distribution [12]. In the target SNR region, the capacity of the shaped system approaches the ultimate limit given by the continuous Gaussian input distribution. Beyond shaping loss, one may list additional drawbacks of square QAM that are specific to optical systems. Those include suboptimal equalization in case of non-unitary impairments [51], [52] or other mismatches as discussed in Section I-B. Investigations on QAM-based variations are therefore critical to envision alternative engineering designs.
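The following sketch (our own reading of the definitions above, not the authors' code) builds a unit-power 2^m-PAM alphabet and the corresponding square (2^m)^2-QAM constellation with the uniform probabilities used in current practice.

```python
import numpy as np

def pam(m):
    """2^m equally spaced, zero-centred PAM levels, normalised to unit power."""
    levels = np.arange(-(2 ** m - 1), 2 ** m, 2).astype(float)   # -(P-1), ..., P-1
    return levels / np.sqrt(np.mean(levels ** 2))

def square_qam(m):
    """(2^m)^2-QAM as the Cartesian product of two 2^m-PAM alphabets (unit power)."""
    p = pam(m)
    pts = (p[:, None] + 1j * p[None, :]).ravel()
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

qam64 = square_qam(3)
print(len(qam64), np.mean(np.abs(qam64) ** 2))   # 64 points, unit average power
```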
C. Circular Quadrature Amplitude Modulation
1) Definition: Within this paper, we define a q × q'-CQAM constellation to be a circular QAM format that is rotation-invariant in the I/Q plane. More precisely, by rotation of angle 2π/q, the P = q × q' constellation points are mapped onto constellation points with the same associated probabilities. Examples include APSK formats as in [39], or other constructions as in [64]. If q = q', then a q^2-circular quadrature amplitude modulation (q^2-CQAM) is a two-dimensional constellation that includes q shells (circles containing points of the same amplitude) with q points per shell [32]. We write the constellation as the union of the q rotations (by multiples of 2π/q) of a fundamental set B, where B is a fundamental (connected or not) discrete set of q points with distinct amplitudes.
2) Properties: One interesting aspect of CQAM-like formats is that they are naturally adapted to q-ary PAS coding. There are obviously many possible q^2-CQAM constructions. Depending upon the design criteria, e.g., the figure of merit [5] (minimum distance) as in [32], different properties and performance are obtained. In the sequel, we investigate different criteria options for CQAM constellations and perform specific optimization. We eventually focus on the CQAM construction of [32]. This particular CQAM construction, which originates from an exercise on the generalization of the PAS method, turns out to be particularly efficient with respect to CM capacity.
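The exact greedy construction of [32] is not reproduced here; the sketch below only illustrates the structural definition above: q shells of q points each, where every shell is a rotated and scaled copy of a ring, so that the constellation is invariant under rotations by 2π/q. The radii and per-shell phase offsets are free design parameters (placeholders in this example), which is precisely what the construction criteria discussed next optimize.

```python
import numpy as np

def cqam(q, radii, offsets):
    """q^2 points on q shells (q points per shell), invariant under 2*pi/q rotations.

    radii   : the q shell radii (design parameters)
    offsets : the q per-shell phase offsets in radians (design parameters)
    """
    angles = 2 * np.pi * np.arange(q) / q
    pts = np.concatenate([r * np.exp(1j * (angles + ph)) for r, ph in zip(radii, offsets)])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # normalise average power to one

# Hypothetical 64-CQAM-like example (q = 8); the radii/offsets are placeholders,
# NOT the optimised values of [32].
q = 8
pts = cqam(q, radii=np.linspace(0.35, 1.6, q), offsets=np.pi / q * np.arange(q))
rotated = pts * np.exp(1j * 2 * np.pi / q)
# Every rotated point coincides (numerically) with an original point: rotation invariance.
assert all(np.min(np.abs(r - pts)) < 1e-9 for r in rotated)
```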
D. Non-uniform QAM Signaling and the PAS Method
1) Background: The reasons PAS [29]-[31] has been experimented with in optical communications are twofold. First, non-uniform signaling is obtained after shaping the source up front of a (possibly legacy) coding system: this offers backward compatibility. Second, the distribution matcher (DM) provides an additional degree of freedom for rate adaptation: this may be a useful feature. The general PAS framework is found in Appendix B.
2) Case with Square QAM: In its original binary instance, PAS is based on the antipodal symmetry of 2^m-PAM. Indeed, up to power normalization, 2^m-PAM ∝ ∪_{s=±1} sB, where B = {1, 3, ..., 2^m − 1}. Referring to Appendix B, the isomorphic representation permits to distinguish between signal amplitudes in B and their sign in {−1, +1}. PAS in [29] is based on the mapping {1, 0} ≡ {−1, +1} that encodes the sign while, independently, binary vectors label points in B. PAS is a layered coding scheme. The central channel coding layer uses a linear code with systematic encoding and rate R_C, where parity bits encode amplitude signs. The systematic information is, for example, Maxwell-Boltzmann-shaped in a layer up front via, for example, prefix-free source coding or similar methods: this is further used to encode the amplitudes at the end layer. For 64-QAM-based systems, the code rate constraint is R_C ≥ 2/3.
3) Case with Circular QAM: A dense linear combination of q-ary symbols tends to asymptotically admit a uniform distribution [19], [32]. (The proof [19], [32] over F_q involves the q roots of unity as a generalization of the sign symmetry over F_2; this generalization motivates the construction of CQAM over F_q with the use of circular symmetry [32].) If PAS (with, e.g., standard LDPC, Turbo, or polar codes) tends to map uniformly-distributed parity bits into the signs of PAM points, then parity bits do not perturb amplitude shaping. The generalization of this property to alternative (non-binary) information representations is enabled by specific QAM formats, among others q × q-CQAM as in [32], when the underlying alphabet is assumed to be a finite field F_q with a prime q > 2. A generalized PAS framework is presented in Appendix B, where it is observed that the new schemes relax the code rate constraint to R_C ≥ 1/2 for any q. Referring to Appendix B, we use the isomorphic representation where B represents the fundamental region. In [32], the main goal is to explore the use of q-ary codes by generalizing PAS and the binary sign-flipping technique to the q-ary case. In this paper, the goal is slightly different. For practical reasons, we are restricted to computation fields of characteristic 2 and, in particular, q = 2^3. As this field is an extension of the binary field, the nature of the code constraint is less stringent and, even for PAS, suboptimal schemes with binary codes could be envisioned. The code rate constraint of the generalized framework however remains valid and of interest. For 64-CQAM-based systems, it is R_C ≥ 1/2.
4) Perspective: Notice that the non-uniform signaling formats presented in this paper may serve as general baseline performance guidelines. They are not PAS-specific guidelines. The obtained results and insights can be used in a variety of coding system models and coded modulation schemes. As discussed in the previous section, this paper is an experimental validation of efficient modulation formats taking into account first the advanced core linear model constraints, in particular multi-dimensional non-unitary polarization-multiplexing and multi-ary DSP implementations. Nonlinear interference or other mismatches that depend on the transmission distance are treated in a second phase. Because optimizing the Euclidean distance is sufficient in the simplified high-SNR linear scenarios and because we target low-complexity practical solutions, we use Maxwell-Boltzmann distributed amplitudes as the implied distortion is negligible.
Notice that alternative rate-distortion methods (e.g., Blahut-Arimoto) to optimize the input pair (X, p_X) have been recently investigated. In [54] and [55], the mutual information is optimized within the framework of the EGN model [47], [56], [57] (the classical one-dimensional Gaussian model with nonlinear noise discussed in the optical literature) or the split-step Fourier transform.
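As a hedged illustration of the Maxwell-Boltzmann shaping step (our own sketch; the paper's own optimisation targets the CM rate and may include a stretching step), the snippet below assigns p(x) ∝ exp(−ν|x|²) to a fixed constellation and selects ν by bisection so that the input entropy H(X) matches a desired value. The target entropy of 5 bits/symbol is chosen purely for illustration.

```python
import numpy as np

def maxwell_boltzmann(points, nu):
    """p(x) proportional to exp(-nu * |x|^2) over a fixed constellation."""
    w = np.exp(-nu * np.abs(points) ** 2)
    return w / w.sum()

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def nu_for_entropy(points, target_bits, lo=0.0, hi=20.0, iters=60):
    """Bisection on nu: H(X) decreases monotonically as nu grows."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if entropy_bits(maxwell_boltzmann(points, mid)) > target_bits else (lo, mid)
    return 0.5 * (lo + hi)

# 64-QAM alphabet (unit average power), rebuilt inline so the sketch is self-contained.
levels = np.arange(-7.0, 8.0, 2.0)
qam64 = (levels[:, None] + 1j * levels[None, :]).ravel()
qam64 /= np.sqrt(np.mean(np.abs(qam64) ** 2))

nu = nu_for_entropy(qam64, target_bits=5.0)        # hypothetical 5 bit/symbol target
p = maxwell_boltzmann(qam64, nu)
print(nu, entropy_bits(p))                          # entropy ~5.0 bits, down from 6.0
```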
A. Geometric Shaping via Circular QAM Formats
The general construction of q 2 -CQAM constellations consists in populating q shells with q points that are uniformly distributed on the q-th circle. A construction criterion permits to control spacing and phase-offset between shells. The selection of a particular criterion may be motivated by geometric considerations. It may also be combined with the further optimization of the transceiver design at a given target SNR under non-uniform signaling. Hence, for standard receiver architectures, the construction and the choice of Maxwell-Boltzmann parameters may aim at maximizing the B-CM capacity under suboptimal bit-based estimation. For evolved architectures or advanced fiber channel models, they may be chosen to maximize the CM capacity. Recall that the latter case is considered in [32] and experimented with in [42]; it is indicated as the 'true' CQAM reference in Fig. 2 where several CQAM-like examples are depicted. Let us describe them.
Example 2: '2-dist' Construction. Fig. 2c represents a CQAM constellation that has been constructed based on a 'two-distance' criterion. The greedy construction is performed with respect to d_min and the second minimum Euclidean distance given the first two shells (the second shell at d_min being a scaled version of the first). Compared with a 'star' design, this naturally increases the CM capacity while attempting to maintain good properties for Gray labels.
Example 3: 'Hybrid' Construction. Fig. 2b represents a balance between the previous two examples. Angular regions have been preserved in addition to the two-distance criterion. Depending upon receiver design and available information, angular regions and mapping can be adapted to optimize bit-based estimation performance, see also [63].
From these examples, we see that, if conventional (mismatched) BICM estimation were to be assumed, then a tradeoff may have to be made as the respective behaviors of CM and S-CM capacities are reversed in the operational SNR region. Indeed, while similar shaping and maximal input entropy have been represented, it appears that the performance behavior is first conditioned by the initial geometric properties of the constellation, then enhanced by a particular set of shaping parameters. For the CM capacity, it is known that minimum Euclidean distance maximization leads to remarkable CM capacity at high SNR. When based on that criterion, CQAM appears to be very efficient. Notice however that, if conventional (mismatched) BICM estimation is to be used for technical reasons, this cannot be fully exploited and a tradeoff may have to be made. In the remainder of this paper, we focus on CQAM constructions that, when shaped, maximizes the CM capacity.
B. Optimization for CM Capacity and Linear AWGN Model
Let us present in more detail the type of CQAM-like constructions introduced in [32]. It is solely based on the minimum Euclidean distance and is referred to as q^2-CQAM in the sequel. The chosen criterion maximizes the figure of merit, i.e., the ratio between |X| d^2_min(X) and Σ_{x∈X} |x|^2, where d^2_min(X) = min_{(x,x')∈X^2, x≠x'} |x − x'|^2 denotes the minimum squared Euclidean distance. In practice, the minimum distance of the power-normalized constellation X is first maximized via a greedy procedure. Then, Maxwell-Boltzmann shaping is performed such that, for a given SNR, the gap between the CM information rate denoted by I(X; Y) and the ultimate limit denoted by log(1+SNR) is minimized. An optional stretching step may be performed, see [32]. This optimization procedure is sufficient to devise optimized constellations very close to the Shannon bounds of the core model. More importantly, this simple optimization is guided by operational constraints, i.e., the construction of circular constellations. The q^2-CQAM format that supports the experimental work conveyed in this paper is represented in Fig. 3b. In terms of CM capacity, the stretched version achieves performance that is less than 0.1 dB away from the Gaussian capacity log(1 + SNR). This is illustrated by Fig. 3a, where the optimization has been done for target SNRs around 10 dB and slightly above. It can be seen that shaped 64-CQAM and shaped 64-QAM have similar performance at the operating point. Notice that these observations concern the CM capacity. The receiver architecture may require specific demapping nodes and estimation methods to take full benefit of the circularly-symmetric format.
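A small helper (our own sketch, not the greedy shell-placement procedure of [32] itself) showing how the construction criterion is evaluated: the figure of merit |X| d^2_min(X) / Σ_{x∈X} |x|^2 of any candidate constellation, here applied to unit-power square 64-QAM as a baseline.

```python
import numpy as np

def min_squared_distance(points):
    pts = np.asarray(points, dtype=complex)
    d2 = np.abs(pts[:, None] - pts[None, :]) ** 2
    d2[np.arange(len(pts)), np.arange(len(pts))] = np.inf    # exclude x = x'
    return d2.min()

def figure_of_merit(points):
    """|X| * d_min^2(X) / sum_x |x|^2 ; larger is better at high SNR."""
    pts = np.asarray(points, dtype=complex)
    return len(pts) * min_squared_distance(pts) / np.sum(np.abs(pts) ** 2)

# Baseline candidate: unit-power square 64-QAM.
levels = np.arange(-7.0, 8.0, 2.0)
qam64 = (levels[:, None] + 1j * levels[None, :]).ravel()
qam64 /= np.sqrt(np.mean(np.abs(qam64) ** 2))
print(figure_of_merit(qam64))    # ~0.095 for the square baseline
```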
C. Optimization for Nonlinear Long-Haul Communications
The values of spectral efficiency of shaped constellations (whether square QAM or circular QAM) are very close to the Shannon capacity for the AWGN channel. For long-distance transmissions however, their performance is tested and reevaluated in the presence of the fiber nonlinear impairments. As previously mentioned, this is justified as the 4-th and 6-th moments of the constellations appear in the expressions of the total effective SNR. By total effective SNR, we mean the SNR of the sampled received signal assuming a symbol-by-symbol coherent detector, without nonlinear equalization. In this case, the nonlinear distortions are effectively considered as additive Gaussian noise, but with a variance that scales cubically with the channel average launched power and depends on constellation moments [50]. We compared the performance of 64-CQAM and shaped 64-QAM formats for a nonlinear fiber channel using the theory presented in [50] (see Eq. (123) therein). We assumed a system with 19 channels, modulated at 54.2 GBd, and spaced at 62.5 GHz. The link consisted of 80 km spans of SMF fiber. We theoretically computed the SNR of the central (i.e., the 10-th) channel at the receiver side at the nonlinear threshold (i.e., the optimum launched power that maximizes the SNR) as a function of the number of spans. This is given for each of the two modulation schemes, and for a given span count. Referring to [12], [29], the parameter of the Maxwell-Boltzmann PMFs of each scheme varied between 0 and 4, and the probability distribution that maximized the optimum SNR was found for each constellation by exhaustive search. Fig. 3c illustrates the optimized SNR vs distance for both formats. We observe that the two modulation schemes have very similar performance in the nonlinear regime. This observation permits us to assert that the shaping scheme proposed in this work is (at least equally) as robust as the existing solutions to fiber nonlinear impairments.
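Because the effective-SNR expressions of the perturbation model involve the 4th and 6th order moments of the input, a quick way to compare candidate formats is to evaluate those normalized moments under a given shaping distribution. The sketch below does this for an arbitrary (points, probabilities) pair; it is a diagnostic helper, not the model of [50], and the shaping parameter used is illustrative.

```python
import numpy as np

def normalized_moments(points, probs):
    """Return (mu4, mu6) = (E|X|^4, E|X|^6) after normalising E|X|^2 = 1."""
    pts = np.asarray(points, dtype=complex)
    p = np.asarray(probs, dtype=float)
    pts = pts / np.sqrt(np.sum(p * np.abs(pts) ** 2))
    mu4 = np.sum(p * np.abs(pts) ** 4)
    mu6 = np.sum(p * np.abs(pts) ** 6)
    return mu4, mu6

# Example: uniform vs Maxwell-Boltzmann-shaped 64-QAM (the nu value is illustrative).
levels = np.arange(-7.0, 8.0, 2.0)
qam64 = (levels[:, None] + 1j * levels[None, :]).ravel()
uniform = np.full(64, 1 / 64)
mb = np.exp(-0.05 * np.abs(qam64) ** 2)
mb /= mb.sum()
print(normalized_moments(qam64, uniform))   # mu4 ~ 1.381 for uniform square 64-QAM
print(normalized_moments(qam64, mb))        # shaping pushes mu4 towards the Gaussian value 2
```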
IV. EXPERIMENTAL SETUP
The experimental setup is shown in Fig. 4a. The transmitter is based on two four-channel digital-to-analog converters (DACs) running at 88 GS/s generating 54.2 Gbaud polarization multiplexed 64-QAM or 64-CQAM, using raised cosine pulses with a roll-off factor of 0.08. The length of the random transmitted sequences is 184320 symbols. In total, we modulate 19 WDM channels with a channel separation of 62.5 GHz using external cavity lasers (ECLs) with linewidths of around 100 kHz. One DAC is used to generate the channel under test and its two second nearest neighboring channels. The second DAC generates the remaining 16 channels. Independent symbol patterns are used for the two DACs. After the dual-polarization I/Q-modulators, we use erbium-doped fiber amplifiers (EDFAs) to boost the signal. In the loading channel arm, we use a wavelength-selective switch to remove the in-band amplified spontaneous emission noise for the channel under test before combining the signals from the two transmitters. The signals are either noise-loaded and detected in a back-to-back scenario, or transmitted over the recirculating loop depicted in Fig. 4b. The recirculating loop consists of three spans of conventional single mode fiber (SSMF), EDFAs and a polarization scrambler (PS). A programmable gain equalizer is used to equalize the power of the WDM channels and to filter out the ASE noise that is outside of the total channel count. The signals are detected using a conventional polarization-diverse coherent receiver, shown in Fig. 5, and digitized using a 33 GHz 80 GS/s real-time sampling oscilloscope. For a fair comparison between 64-CQAM and 64-QAM without penalty due to potential suboptimal equalization, we use a genie-aided digital signal processing (DSP) solution. Notice that, in practice, for future system implementations, pilot-aided DSP solutions have proved to be efficient [41]. Phase recovery follows from simple inverse mapping and standard DSP techniques are applied. The DSP starts with resampling to 2 samples/symbol followed by electronic dispersion compensation (EDC). Timing estimation, as well as polarization demultiplexing and adaptive equalization using a multi-modulus algorithm (MMA), is applied where knowledge of the transmitted data is used to calculate the error function. The signals are then sent to a frequency offset estimation and phase estimation stage. Finally, in this experimental demonstration, a symbol-spaced real-valued decision-directed least mean square (DD-LMS) equalizer is used independently on the signals in the x- and y-polarizations to compensate for any remaining imperfections such as transmitter-side timing skew. The parameters of the genie-aided DSP are adapted such that the performance is close to that of blind DSP for 64-QAM. To assure a fair comparison, the same parameters are then used for 64-CQAM.
V. RESULTS
The back-to-back results for 54.2 Gbaud are shown in Fig. 6 together with theoretical results. At a target mutual information (CM) of 4.5 bits/symb., we measure a 1.25 dB gain for 64-CQAM over 64-QAM. The target CM has been chosen for illustration purposes but still lies in the same region as the target CM of 4 bits/symb. of the previous sections. We note that 64-CQAM has a 0.7 dB lower implementation penalty compared to 64-QAM. This is most likely due to a more efficient use of the DAC resolution when shaping is applied, see, e.g., [43]. Notice that the experimental CM values have been determined knowing the transmitted sequence of the channel under test. This makes it possible to build the signal statistic (estimated input distribution) and the channel model (estimated conditional distribution) associated with the experimental results up to some negligible (quantization) errors, while the nonlinear Kerr effects are treated as white Gaussian noise. Fig. 8a shows the information rate I(X; Y) (CM) as a function of the launch power at 1440 km for the two formats. We observe no apparent difference in the optimal launch power between the two formats in either single-channel or WDM transmission. The optimal launch power per channel was around 2 dBm for single channel and 0 dBm for WDM. The transmission results are depicted in Fig. 8b for the optimal launch power. Assuming CM at 4.5 bits/symb., 54.2 Gbaud 64-CQAM can be transmitted up to 1750 km in single-channel transmission at the optimal launch power, and 1100 km in 19-channel WDM transmission. Considering 19 WDM channels, if the formats are compared at CM = 4 bits/symb., the transmission distance can be increased by 480 km by using 64-CQAM, which corresponds to an increase of 28%.
In the experiments, 64-CQAM has a slightly lower implementation penalty compared to 64-QAM. For the shaped format, clipping is performed at the DAC level. Both formats suffer equally from hardware restrictions due in particular to the non-optimized evaluation board. In order to verify the gains we see in experiments, without being influenced by the implementation penalties, we computed the mutual information of both formats, using formulas for the total variance of nonlinear distortions, which include the impact of modulation format in the nonlinear regime, see Eq. (123) in [50]. The transmitter and receiver are assumed ideal without implementation penalty, and the receiver DSP consists only of the matched filter. The modeling transmission results are depicted in Fig. 7. At each distance, first the maximum SNR at optimum launch power is computed, then the corresponding optimum mutual information is computed. Fig. 7 illustrates the optimum mutual information vs distance for 64-CQAM and 64-QAM. 64-CQAM has a clear advantage over 64-QAM beyond 1500 km. At CM = 4 bits/symb., the transmission reach can be increased by 14% by switching from 64-QAM to 64-CQAM.
VI. CONCLUSION
Interest in circular QAM emanates from a better matching to the polarization-multiplexed WDM fiber model, from the adaptation to non-binary processing, or from other evolved design constraints such as, potentially, flexible rate adaptation for PAS.
Long-haul transmission simulations for shaped CQAM have indeed been compared to simulations for shaped 64-QAM in both the linear and nonlinear regimes. Importantly, advanced simulations show that the new schemes have performance similar to the state-of-the-art schemes based on shaped 64-QAM. This work demonstrates that advanced shaping schemes such as combined geometric-probabilistic CQAM could be used and may have very interesting performance in practice. Assuming that significant performance gains result from advanced channel modeling and particular constellation geometry, and assuming that coding and modulation can be efficiently translated in high-speed transceivers, this may turn out to be key for the next generation of optical systems.
ACKNOWLEDGMENTS
The authors would like to thank L. Schmalen, A. Dumenil, and R.J. Essiambre for valuable comments and suggestions on an early version of this work. The authors are also grateful to the anonymous reviewers for their insightful and valuable comments.
A. Achievable Information Rates
For a memoryless channel model with random input letters X taking on discrete values x ∈ X with probability p_X(x) at each channel use, the channel capacity is given by the information rate I(X; Y). For the sake of simplicity, the term CM (Coded Modulation) capacity is employed in this paper to refer to I(X; Y) when the input alphabet X is fixed. For the complex-valued AWGN model, it is a function of the SNR (ratio between the average constellation power and the additive noise power). Modern error-correcting codes closely approach the achievable bounds in practical setups and for large blocklengths. See [61], [62] for capturing potential additional transceiver mismatches as well as [20], [22], [58]-[60] for operational characterizations.
(Fig. 9 caption: a standard source code, called Distribution Matcher, with rate R_DM ∈ (0, 1) is used after an initial split of the information sequence at rate r ∈ [0, 1) to let the next layer be constraint-free; a peculiarity of PAS is to perform the shaping task up front of conventional error-correction encoding; error-correction encoding with rate R_C ∈ (0, 1) is then performed at the middle layer and generates a redundancy sequence, e.g., from linear parities; the right-most encoding layer is the modulation step, where the general principle of PAS is to select a region according to the typically uniform distribution of the Ss and to combine it with a fundamental point labeled at rate R_AM ≥ 1 with the previously shaped information sequence; notice U_2 = V_2.)
For the modulation schemes of our running examples involving square or circular QAM constellations, each letter modulates m symbols of the original information alphabet represented by S_1, S_2, ..., S_m. In other words, there is a one-to-one mapping µ such that X = µ(S_1, ..., S_m). Notice that m = 3 for 8-PAM using binary labels or m = 2 for 8 × 8-QAM constellations using q = 2^3-ary labels. Using the chain rule and because conditioning reduces entropy, we see that I(X; Y) = H(S_1, ..., S_m) − H(S_1, ..., S_m | Y) ≥ H(S_1, ..., S_m) − Σ_{i=1}^m H(S_i | Y), i.e., the CM capacity is never less than the rate (H(S_1, ..., S_m) − Σ_{i=1}^m H(S_i|Y))^+ that indicates the system capacity when a maximum a posteriori (MAP) estimator operating at the symbol level is implemented (S-CM capacity). Notice that this expression encompasses the general case of correlated S_i's. By iterating the decomposition with S_i = S_i(B_{i,1}, ...) for example, we also see that the capacity associated with symbol-MAP decoding (S-CM capacity) is not less than the capacity associated with bit-MAP decoding (B-CM capacity) provided that a symbol is labeled by a group of bits. Let us make a couple of observations. First, in the specific example m = 2 of two symbols, the difference between the two is given by I(S_1; S_2|Y), which, in the CQAM case or in [63], differentiates between amplitude and phase. Second, for independent symbols, the capacity associated with symbol-MAP decoding (sometimes called bit-metric decoding [59]) can be written as Σ_{i=1}^m I(S_i; Y). Hence, in this case, we may as well use the framework of ICM (Interleaved Coded Modulation) to define the notion of achievable rates for conventional processing. The conceptual view of an infinite interleaver [20], [22] before any alphabet mapping and in conjunction with uniform signaling indeed permits the characterization of different system capacities.
More generally, it may be convenient to use the GMI framework in [61], [62] where the achievable rates permit to characterize conventional processing mismatches and have therefore an operational meaning. For the sake of clarity, we explicitly characterize the rate as CM or S-CM depending upon the choice of system architecture among those considered in this paper.
B. Probabilistic Amplitude Shaping
PAS originally stands for Probabilistic Amplitude Shaping [31], a method devised in [6], [29] to implement non-uniform signaling [12]. Although, in the presented generalized version, PAS no longer refers to modulating "amplitudes" as such, the original name is conserved for simplicity.
Assume that we want to communicate messages through N independent uses of a communication channel. More precisely, let us denote by (X, p_X) the channel input, where a random variable X takes on values x ∈ X according to p_X(x). For a source of independent symbols V ∈ V distributed uniformly at random (|V| = q), the number of messages scales as M_M := |V|^M = q^M, where M is the information length. PAS [29] is a layered coding scheme that maps the information symbols into the Xs. The overall coding rate is M/N, where N is the encoder output length. The basic principle (amplitude sign flipping triggered by a bit when q = 2) of the PAS method as devised in [29] relies on binary channel symmetry. It is then tailored to binary-input, real-valued-output symmetric channels such as the 2^m-PAM AWGN channel (or products of it such as the 2^{2m}-QAM AWGN channel). It can be extended to q-ary channel symmetry, and the generalized scheme is summarized in Fig. 9.
1) Sufficient Constellation: PAS relies on the subdivision of the input alphabet X into J constellation regions {C_j}, j ∈ {1, ..., J}, such that X = ∪_{j∈{1,...,J}} C_j. Various constellations and subdivisions are PAS-compatible. For simplicity, assume that all constellation regions have the same cardinality, i.e., |C_j| = |C_1|, with Pr{C_j} = 1/J for any j. Assume further that q = |V| divides J and that q = |U_1| divides the region cardinality |S_1|. For each region, the points are identically distributed. PAS is eventually performed on a reduced fundamental region B, chosen for example to be B = C_1. PAS coding consists in mapping labels obtained from the sequence of (uniformly distributed) symbols of type S into regions. Independently, PAS coding maps labels obtained from the (shaped) information sequence of type U_1 into fundamental points. In other words, the channel input is obtained as a function of the region label and of the fundamental-point label.
2) State-of-the-Art and Legacy Systems: Let us provide here some background on coding in actual optical applications. In practice, forward error-correction (FEC) is typically performed via a (systematic) linear code of rate R_C. PAS is said to be compatible with legacy systems because it can be built around a standard (or pre-existing) FEC coding engine (e.g., an LDPC-based system). PAS first focuses on shaping the distribution of points inside the fundamental region using the distribution matcher (DM). To exemplify this, let us use the binary case q = 2. The distribution of the 2^m-PAM amplitudes is shaped to let the distribution of the full constellation behave like the capacity-achieving Gaussian [12]. If the standard PAM modulation rate is R_M = m, then PAS modulates the signal amplitudes at the output of the distribution matcher at rate R_AM = m − 1. PAS uses (up to very few operational changes) a conventional coding and modulation chain. After the DM, the information sequence is parsed to modulate the point amplitudes, while the (uniformly distributed) parity bits (as well as the unshaped information fraction) encode the signs of the PAM amplitudes. The binary case is used for example in [37].
3) General Framework: As depicted in Fig. 9, PAS is seen as a layered coding system. The concatenation chain is divided into three main layers and the encoding operations are performed in sequential order. Practical decoding is envisioned to occur in the reversed order. A fraction r̄ = 1 − r (with r = 0 in some cases) of the information stream is first encoded into a sequence (seen as a sequence of symbol packets or labels) with a given (required) distribution (typically Maxwell-Boltzmann as in [12]). Hence, independent identically uniformly distributed symbols are encoded into a symbol sequence which (from parsing) labels the modulated regions at rate R_AM. The rate R_AM is equal to the number of symbols in an alphabet of size q needed to label a region (for example, R_AM = m − 1 in the binary case of [29] where a region is an amplitude, or R_AM = 1 in the nonbinary case of [32]). Second, a sequence of redundant symbols, generally obtained from linear combinations of information symbols, is then generated by a linear channel encoder. Dense linear combinations of symbols cause the distribution of the resulting sum symbols to become asymptotically uniform. Third, the final encoding layer modulates symbols in X by selecting a pair composed of a point (for example representing an amplitude) in the fundamental region according to the label sequence (for example representing a quadrant or an angular region).
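The layered flow can be made concrete with a small toy example. The sketch below is not the PAS implementation of [29] or [32]: the distribution matcher is replaced by a trivial lookup table, the channel code by a weak rate-1/2 parity rule, and the constellation by 4-PAM (m = 2, q = 2, r = 0), purely to show the three layers (shaping, FEC, modulation) in sequence. All names and numbers are illustrative.

```python
import itertools

# Toy "distribution matcher": maps 3 uniform information bits to 4 amplitude
# bits (rate R_DM = 3/4), mildly biased towards bit 0 (the low amplitude).
# A real DM (e.g., a constant-composition DM) would shape long blocks towards
# a Maxwell-Boltzmann distribution.
DM_TABLE = {bits: bits + (0,) for bits in itertools.product((0, 1), repeat=3)}

def toy_pas_encode(info_bits):
    """Toy PAS chain for 4-PAM (m = 2, q = 2, r = 0): shape amplitudes,
    generate parity, map (amplitude, sign) pairs to symbols."""
    assert len(info_bits) % 3 == 0
    # Layer 1: shaping; R_AM = m - 1 = 1 amplitude bit per channel use.
    amplitude_bits = []
    for i in range(0, len(info_bits), 3):
        amplitude_bits += DM_TABLE[tuple(info_bits[i:i + 3])]
    # Layer 2: systematic FEC; a toy rate-1/2 parity p_k = a_k XOR a_{k-1}.
    # R_C = 1/2 is exactly the minimum imposed by the compatibility
    # constraint for the 4-PAM / 16-QAM case with r = 0.
    parity_bits, prev = [], 0
    for b in amplitude_bits:
        parity_bits.append(b ^ prev)
        prev = b
    # Layer 3: modulation; the shaped bit picks the point in the fundamental
    # region {1, 3}, the (roughly uniform) parity bit picks the sign.
    return [(1 if a == 0 else 3) * (1 if p == 0 else -1)
            for a, p in zip(amplitude_bits, parity_bits)]

print(toy_pas_encode([0, 1, 1, 0, 0, 0]))  # 6 info bits -> 8 4-PAM symbols, R_T = 3/4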
4) Compatible Rates:
A compatibility criterion for the set of rates {r, R_DM, R_C, R_AM} is easily obtained from Fig. 9. Consider the layered encoding flow. We see that the system is constrained at the selective node when the end (modulation) layer is processed. The constraint reads N = r̄/(R_DM·R_AM)·M = (r̄·(1 − R_C) + r·R_DM)/(R_DM·R_C)·M. Its satisfaction implies a dependency between the rates: R_C − r·R_C − R_AM + R_AM·R_C + r·R_AM − r·R_AM·R_C − r·R_AM·R_DM = 0. When solved for R_C, it shows that R_C = R_AM/(1 + R_AM)·(1 + r/(1 − r)·R_DM), i.e., the choice of the core channel code may be restricted to particular code rates. A first example is the binary case with 2^m-PAM, for which it is required to have R_C = (m − 1)/m·(1 + r/(1 − r)·R_DM) ≥ (m − 1)/m (achieved for r = 0). This translates into R_C ≥ 1/2 for 16-QAM or R_C ≥ 2/3 for 64-QAM (bit-triggered region selection [29]). A second example is the q-ary case with q^2-CQAM, for which R_C = 1/2·(1 + r/(1 − r)·R_DM) ≥ 1/2 for any q^2-CQAM (symbol-triggered region selection [32]).
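As a quick numerical check of the constraint above (illustrative code, not from the paper; the overall rate R_T that it also evaluates is the one discussed in the next subsection), one may write:

```python
def compatible_code_rate(r, r_dm, r_am):
    """R_C imposed by the compatibility constraint:
    R_C = R_AM / (1 + R_AM) * (1 + r / (1 - r) * R_DM)."""
    return r_am / (1.0 + r_am) * (1.0 + r / (1.0 - r) * r_dm)

def overall_rate(r_c, r_dm, r_am):
    """Overall PAS coding rate R_T = (1 + R_AM) * R_C - R_AM * (1 - R_DM)."""
    return (1.0 + r_am) * r_c - r_am * (1.0 - r_dm)

# Binary running example: 64-QAM from two 8-PAM dimensions (m = 3, R_AM = 2).
print(compatible_code_rate(r=0.0, r_dm=0.9, r_am=2))      # 0.666..., i.e. R_C >= 2/3
# q-ary running example: q^2-CQAM (R_AM = 1) with a split r = 0.1.
rc = compatible_code_rate(r=0.1, r_dm=0.9, r_am=1)
print(rc, overall_rate(rc, r_dm=0.9, r_am=1))              # 0.55 and 1.0 (= 2*R_C - 1 + R_DM)
```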
5) PAS Information Rate:
The splitting rate r provides the designer with the degree of freedom that is necessary to satisfy the rate constraint. When solved for r, the compatibility constraint gives r = 1 − R_AM·R_DM/(R_C − R_AM + R_AM·R_C + R_AM·R_DM). Therefore the overall PAS coding rate is R_T = M/N = R_DM·R_AM/r̄ = (1 + R_AM)·R_C − R_AM·(1 − R_DM). In our binary and non-binary running examples, this gives R_T = m·R_C − (m − 1)(1 − R_DM) for 2^m-PAM-based schemes and R_T = 2R_C − 1 + R_DM for q^2-CQAM-based schemes, respectively. Expressed in binary units, those rates give the operational spectral efficiency of the respective coding systems. For example, for the two-real-dimensional constellations with 2^m points per dimension of our running examples, the respective system capacities in bits per channel use are R^b_T = 2m·R_C − 2(m − 1)(1 − R_DM) for 2^m × 2^m-QAM (bit-triggered) and R^b_T = 2·log2(q)·R_C − log2(q)·(1 − R_DM) for q^2-CQAM (phase-triggered), i.e.,
R^b_T = 2m·R_C − m·(1 − R_DM) for (2^m)^2-CQAM. Notice that the maximal transmitted entropy is H(X) = H(V_1) + H(S), as the region and the points within the fundamental region are independent. For our running examples, we see that the binary entropy becomes H(X) ≤ log2(q)·R_DM + log2(q) ≤ 2·log2(q). This represents the maximal amount of information that PAS may transmit. | 2018-04-03T00:16:02.534Z | 2018-03-06T00:00:00.000 | {
"year": 2018,
"sha1": "3e7839993724516eaad44b4bcad4e93c0ab05dd5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1803.02206",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d2f5bd0d69bd62b34ccaaf1114c291b5ffc7883f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
250680671 | pes2o/s2orc | v3-fos-license | Relativistic many-body XMCD theory including core degenerate effects
A many-body relativistic theory to analyze X-ray Magnetic Circular Dichroism (XMCD) spectra has been developed on the basis of relativistic quantum electrodynamic (QED) Keldysh Green's function approach. This theoretical framework enables us to handle relativistic many-body effects in terms of correlated nonrelativistic Green's function and relativistic correction operator Q, which naturally incorporates radiation field screening and other optical field effects in addition to electron-electron interactions. The former can describe the intensity ratio of L2/L3 which deviates from the statistical weight (branching ratio) 1/2. In addition to these effects, we consider the degenerate or nearly degenerate effects of core levels from which photoelectrons are excited. In XPS spectra, for example in Rh 3d sub level excitations, their peak shapes are quite different: This interesting behavior is explained by core-hole moving after the core excitation. We discuss similar problems in X-ray absorption spectra in particular excitation from deep 2p sub levels which are degenerate in each sub levels and nearly degenerate to each other in light elements: The hole left behind is not frozen there. We derive practical multiple scattering formulas which incorporate all those effects.
Introduction
Many-body X-ray Absorption Fine Structure (XAFS) theory has been developed on the basis of many-body scattering theory [1,2] and Keldysh Green's function theory [3,4,5] in the framework of non-relativistic theory. In contrast to XAFS analyses, the relativistic effects are crucial in the analyses of XMCD spectra even for light transition elements. An excellent review article is available now [6]. Brouder et al. have developed a one-electron XMCD theory based on Gesztesy expansion, which represents the Dirac Green's function in terms of full non-relativistic Green's function [7]. A similar one-electron relativistic XMCD theory is applied to K-edge XMCD analyses [8]. Alternative interesting one-electron relativistic approaches are also used in XMCD analyses where electron-electron interaction is included within mean-field approximation [9,10].
Some recent works discuss the large deviation of the intensity ratio L 2 /L 3 from the branching ratio 1/2. This observation was ascribed primarily to the electron core-hole interaction. A simple approach to include core-hole effects was applied by calculating the final-state wave functions for a core-hole potential [7]. A more sophisticated approach has been developed in the framework of time-dependent density functional theory and the linear response formalism [11]. Application to the L 2,3 absorption spectra of 3d transition metals demonstrates that the electron core-hole interaction intermixes the L 2 and L 3 spectra, strongly affecting the branching ratio.
Further applications to more complicated systems are found in the literature [12]. A similar approach has also been proposed using time-dependent density functional theory [13]. These approaches are practically useful, but it is hard to relate them to local field (radiation field screening) effects.
In addition to these effects, we consider the degenerate or nearly degenerate effects of the core levels from which photoelectrons are excited. In XPS spectra, for example in Rh 3d sub level excitations, the peak shapes of the sub levels are quite different: this interesting behavior is explained by core-hole motion [14]. Onodera discussed the asymmetric line shape of X-ray absorption spectra at the Na L 2 and L 3 edges influenced by nearly degenerate core-hole effects [15]. We discuss similar problems in X-ray absorption spectra, in particular excitation from deep 2p sub levels, which are degenerate within each sub level and nearly degenerate to each other: the hole left behind is not frozen there.
In this work we develop a new many-body XMCD theory based on quantum electrodynamics (QED), where relativistic Keldysh Green's functions are extensively used because of their wide applicability. So far the present author has developed a relativistic QED XAFS theory in the framework of Keldysh Green's function theory [16,17]. There, the main interest was to develop a sound many-body XAFS theory which also includes some important relativistic effects. Here we show that the above theoretical framework can naturally take two important many-body effects (screening effects and core-hole degenerate effects) into account within the QED and Keldysh Green's function framework, and we derive a practical XMCD formula which incorporates those two effects in relativistic multiple-scattering theory.
Relativistic QED Theory for X-ray Absorption Processes
We define the nonequilibrium photon Green's function D µν by use of the functional derivative of < A µ (1) > with respect to external current j ext , (µ, ν = 0, 1, 2, 3) which is also written as correlation function for space components ( T c is the path ordering operator on the Keldysh closed contour, i, j =1,2,3 ), When t 2 is on + leg ( +∞ → −∞), p 2 = 1, and −1 on − leg ( −∞ → +∞). Only the time component D 00 = D 00 has singular part v (bare Coulomb potential) where δ c (t 1 − t 2 ) is the delta function on the closed path. Electron Green's function for the Dirac field is defined by which satisfies the Dyson equation written in a.u. (c = 137) The time integration is taken over the closed path c, −∞ → +∞ → −∞. The electron selfenergy Σ describes both electron-electron and electron-photon interaction in the QED theory in the Coulomb gauge [18] Of course all of these are meaningless, unless they are renormalized. The X-ray absorption rate of the photons in the state k = (k, s) is obtained by use of the time derivative of the averaged photon number We should note that the rate w(k) has different contributions from X-ray emission and also scattering in addition to the absorption. We now define d i (k; r)(i = x, y, z) by use of the photon polarization vector e(k) (e(k) ⊥ k, s = 1, 2; V is the volume for the normalization box), really includes the averaged photon number < n k (t) > which contributes to the X-ray absorption intensity [16,17]. Our main task is thus to systematically calculate the photon Green's function d >lm , which satisfies the Dyson equation slightly changed from the conventional one where matrix product AB is defined as is the free photon Green's function shown later, and we have defined The photon Green's function used in eq. (8) is now given by The first term of eq. (8) cannot describe the electron-photon interaction. To obtain the X-ray absorption rate, we calculate the time derivative of the second term, which yields the following formula by use of infinitesimally small positive η to take causality into account From eq. (8) to the above equation, we change the time integral from ∫ c dt to ∫ ∞ −∞ dt and take Fourier transform of it. Explicit formulas of the free photon Green's functions show that only d > 0 and d < 0 have terms linear in the averaged photon number < n k >, which contribute to the X-ray absorption processes, for example, (k = (−k, s)) The linear term in < n k + 1 > contributes to the X-ray emission processes. We pick up all terms contributing to the X-ray absorption, which yields a fundamental formula to describe relativistic X-ray absorption rate [16,17] w(k) ∝ −Im We first calculate the lowest order term in eq. (11) By using the Gordon relation [19] and a spectral representation of g < which neglects unphysical negative energy states, we have a compact formula for the absorption rate w(k), We have defined the intrinsic amplitude S n , the annihilation operator b associated with the core level φ c , and the difference of positive physical energies ε n = E 0 (N ) − E n (N − 1). Here ∆ k is the widely used one-electron electron-photon interaction operator, ∆ k ∝ rY 1,±1 (r) for the circularly polarized X-rays (k ∥ z ). In the above equation X-ray absorption from the core state ( four spinor) φ c is considered. 
The spectral representations of g r and g > provides another expression for w(k) [16] The relativistic 4 × 4 retarded Green's function g r with Hartree potential and selfenergy Σ r can be expanded up to the relativistic order in terms of correlated nonrelativistic 2 × 2 Green's functions by use of nonrelativistic kinetic energy operator T e = p 2 /2 [20], The quasi nonrelativistic one-electron Green's function g r 11 satisfies the closed Dyson equation for the quasi nonrelativistic electron selfenergy Σ r 11 . The relativistic electron-self energies Σ 12 and Σ 21 are in the order of Q, and Σ 22 is in the order of c −2 . We write the core function φ c in terms of large component ϕ c and small component χ c , Both ϕ c and χ c are eigenstates of J 2 , L 2 and J z , and are given in terms of Pauli spinor y l jµ and radial functions g c and f c . We thus obtain a useful many-body relativistic X-ray absorption formula The first term has finite contribution even in the nonrelativistic limit c → ∞, whereas the others vanish in that limit. We next take the second term in eq.(11) PDP into account. The space element (PDP ) lm is written down as As electron-electron interaction is much stronger than electron-photon interaction in conventional XAFS measurements, only the first term plays an important role. By noticing the relation (3) and P + PW P = P + P vP + P (vP ) 2 + . . ., we obtain an important relation by applying the Langreth rule A symmetric relation for the irreducible polarization P r and P a P r (r 1 , r 2 ; ω) = P a * (r 2 , r 1 ; ω) leads to the following relativistic formula for the X-ray absorption intensity including the radiation field screening, We thus obtain a practical formula to calculate the X-ray absorption intensity including the screened radiation field by replacing ∆ k by ∆ sc k . We define the dynamically screened electronphoton interaction operator ∆ sc k as It is better to note that the linear response theory cannot derive eq. (18) [11] because it includes only linear term of P > , whereas infinite order terms are renormalized in eqs. (18) and (19). To incorporate the radiation field screening in XAFS formula (18), we only use the Langreth rule for the calculation of (P + PW P ) > . In contrast we have to use the mixed photon Green's functions like D i0 and D 0i to incorporate it in the photoemission formula [21]. The inverse dielectric function (ε −1 ) a (ω k ) plays an important role near edge region in X-ray absorption processes. This suggests that we should also include the radiation field screening in the analyses of XMCD spectra. A very similar XAFS formula has been obtained in terms of inverse dielectric function for the radiation field screening in nonrelativistic theory [28]. We, however, note that relativistic effects are partly included in P a in eq. (19), but they are small enough. We can thus use nonrelativistic approximation for (ε −1 ) a (ω k ) , which is explicitly written by use of As ω is close to the core threshold ω ≈ ε 0 , the second term can be neglected because of large energy denominator. The reducible polarization π a is thus simplified, after we restrict the excited states p to the core hole states on c where ϕ c is the large component of φ c , ρ c is the density matrix which describes the electronic structure of ionized systems: One core-electron is ejected from the core level φ c but outer valence electrons are not relaxed. Near threshold η is replaced by finite positive value Γ/2 taking the lifetime effects of the core hole [23]. 
By use of these approximations, the screened electronphoton interaction operator ∆ sc k explicitly gives rise to resonant features near threshold, whose explicit expressions will be separately shown later for different core excitations.
So far we have used an approximation for the four-spinor Dyson amplitude g ′ n ≈ S n φ c in eq. (13), which works quite well for deep core X-ray absorption. For L 2,3 core excitation, some examples show the breakdown of the above simple approximation. The spinor function g ′ n satisfies the equation with bound state boundary condition To solve the above equation we can safely apply degenerate perturbation theory, since we can where b µ is the electron annihilation operator for the core level φ µ and we notice that All energies ε 0 µ are quite close to ε 0 . We thus expect that the electron selfenergy Σ can mix the degenerate hole states each other.
L 23 -edge XMCD
Relativistic effects split 2p core levels into two different sub-levels with j c = 1/2 and 3/2. XMCD formulas are considerably different for the excitation from heavy and light elements; for the latter L 2 and L 3 edges are close enough and their hole states are nearly degenerate.
XMCD for heavy absorbers
In this case the energy separation between the L 2 -and L 3 -edges is sufficiently large. The L 2 -edge XMCD is thus not influenced by the L 3 -edge excitation because the core degenerate effects can be neglected. In each atomic region the density matrix ρ c can be approximated by where electron distribution in each atomic core region is assumed to be spherically symmetric. This simple approximation works so well for spherical atoms like Gd. Substitution of this density matrix into eq. (22) yields an explicit formula for the screened electron-photon interaction, We should note that the radial part A ± (ω; r) depends on the photon energy ω and the circular polarization. For the L 2 -edge excitation (j c = 1/2), we have The radial function G c and F ± originate from the first and the second term in [ ] of eq. (22): F ± (r) depends on the X-ray circular polarization through spin polarization in the Xray absorbing atom. In the case where it is nonmagnetic, they does not depend on the circular polarization, F + = F − . By use of the core radial part, they are defined .
For the L 3 -edge excitation (j c = 3/2), we have We again obtain the relation F + = F − for the X-ray absorption at nonmagnetic atom, as observed before.
We can obtain g ′ n by diagonalizing the matrix < φ µ |βΣ r |φ µ ′ >, which is approximated by < ϕ µ |Σ 11 |ϕ µ ′ > (see eq. (23), ϕ µ = g d (r)y 1 1/2,µ (r)), where only the large components are taken into account. The quasi nonrelativistic electron selfenergy Σ 11 has exchange V ex and polarized GW term GW p (W = v + W p ) in lowest order. For the exchange potential we only need the density matrix near the nucleus which is approximated spherically symmetric distribution but spin dependent The matrix < ϕ µ |V ex |ϕ µ ′ > is already diagonal, The diagonal elements depend on µ if there exits spin polarization at the X-ray absorption site; in that case the core threshold is splitted into two. Equation (27) is only applicable to the XMCD from spherical ions like Gd 3+ (4f 7 ; 8 S 7/2 ) and Mn 2+ (3d 5 ; 6 S 3/2 ). For the XMCD from nonspherical ions like Ce 3+ (4f 1 ; 2 F 5/2 ), we should slightly generalize eq. (27) By using this density matrix, we can obtain a diagonal matrix in terms of Gaunt integral where The electron selfenergy whose lowest order term V ex thus splits the core hole threshold depending on µ. The photoelectron wave k ↑ µ and k ↓ µ depend on µ and the spin polarization through the spin dependent E 0 . Including these many-body effects and applying spin-dependent multiple scattering theory, we have a practical XMCD formula where +pol. → −pol. means the matrix elements calculated for the − circular polarization of the first parenthesis, and we have defined Γ in terms ofĜ LL ′ = [G(1 − tG) −1 ] AA LL ′ ; G and t are widely used free propagator and T matrix in angular momentum representation which are familiar to us in XPD and XAFS calculations [8] The radial matrix element R l jc (k ↑ µ ; ω) ± is given by use of the radial part g jc of large component of the core function, R l (k ↑ µ r) of photoelectron wave with orbital angular momentum l, and the radial part A mp of ∆ sc k (see eq. (25); A mp depends on g jc ), The spin-dependence of the core hole wavefunction was stressed by Ebert [24]. For the L 3 -edge, the XMCD spectra is given in terms of Γ ms .
The polarization dependent Γ are slightly different from those given above for the L 2 -edge XMCD, for example, In case the X-ray absorbing atom is nonmagnetic, the radial matrix element R l jc (k; ω) does not depend on the circular polarization. In eq. (31) the core function g jc is different for the different cores at L 2 and L 3 thresholds. For light elements the difference is small. We thus obtain a simple relation which works so well for XMCD from light 3d elements with small orbital moments In other cases we observe the breakdown of the relation (33).
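Numerically, the renormalized propagator Ĝ_LL' = [G(1 − tG)^(−1)]_LL' that enters the Γ matrices above is obtained by a single matrix inversion in the (site, angular momentum) basis. The following sketch is purely illustrative: the free propagator G and the atomic t-matrix are replaced by small random complex matrices, since realistic values would come from an actual multiple-scattering (XAFS/XPD) code.

```python
import numpy as np

def renormalized_propagator(G, t):
    """Compute G_hat = G (1 - t G)^(-1), i.e. the multiple-scattering series
    G + G t G + G t G t G + ... summed to all orders."""
    n = G.shape[0]
    return G @ np.linalg.inv(np.eye(n) - t @ G)

# Toy dimensions: a handful of (site, L) channels with made-up matrices.
rng = np.random.default_rng(0)
G = 0.1 * (rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6)))
t = np.diag(0.2 * rng.standard_normal(6) + 0.1j)   # toy atomic t-matrix, diagonal in L
G_hat = renormalized_propagator(G, t)

# Check against the truncated scattering series G + GtG + GtGtG + ...
series = sum(np.linalg.matrix_power(G @ t, k) @ G for k in range(30))
print(np.allclose(G_hat, series, atol=1e-6))
```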
XMCD for light absorbers
In this case the nearly degenerate effects between L 2 and L 3 edges have to be taken into account. We thus have to calculate inter sub shell matrix elements like < g 1/2 y µ |V ex |g 3/2 y ν > where we use abbreviation for Pauli spinors y ν = y 1 3/2,ν , y µ = y 1 1/2,µ . In the spherical approximation (27), we obtain by use of the radial matrix elements ρ ′ms ll ′ similar to ρ ms ll ′ but with mixing of g 1/2 and g 3/2 This mixing is negligible when the X-ray absorbing atom is nonmagnetic because ρ ′↑ ll ′ = ρ ′↓ ll ′ . The mixing factor a is the same for µ = ±1/2, so that we drop the subscript µ. The diagonalization of the 2×2 matrix gives larger energy splitting than ε so ( the energy splitting of L 2 and L 3 edges in the Hartree approximation; it is about 6.5eV for Ti), and also gives mixing between g 1/2 y 1 1/2µ and g 3/2 y 1 3/2,ν . More general density matrix (28) also provides very similar diagonal matrix to that shown by eq. (35). The mixing thus has influence on radial part for the one-electron states with µ = ±1/2: The radial parts with ν = ±3/2 are not affected by the mixing. For the levels with µ = ±1/2, the mixed radial functions g ′ 1/2 and g ′ 3/2 are written in linear combination where the mixing coefficients are determined by diagonalizing the matrix (35). Even for Ti ε so (≈ 6eV) is much larger than the lifetime broadening of L 23 levels (≈ 0.2 eV). One pole approximation (25) should be quite good, but we can easily generalize A ± in eq. (25) to multipole approximation . These considerations yield XMCD formula similar to the previous formulas: For ∆I 1/2 (ω), the radial function g 1/2 is replaced by g ′ 1/2 in eq. (31), and for ∆I 3/2 (ω) the radial function g 3/2 is only replaced by g ′ 3/2 when the sum over ν runs over ν = ±1/2 in eq. (32). So far we have considered the degenerate effects on the 4 spinor g ′ n , which split the degenerate energy ε d (L 2 level energy ) and ε c (L 3 level energy ) into 2 and 4 different levels which explicitly depends on µ and ν. These effects can also mix the inter sub levels with the same µ and ν. Furthermore we should investigate these effects on photoelectrons. To describe the photoelectron propagation in solids after the core excitation, the quasi nonrelativistic retarded Green's function g r 11 (ε) in eq. (18) plays a crucial role where h sc and Σ r = Σ r 11 describe elastic scatterings of photoelectrons : The latter describes the photoelectron damping responsible for photoelectron mean free path [26] and also core degenerate effects on photoelectron propagation. In the present case it is sufficient to consider the projected Green's function. For the description, we now introduce the projection operators P µ and P ν , which project photoelectron scattering states into the states under the influence of core hole states on µ and ν. To obtain an X-ray absorption formula including the degenerate effects on photoelectrons, we should calculate the diagonal parts P µ g r P µ ( hereafter we use abbreviation g r for g r 11 ). Direct calculation yields with aid of projection operator algebra [23,25] The first term of eq. (37) g µ P µ describes the X-ray absorption processes where hole left behind is fixed even after the core excitation, the second the hole is moving inside the same sub level, while the third the hole is moving to another sub level. We can also include all these effects in the multiple scattering XMCD formulas (30) and (31). We should add a caution concerning the Dyson orbital g ′ n which is defined by eq. 
(13) and satisfies eq. (23). In the Hartree-Fock frozen approximation it is simply reduced to the initial level wavefunction, however, it is affected both from initial and final states as defined by eq. (13) beyond the simple approximation: The energy ε n corresponds to the exact core hole threshold energy for the nth core hole state in the nonrelativistic approximation.
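The inter-sublevel mixing discussed above reduces, for each µ = ±1/2, to the diagonalization of a 2×2 matrix whose diagonal holds the unperturbed L 2 /L 3 sublevel energies and whose off-diagonal element is the exchange-induced mixing a. The numbers below are invented for illustration (only the roughly 6.5 eV Hartree-level splitting quoted for Ti is taken from the text); a real calculation would evaluate a from the radial matrix elements defined above.

```python
import numpy as np

# Toy 2x2 mixing matrix (in eV) for one value of mu:
# diagonal = unperturbed L2/L3 sublevel energies, off-diagonal = exchange mixing a.
eps_so = 6.5          # Hartree-level L2-L3 splitting, of the order quoted for Ti
a = 0.8               # invented inter-sublevel exchange matrix element
H = np.array([[0.0, a],
              [a, -eps_so]])

energies, vectors = np.linalg.eigh(H)      # new sublevel energies and mixing vectors
splitting = energies[1] - energies[0]
print(f"mixed splitting = {splitting:.2f} eV (larger than {eps_so} eV)")
print("mixing coefficients for g'_1/2 and g'_3/2:", vectors[:, 1], vectors[:, 0])
```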
K-edge XMCD
In this case the degenerate core effects can be neglected. The screened electron-photon interaction operator is again written as eq. (25), and G and F ± are now now given by G(r) = G c (r)/3, F ± (r) = (G ↑ 11 (r) + G ↓ 11 (r))/3 The factor F ± does not depend on the circular polarization, and that is also the case for A ± defined in eq. (25). So far our main concern has been focused on the first term of eq. (17), because the splitted shells caused by the relativistic effects mainly contribute to the XMCD. On the other hand the relativistic effects for photoelectrons play a crucial role in K-edge XMCD [7,8]. The second term of eq. (17) includes the operator Q(V H − ω)Q, which is rewritten [7] for spherically symmetric potential V H . The spin-orbit coupling term, the last term, can give finite contribution to the XMCD, whereas the first two terms give no contribution to the XMCD. We obtain multiple scattering XMCD formula at K-edge Here it is convenient for us to separately calculate the K-edge XMCD, atomic XMCD ∆I atom 0 and multiple-scattering (MS)-XMCD (the second term of eq.(38)). Of course we expect no atomic XMCD in the case where the X-ray absorbing atom is nonmagnetic. In contrast MS-XMCD is finite even in those cases if some of nearby atoms are magnetic. The spin-dependent radial integrals are defined by use of A(ω; r) which is independent of X-ray polarization where g Al is the radial part of the propagator at the X-ray absorption site A, For K-edge XMCD, main frame of one-electron relativistic XMCD theory still works. We can, however, explicitly use optical potentials for excited photoelectrons.
Concluding Remarks
In this paper we derive a new many-body XMCD theory based on relativistic quantum electrodynamics. Our discussion is given separately for a spherical core (K-edge) and nonspherical cores (L 2 and L 3 edges), because for the latter the relativistic effects are essential in splitting the L 23 shell into the sub levels L 2 and L 3. In K-edge excitation the relativistic effects on photoelectrons are essential to give rise to finite XMCD. These characteristic features were already observed in previous XMCD theories, although the optical potential cannot be taken into account in those one-electron theories [7,8].
The present theoretical framework enables us to take relativistic many-body effects into account, such as radiation field screening and degenerate effects of core levels, which are explicitly investigated here. The radiation field screening is important near core thresholds as well as in the plasmon excitation region [27], as observed in MARPE analyses [22]. The radiation field screening provides an energy-dependent effective electron-photon interaction operator A ± (ω; r)Y 1,±1 (r). One remarkable feature is that the radial part A ± (ω; r) depends on the circular polarization when the X-ray absorbing atom is spin-polarized, and also on the photon energy ω. It strongly deviates from r only near threshold. Nearly degenerate effects are incorporated in the present theoretical framework. In L 2 - and L 3 -edge XMCD, they split each of the degenerate sub levels. In XMCD from light elements like 3d transition metals, inter sub level mixing with the same quantum number µ should be taken into account. The radiation field screening and the nearly degenerate effects both contribute to the strong deviation from the branching ratio, and can also give rise to XMCD spectra that differ from the simple rule (33). These many-body effects can be incorporated in the framework of the multiple scattering theory developed so far. Both effects are not so influential in K-edge XMCD spectra. Other optical field effects can naturally be incorporated based on this relativistic QED theory, as pointed out in our previous paper [28]. Detailed numerical calculations will be demonstrated in a forthcoming paper. | 2022-06-28T03:18:14.415Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "a60e8b2230fc370e1305f810d8b18b4d6895c2b9",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/190/1/012014/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a60e8b2230fc370e1305f810d8b18b4d6895c2b9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
58592778 | pes2o/s2orc | v3-fos-license | Altered VEGF Splicing Isoform Balance in Tumor Endothelium Involves Activation of Splicing Factors Srpk1 and Srsf1 by the Wilms’ Tumor Suppressor Wt1
Angiogenesis is one hallmark of cancer. Vascular endothelial growth factor (VEGF) is a known inducer of angiogenesis. Many patients benefit from antiangiogenic therapies, which however have limitations. Although VEGF is overexpressed in most tumors, different VEGF isoforms with distinct angiogenic properties are produced through alternative splicing. In podocytes, the Wilms’ tumor suppressor 1 (WT1) suppresses the Serine/arginine-rich protein-specific splicing factor kinase (SRPK1), and indirectly Serine/arginine-rich splicing factor 1 (Srsf1) activity, and alters VEGF splicing. We analyzed VEGF isoforms, Wt1, Srpk1, and Srsf1 in normal and tumor endothelium. Wt1, Srpk1, Srsf1, and the angiogenic VEGF164a isoform were highly expressed in tumor endothelium compared to normal lung endothelium. Nuclear expression of Srsf1 was detectable in the endothelium of various tumor types, but not in healthy tissues. Inducible conditional vessel-specific knockout of Wt1 reduced Wt1, Srpk1, and Srsf1 expression in endothelial cells and induced a shift towards the antiangiogenic VEGF120 isoform. Wt1(−KTS) directly binds and activates both the promoters of Srpk1 and Srsf1 in endothelial cells. In conclusion, Wt1 activates Srpk1 and Srsf1 and induces expression of angiogenic VEGF isoforms in tumor endothelium.
Besides the transcriptional regulation of VEGF by WT1, WT1 is also involved in Vegf RNA splicing in murine hematopoiesis [18] and in podocyte cell lines [19]. Different VEGF isoforms are generated via alternative splicing of exons 6, 7, and 8 of the VEGF gene resulting mainly in human VEGF 189, 165, and 121 variants (Vegf 188, Vegf 164, and Vegf 120 in mice, respectively). Pro-angiogenic isoforms (VEGF-A xxx ) are generated by proximal and anti-angiogenic (VEGF-A xxx b) forms by distal splice site selection in exon 8 (reviewed in [1,20]). Lack of Wt1 in murine hematopoietic progenitor cells results in a shift towards the Vegf 120 isoform, which is associated with apoptotic cell death [18]. In human podocyte cell lines, WT1 binds and suppresses the promoter of the splicing factor kinase SRPK1. In WT1 mutant podocytes, SRPK1-mediated hyperphosphorylation of the RNA binding protein SRSF1 results in a shift from the anti-angiogenic VEGF 165b towards the pro-angiogenic VEGF 165a isoform [19].
Endogenous VEGF in endothelial cells is important to regulate key vascular proteins and maintain endothelial cell homeostasis [21], but little is known about VEGF isoform expression in endothelial cells from different organs, tumors, and their regulation. Therefore, we determined VEGF isoforms, Wt1, Srpk1, and Srsf1 expression in normal and tumor endothelial cells. We generated an inducible conditional endothelial cell-specific knockout mouse model for Wt1 to analyze the effect on Srpk1 and Srsf1 expression and Vegf isoform distribution.
We show that, in tumor endothelium, Wt1, Srpk1, and Srsf1 are upregulated compared to normal lung endothelium. Wt1 functions as direct activator of both Srpk1 and Srsf1, and affects Vegf isoform distribution in endothelial cells. Disrupting Wt1 in endothelial cells reduces Srpk1 and Srsf1 expression and alters Vegf isoform distribution, which might contribute to the antitumor activity upon targeting Wt1.
Animals
All animal work was conducted according to national and international guidelines and was approved by the local ethics committee (PEA-NCE/2013/106). Wt1 Lox/Lox and Tie2-CreERT2 animals were crossed to generate Tie2-CreERT2; Wt1 Lox/Lox mice [22]. All animals were backcrossed four times onto the C57/BL6 genetic background. The genotype of animals was identified by PCR using the following oligonucleotides and PCR conditions: Cre- Age-matched Tie2-CreERT2; Wt1 Lox/Lox male and female mice were injected for one week intraperitoneally with either sunflower oil (vehicle) or Tamoxifen dissolved in sunflower oil in a dose of 33 mg/kg per day [23]. Age-matched single Tie2-CreERT2 transgenic animals injected with Tamoxifen served as additional controls for Cre and Tamoxifen effects. One week after the last Tamoxifen or vehicle treatment, 1 × 10 6 B16F10 or LLC1 tumor cells were injected subcutaneously. Tumors and organs were collected after three to four weeks. C57/BL6 animals were used for isolation of endothelial cells from lungs or tumors. In these animals, tumors were induced by subcutaneous injection of 1 × 10 6 LLC1 tumor cells.
Endothelial Cell Isolation
Mouse lung and tumor endothelial cells (EC) were isolated from C57/BL6 mice as previously described [24,25]. Alternatively, B16 or LLC1 tumors were isolated from Tie2-Cre ERT2 ; Wt1 Lox/Lox mice treated with Tamoxifen or vehicle. Briefly, lung and tumor tissues were cut into small fragments and digested with 1 mg/mL collagenase A and 100 IU/mL type I DNase (Roche Diagnostics, Meylan, France) for 45 min at 37 • C. ECs were then purified from the cell suspension using a rat anti-CD31 antibody (clone MEC 13.3; BD Biosciences, San Jose, CA, USA) conjugated to Dynabeads (Life Technologies, Courtaboeuf, France) using a magnetic particle concentrator and cultured on 0.2% type I collagen-coated plates (Sigma Aldrich, St. Louis, MO, USA) in DMEM medium supplemented with 20% FCS, 100 IU/mL penicillin, and 100 µg/mL streptomycin. Endothelial cell purity was confirmed by FACS analysis using Alexa Fluor 647 anti-mouse VE-cadherin antibody (Clone: BV13; BioLegend, San Diego, CA, USA) and anti-mouse Alexa Fluor 488 Fab 2 recognizing the VE-cadherin antibody.
RT-PCR and Quantitative RT-PCR
Total RNA was isolated using the Trizol reagent (Invitrogen). First-strand cDNA synthesis was performed with 0.5 µg of total RNA using the Thermo Scientific Maxima First Strand cDNA synthesis kit (Thermo Scientific, Illkirch, France). The reaction product was diluted to 100 µL and 1 µL of the diluted reaction product was taken for real time RT-PCR amplification (StepOne plus, Applied Biosystems, Foster City, CA, USA) using the SYBR ® Select Master Mix (Applied Biosystems). Expression of each gene was normalized to the respective arithmetic means of Gapdh (NM_001289726.1), Actnb (NM_007393.5), and Rplp0 (NM_007475.5) expression. Vegf isoform expression was determined as described using identical PCR conditions and primers [18,26]. Vegf PCR products were analyzed on agarose gels with 100 bp molecular marker (Life Technologies) to verify that the PCR products correspond to the predicted size. Primer sequences are listed in Table 1.
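As an illustration of the normalization step described here, the snippet below computes relative expression from hypothetical Ct values, taking 2^(−Ct) as the expression measure and dividing by the arithmetic mean of the three reference genes. This is a simplified single-replicate sketch, not the exact analysis pipeline of the study, and the Ct numbers are made up.

```python
import numpy as np

# Hypothetical raw Ct values for one sample (not data from the study).
ct = {"Wt1": 27.4, "Srpk1": 25.1, "Srsf1": 24.3,
      "Gapdh": 18.2, "Actnb": 19.0, "Rplp0": 20.1}

def relative_expression(ct_values, target, references=("Gapdh", "Actnb", "Rplp0")):
    """Normalize 2^(-Ct) of a target gene to the arithmetic mean of the
    reference-gene expression values."""
    expr = {gene: 2.0 ** (-c) for gene, c in ct_values.items()}
    ref_mean = np.mean([expr[g] for g in references])
    return expr[target] / ref_mean

for gene in ("Wt1", "Srpk1", "Srsf1"):
    print(gene, relative_expression(ct, gene))
```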
Tissue Samples and Immunohistology
Paraffin-embedded samples, cut at 3 µm, were used for immunohistochemical detection of SRSF1. For immunohistology, after heat-mediated antigen retrieval at pH 6 and quenching of endogenous peroxidase activity, SRSF1 was detected using a polyclonal goat antibody from Santa Cruz (Heidelberg, Germany) in a dilution of 1:100, and the EnVisionTM Peroxidase/DAB Detection System from Dako (Trappes, France). For human melanoma tumors, the DAB substrate was replaced by VIP substrate (Vector, CliniSciences, Nanterre, France). The study adhered to the principles of the Declaration of Helsinki and to Title 45 of the U.S. Code of Federal Regulations (Part 46, Protection of Human Subjects). In total, 40 paraffin-embedded human tumor samples (10 lung cancers, 10 melanomas, 10 pancreas cancers, and 10 colon cancers) were used for this study. Negative controls were obtained by incubation of samples with a goat IgG Control (Invitrogen, Courtaboeuf, France). Sections were counterstained with Hematoxylin (Dako Trappes, France) and analyzed by two independent investigators, one of them was an experienced pathologist. Slides were viewed under an epifluorescence microscope (DMLB, Leica, Wetzlar, Germany) connected to a digital camera (Spot RT Slider, Diagnostic Instruments, Sterling Heights, MI, USA).
Cloning and Transient Transfection Experiments
The Srsf1 promoter construct was a kind gift of S.V. Graham [28]. The Srpk1 promoter construct was described previously [19]. As vector backbone, pGl3 basic (Promega, Charbonnières-les-Bains, France) was used for both constructs. A co-transfected beta-Galactosidase construct was used to normalize for differences in transfection efficiencies [29]. Each promoter construct was co-transfected with Wt1(−KTS) or Wt1(+KTS) expression constructs in C166 mouse endothelial cells using Lipofectamine LTX reagent (Thermo Scientific, Courtaboeuf, France) (n = 12 each). A putative Wt1 binding site was deleted from the Srsf1 promoter construct using the Quik Change II site directed mutagenesis kit (Stratagene, Agilent Technologies, Massy, France) with the following oligonucleotides: 5 -GTGGGGAGGGTGACGTTGAACGTAGCCCT-3 ; antisense: reverse complement. The deletion construct for the Srpk1 promoter has been published recently [19]. Deletion constructs were again co-transfected with Wt1(−KTS) or Wt1(+KTS) expression constructs (n = 12 each).
Statistical Analysis
Data are expressed as means ± S.E.M. Student's t-tests (Instat, GraphPad, San Diego, CA, USA) were performed to determine statistical significance. A p-value of less than 0.05 was considered significant.
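For illustration, an equivalent two-sample comparison can be run in SciPy (the study used the Instat/GraphPad software); the expression values below are made up.

```python
from scipy import stats

# Hypothetical normalized expression values for two groups (not study data).
lung_ec = [1.0, 0.8, 1.2, 0.9, 1.1]
tumor_ec = [2.4, 3.1, 2.8, 2.2, 3.5]

t_stat, p_value = stats.ttest_ind(tumor_ec, lung_ec)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```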
Wt1, Srpk1, and Srsf1 Are highly Expressed in Tumor versus Lung Endothelium
We isolated endothelial cells from lungs and tumors of wild-type mice by CD31 labeling and magnetic cell sorting and analyzed Wt1, Srpk1, and Srsf1 expression by quantitative RT-PCR ( Figure 1a). In agreement with our previous reports [22,29], we found quantitative significantly higher Wt1 expression in endothelial cells from tumors compared to normal lung endothelium. Although WT1 has been reported to suppress SRPK1 expression in human podocyte cell lines [19], we found that Srpk1 expression in tumor endothelium was higher compared to lung endothelial cells and also that Srsf1 expression was increased in tumor endothelial cells. Higher Srsf1 mRNA expression was equally unexpected as the established function of Srpk1 is the phosphorylation of Srsf1 [30], which results in nuclear import of Srsf1 [31] and affects VEGF splicing [19]. Therefore, we next analyzed VEGF splice variants, i.e., Vegf 188, Vegf 164, and Vegf 120 as well as Vegf 164 a and Vegf 164 b by reverse transcription (RT)-PCR using established oligonucleotides and PCR conditions [18,26] (Figure 1b-d). The Vegf 188 variant was highly expressed in lung endothelium, but not in tumor endothelial cells, while the angiogenic Vegf 164 variant showed increased expression in tumor compared to normal lung endothelial cells. No significant differences were observed for Vegf 120 between the two cell types ( Figure 1b, and quantification in Figure 1c). The antiangiogenic Vegf 164b isoform was barely detectable in lung endothelial cells, and the fraction compared to total Vegf 164 was also not different in tumor endothelium (Figure 1d). The finding that Vegf 188 is only expressed in the lung is in agreement with the literature [32,33]. Bacic et al. reported predominant expression of Vegf 188 in rat heart and lung, while. in all other tissues, this isoform was the least abundant. Unfortunately, only normal tissues, but no tumors have been investigated in this study [32]. In the lung, VEGF is expressed on alveolar epithelial type II cells, vascular endothelium and alveolar macrophages, but isoform expression in the different cell types has not been reported [34]. The role of VEGF 189 for cancer progression is highly controversial with reports showing the highest vessel density and poorest prognosis in tumors overexpressing VEGF 189 [35,36], while others showed less metastases and improved survival in breast cancer cell lines overexpressing VEGF 189 [37]. High Vegf 164 expression in tumor endothelium is in line with a pro-angiogenic phenotype of endothelial cells in tumors. In agreement with the high Srpk1 and Srsf1 expression in tumor endothelial cells, we detected increased levels of Vegf 164a [19,38,39], but only low expression of Vegf 164 b in tumor as well as lung endothelial cells. It has been described that, in the normal kidney, colon, bladder smooth muscle, lung, and pancreatic islets, VEGF b isoforms predominate, while, in colorectal carcinoma, bladder cancer, melanoma, prostate cancer cell lines, and de-differentiated podocytes, angiogenic VEGF a isoforms predominate [19,40,41] (reviewed in [1]). To our knowledge, little is known about the expression of Vegf a and Vegf b isoforms in endothelial cells of different vascular beds. As pro-angiogenic Vegf is required for endothelial cell survival [42], it is not surprising that we mainly detected angiogenic Vegf a variants in endothelial cells. 
Unfortunately, given the relatively small number of cells isolated using our magnetic separation protocol, we were not able to confirm Vegf 164 a and Vegf 164 b expression differences between tumor and lung endothelial cells on the protein level by ELISA.
Srsf1 Protein Is Differentially Expressed in Normal Tissue Endothelium Compared to Tumor Endothelium
Since we detected lower Srpk1 and Srsf1 expression in isolated endothelial cells from lungs compared to tumors, we next addressed the questions whether this corresponds to the situation in vivo and might represent a more general phenomenon in normal healthy tissues compared to tumor samples.
We used immunohistochemistry for Srsf1 on multiple normal mouse tissue samples (Figure 2a) and on human lung, pancreas, and colon cancer and melanoma sections (Figure 2b). In normal tissues, we detected strong nuclear immunoreactivity for Srsf1 in hair bulbs of the skin, the alveolar epithelium of the lung, cardiomyocytes and fibroblasts in the heart, mainly endocrine, but also exocrine cells of the pancreas, tubules and glomeruli of the kidney, follicles and stroma of the ovary, and some neurons in the brain. Hepatocytes of the liver showed weaker nuclear staining for Srsf1. In all these analyzed normal tissues, Srsf1 was rarely detectable in the nuclei of endothelial cells (green arrows in Figure 2a). To our knowledge, Srsf1 expression in such a variety of different normal mouse tissues has not been reported before. In contrast, in different tumors, i.e., lung, pancreas, and colon cancer, and in melanomas, intense nuclear SRSF1 staining was observed in tumor cells, which is in agreement with the reported overexpression of SRSF1 in different cancer types and its function as proto-oncogene [43,44]. In agreement with our results on isolated endothelial cells, we detected nuclear SRSF1 expression in most of the endothelial cells in tumors of different origin (arrows in Figure 2b).
Inducible Vascular-Specific Knockout of Wt1 Abolishes Nuclear Endothelial Srsf1 Expression
As it has been described that Wt1 regulates Srpk1 [19] and Srpk1 phosphorylates Srsf1, which results in nuclear import of the protein [30,31], we were interested in deciphering the role of Wt1 for the high nuclear Srsf1 expression in tumor endothelial cells. For this purpose, we used Tie2-CreERT2 mice crossed with Wt1 Lox/Lox animals as reported before [22], induced vessel-specific knockout of Wt1 by Tamoxifen injection, and implanted syngenic B16 melanoma or LLC1 lung cancer cells subcutaneously. Tie2-CreERT2; Wt1 Lox/Lox animals injected with vehicle (sunflower oil) and Tie2-CreERT2 transgenic animals injected with Tamoxifen before tumor cell implantation served as controls. Tumor samples were collected before complete regression occurs in this model [22] and analyzed for Srsf1 expression by immunohistochemistry. Comparable to human tumors, significant nuclear Srsf1 expression was detected in tumor and endothelial cells in control animals (Figure 3, left and middle), but Srsf1 was barely detectable in endothelial cells of tumors from Tie2-CreERT2; Wt1 Lox/Lox animals injected with Tamoxifen (green arrows in Figure 3, right). Expression of Srsf1 in B16 melanoma cells and LLC1 lung tumor cells was unaffected by vessel-specific knockout of Wt1. The overall aspect of reduced immunoreactivity for Srsf1 in tumors of Tie2-CreERT2; Wt1 Lox/Lox + Tamoxifen animals compared to controls might be attributed to increased lymphocyte infiltration, necrosis, and larger fibrotic areas with reduced number of tumor cells as reported recently [22].
Knockout of Wt1 in Tumor Endothelium Affects Srpk1 and Srsf1 Expression and Vegf Splicing
To determine quantitative expression differences for Wt1, Srpk1, and Srsf1 and to estimate differences in Vegf splicing, we used a comparable approach as mentioned above and isolated endothelial cells by magnetic sorting from tumors of Tie2-CreERT2; Wt1 Lox/Lox animals treated with Tamoxifen and from vehicle-injected controls. Quantitative RT-PCR analyses revealed significantly reduced Wt1 expression in endothelial cells from tumors of Tie2-CreERT2; Wt1 Lox/Lox mice injected with Tamoxifen versus vehicle-treated controls, to a comparable extent as reported earlier [22].
In addition, Srpk1 and Srsf1 RNA expression was lower in endothelial cells from tumors of Tie2-CreERT2; Wt1 Lox/Lox mice injected with Tamoxifen compared to controls (Figure 4a). Next, we determined Vegf isoform expression in endothelial cells from tumors of Tie2-CreERT2; Wt1 Lox/Lox mice injected with Tamoxifen compared to vehicle-treated controls by PCR as described [18]. The Vegf 188 isoform was barely detectable, while the Vegf 164 and Vegf 120 represented the major isoforms expressed in endothelial cells (Figure 4b,c). This is in agreement with the results presented in Figure 1. Knockout of Wt1 in endothelial cells in the tumors did not significantly affect the Vegf 188 and Vegf 164 isoforms, but induced a relative increase in the Vegf 120 isoform (Figure 4c). No significant differences were observed for the Vegf 164a and Vegf 164b isoforms (data not shown).
(From the Figure 4 legend: Vegf isoform PCR (..., Vegf 120: 89 bp) was determined on isolated endothelial cells (n = 3 each) as described [18]. (c) Quantification of relative band intensities revealed relatively higher Vegf 120 isoform levels, while Vegf 164 and Vegf 188 were not significantly affected by the knockout of Wt1. Data are expressed as means ± S.E.M. * p < 0.05, ** p < 0.01.)
To investigate whether these observed differences are specific for endothelial cells or reflect differences in the tumors of Tie2-CreERT2; Wt1 Lox/Lox mice injected with Tamoxifen compared to vehicle-treated controls, we isolated RNA from B16 melanoma and LLC1 tumors from the two groups of mice. For both tumor types, no significant expression differences for Wt1, Srpk1, and Srsf1 could be detected by quantitative RT-PCR from whole tumor RNA of Tie2-CreERT2; Wt1 Lox/Lox + Tamoxifen animals compared to controls (Figure 5a). In addition, no significant differences in Vegf 188, Vegf 164, and Vegf 120 isoform expression could be detected in B16 and LLC1 tumors of Tie2-CreERT2; Wt1 Lox/Lox + Tamoxifen animals and vehicle-treated controls (Figure 5b,c). As endothelial cells represent only approximately 6% of the cells in the investigated tumor types [22], it is not surprising that, although we observed significant differences for Wt1, Srpk1, Srsf1, and Vegf isoforms in isolated endothelial cells, this is not reflected in whole tumor samples. Furthermore, it supports the specificity of the findings for endothelial cells.
Increased relative expression of the Vegf 120 isoform has already been reported in Wt1-deficient hematopoietic progenitor cells [18]. As hematopoietic progenitor cells contribute to endothelial cells [45], our result of increased relative expression of the Vegf 120 isoform in endothelial cells with knockout of Wt1 is in agreement with this study. Interestingly, the increase in Vegf 120 in Wt1-deficient hematopoietic progenitor cells has been linked to apoptosis and reduced hematopoietic potential of these cells, which could be rescued by Vegf treatment [18]. We reported increased apoptosis also in endothelial cells with knockout of Wt1 [22]. Thus, our results in endothelial cells correspond also in this aspect to hematopoietic progenitor cells. As Vegf isoforms expression showed no significant differences in whole tumor RNA preparations, but increased Vegf 120 was detectable in isolated endothelial cells, our results support the hypothesis that mainly endogenous VEGF in endothelial cells is important to maintain endothelial cell homeostasis [21]. Relatively moderate changes in Vegf isoform distribution upon knockout of Wt1 and clear reduction in Srpk1 and Srsf1 expression might be explained by a multitude of splicing factors, which beside Srsf1 act in endothelial cells [46]. Nevertheless, as it has been shown that normal VEGF levels and isoform expression are required for normal embryonic development [7][8][9][10][11], it is not surprising that changes in Vegf isoform distribution upon knockout of Wt1 in endothelial cells contribute to endothelial cell apoptosis and vascular regression in our tumor models [22]. In addition, we cannot rule out the possibility that Vegf isoform protein level differences were more pronounced, which we could not determine due to the limited amount of material.
Wt1 Activates Srpk1 and Srsf1 in Endothelial Cells
As WT1 represses SRPK1 in podocytes [19], but we observed increases in Srpk1 and Srsf1 in tumor endothelial cells with high Wt1 expression compared to lung endothelial cells and a decrease of Srpk1 and Srsf1 expression upon endothelial-specific knockout of Wt1, we investigated whether Wt1 might be an activator instead of a repressor of Srkp1 and Srsf1 promoter activity in endothelial cells. For this purpose, we transiently co-transfected Srpk1 or Srsf1 promoter constructs [19,28] in the pGl3 basic luciferase reporter vector together with WT1(−KTS) or WT1(+KTS) expression constructs in C166 endothelial cells. These two WT1 variants differ by the presence/absence of three amino acids (KTS) in the zinc finger domain of the molecule. SRPK1 and SRSF1 promoter activity was stimulated by WT1(−KTS). In contrast, WT1(+KTS), which has a role in posttranscriptional RNA processing and pre-mRNA splicing rather than transcriptional regulation [47,48], did not significantly change promoter activities (Figure 6a-d). A SRPK1 promoter construct with deletion of the identified WT1-binding site [19] showed higher basal activity compared to the wild-type promoter construct when transfected in C166 endothelial cells. This might indicate that besides WT1 other transcription factors, which act as repressors bind to this region. Co-transfection of the SRPK1 promoter construct with deletion of the identified binding site [19] with WT1(−KTS) or WT1(+KTS) expression constructs in C166 endothelial cells abolished activation of the promoter construct ( Figure 6b). Interestingly, the SRSF1 promoter contained multiple repetitions of known WT1-binding elements (ggagg) (Figure 6e). Deletion of this predicted WT1-binding site in the SRSF1 promoter construct resulted in higher basal activity, but abolished activation by WT1(−KTS) ( Figure 6d). As physical interaction of WT1 with the SRPK1 promoter has been shown already [19], we focused for chromatin immunoprecipitation assays (CHIP) only on the SRSF1 locus. We used a rabbit monoclonal antibody against WT1, which confirmed direct binding of WT1 to the SRSF1 promoter, but not to 3 UTR sequence of SRSF1. An antibody against Acetyl-histone 3 and input DNA served as positive controls and normal rabbit serum as negative control (Figure 6e-g).
Inhibition of SRPK by WT1 in podocytes, but activation in endothelial cells, is not surprising as it is well known that WT1 might act as activator or repressor of transcription in different cell types (reviewed in [49,50]). Co-factors that might be involved in this differential regulation in podocytes and endothelial cells remain to be identified.
[19] in the presence of WT1(−KTS) or WT1(+KTS) expression constructs. Transient transfections were performed using C166 cells (n = 12 each). The promoterless luciferase expression construct (pGl3basic) served as a negative control. Co-transfected beta-Galactosidase was used to normalize for differences in transfection efficiencies. (b) Relative luciferase activity of a SRPK1 promoter reporter construct with deletion of the identified WT1-binding site (∆WTB) in the presence of WT1(−KTS) or WT1(+KTS) expression constructs (n = 12 each). (c) The published SRSF1 promoter construct [28] in pGl3basic was co-transfected with WT1(−KTS) and WT1(+KTS) expression constructs in C166 endothelial cells (n = 12). Beta-Galactosidase served to normalize for differences in transfection efficiency. (d) ∆WTB indicates reporter constructs with deletion of the predicted WT1-binding site in the SRSF1 promoter construct. Transfection experiments were performed as in (c) (n = 12). (e) Schematic representation of the putative WT1-binding site in the SRSF1 promoter. Positions of the cloned promoter relative to the transcription start site, the position and sequence of the putative WT1-binding site and positions of the oligonucleotides used for CHIP analyses are indicated. For the promoter-deletion construct (d), the indicated WT1-binding site was removed from the promoter reporter construct. (f) Chromatin immunoprecipitation (ChIP, n = 4) was performed using a rabbit monoclonal antibody against WT1 or anti-acetylhistone H3 antibody as positive control. Normal rabbit serum served as a negative control. Input DNA was used as additional positive control for quantitative PCRs on the SRSF1 promoter and the respective 3 UTR sequence. (g) Representative agarose gel photographs of semi-quantitative ChIP PCR experiments for the SRSF1 promoter sequence (upper picture) and the respective (3 UTR). Data are expressed as means ± S.E.M. ** indicates p < 0.01.
Conclusions
We show here that Wt1, Srpk1, Srsf1, and the angiogenic Vegf 164a isoform are highly expressed in tumor endothelial cells compared to normal lung endothelium. Knockout of Wt1 in endothelial cells reduces Srpk1 and Srsf1 expression and induces a shift towards the Vegf 120 isoform. Wt1 acts as an activator instead of as a repressor of Srpk1 in endothelial cells. Wt1 double secures VEGF splicing as it directly activates Srpk1 and Srsf1. Inhibition of Srpk1 and Srsf1, and alterations in Vegf splicing, which induce tumor endothelial cell apoptosis, might contribute to the antitumor activity upon targeting Wt1. | 2019-01-22T22:32:58.576Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "5e610faab19b6411c43a190a93e37d7796f1f23c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4409/8/1/41/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "104c774510032c5cad65cc93d9e4caddd6b4fb09",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
266672170 | pes2o/s2orc | v3-fos-license | Combining a deep learning model with clinical data better predicts hepatocellular carcinoma behavior following surgery
Hepatocellular carcinoma (HCC) is among the most common cancers worldwide, and tumor recurrence following liver resection or transplantation is one of the highest contributors to mortality in HCC patients after surgery. Using artificial intelligence (AI), we developed an interdisciplinary model to predict HCC recurrence and patient survival following surgery. We collected whole-slide H&E images, clinical variables, and follow-up data from 300 patients with HCC who underwent transplant and 169 patients who underwent resection at the Cleveland Clinic. A deep learning model was trained to predict recurrence-free survival (RFS) and disease-specific survival (DSS) from the H&E-stained slides. Repeated cross-validation splits were used to compute robust C-index estimates, and the results were compared to those obtained by fitting a Cox proportional hazard model using only clinical variables. While the deep learning model alone was predictive of recurrence and survival among patients in both cohorts, integrating the clinical and histologic models significantly increased the C-index in each cohort. In every subgroup analyzed, we found that a combined clinical and deep learning model better predicted post-surgical outcome in HCC patients compared to either approach independently.
Introduction
Hepatocellular carcinoma (HCC) represents the third leading cause of cancer-related death worldwide.1,2 Surgical treatment varies: liver transplantation is the primary intervention for cirrhotic patients with early-stage HCC,3,4 while surgical resection is the preferred treatment for patients with preserved underlying liver function.[11] Various clinical and pathologic features have been shown to predict HCC recurrence and patient prognosis following liver resection or transplantation.[14] Our methods to stratify the risk of HCC recurrence have evolved over time, beginning with a landmark paper defining the Milan criteria, a system based solely on the pre-operative radiographic measurement of tumor extent, which has remained the benchmark for assessing transplant suitability in patients with HCC.15 Alternative prognostic models, such as HALT-HCC,16 have been proposed that are based exclusively on clinical variables such as AFP level. Purely histologic prognostic models have also been developed, such as the Recurrence Risk Assessment Score (RRAS).17 More recently, hybrid models have been developed that integrate variables from multiple disciplines to better predict HCC recurrence; the RETREAT score18 is one example, combining both clinical and pathologic features. These stepwise advancements in HCC prognostication have enabled us to continuously improve post-surgical surveillance strategies and identify patients who would benefit from adjuvant therapy.
Artificial intelligence (AI) and machine learning represent the next step in modeling HCC outcome. AI is being increasingly utilized in modern practice,[19][20][21] and computational approaches to prognostic modeling have shown tremendous promise. We describe in this study an AI approach that integrates clinical variables with a deep learning model to predict tumor behavior in HCC patients who underwent liver resection or transplantation.
Patients and samples
We retrospectively identified patients with HCC who underwent liver transplantation or surgical resection at the Cleveland Clinic during a 17-year period from 2002 to 2018. Demographics, underlying liver disease, history of locoregional therapy, pre-operative serum AFP level, and follow-up data were obtained from the electronic medical records. Pre-operative imaging was reviewed. The imaging study performed most recently prior to any intervention (either chemoembolization or surgery if no locoregional therapy was performed) was reviewed. If multiple imaging modalities were obtained, MRI was preferentially reviewed over CT scans. In the resection cohort, between 1 and 3 representative digital slides of H&E-stained sections were available for each HCC case (svs format, 40× magnification, 385 slides in total from 169 patients). In the transplant cohort, 1 digital slide of an H&E-stained section was available for each HCC case (svs format, 40× magnification, 300 slides in total from 300 patients). The use of patient samples from the Cleveland Clinic was approved by the institutional review board.
Convolutional neural networks for predicting patient survival
We used a deep-learning algorithm called "SCHMOWDER",21 which was specifically designed for the processing of whole-slide images (WSI). SCHMOWDER automatically identifies very localized survival-related patterns on slides and calculates a risk score for each WSI analyzed in 3 successive steps: a pre-processing step, a tile-scoring step, and a prediction step. First, segmentation is performed using a UNet neural network to separate tissue from background on the WSI. The background is discarded and the tissue area is divided into small squares called "tiles" that are 112×112 micrometers in size (224 pixels×224 pixels). 2048 features are then extracted from these tiles with a convolutional neural network, pretrained using the self-supervised learning algorithm MoCo v2, described in previous work by Dehaene et al.22 Those 2048 features per tile are then fed into the tile-scoring and prediction steps. Whenever multiple slides from the same patient are available, the model is applied independently to every slide and the predictions are averaged to obtain a patient-level risk score.
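A minimal sketch of this tile-then-aggregate pattern is shown below. It is not the SCHMOWDER implementation: an ImageNet-pretrained ResNet50 stands in for the MoCo v2 self-supervised encoder, and `tile_scorer` is a hypothetical per-tile scoring function; only the overall flow (tiles → 2048-dimensional features → tile scores → slide/patient average) follows the description above.

```python
# Illustrative sketch of the tile -> feature -> patient-score pipeline described above.
# NOT the SCHMOWDER code: an ImageNet-pretrained ResNet50 stands in for the MoCo v2
# encoder, and `tile_scorer` is a hypothetical per-tile scoring module.
import torch
import torchvision.models as models
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Feature extractor: ResNet50 with its classification head removed (2048-d output).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.ToTensor(),                      # tiles are 224x224 RGB crops
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def tile_features(tiles):
    """tiles: list of 224x224 PIL images from one slide -> (n_tiles, 2048) tensor."""
    batch = torch.stack([preprocess(t) for t in tiles]).to(device)
    return backbone(batch).cpu()

def patient_risk(per_slide_features, tile_scorer):
    """Average the slide-level risks when a patient has several slides."""
    slide_risks = [tile_scorer(f).mean().item() for f in per_slide_features]
    return sum(slide_risks) / len(slide_risks)
```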
Tile interpretability
The contributions of individual tiles to the final model prediction were computed using Shapley values.23,24 These are naturally suited to our histology pipeline and were adapted as follows. A slide can be viewed as a set of N tiles, and we consider every possible way of subsampling those. For each subsampling, we obtain a subset of n tiles (n varying from 1 to N), and we compute the prediction of the model for this subset. In the framework of Shapley values, the contribution of a given tile i is defined as the average difference between the predictions obtained from subsets containing i and those obtained from subsets that do not contain i. This exact formula is intractable, since there are 2^N possible subsets, with N as large as 50,000. In practice, we sample a maximum of 10,000 random subsets to obtain a robust estimate. 400 tiles (200 from the resection cohort and 200 from the transplant cohort) predictive of high risk and low risk were extracted and reviewed blindly by an expert hepatobiliary pathologist (DR) to assess for the presence of specific histologic features in tumor or non-tumoral tissue. The qualitative variables of high predictive value were compared using Z-tests of proportions, and the Holm-Sidak procedure was used to correct for multiple testing.
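As a rough illustration of this subset-sampling estimate (a sketch only; `model_predict`, the subset-inclusion probability, and the number of samples are assumptions, not the authors' implementation), the per-tile contribution can be approximated as the average prediction over sampled subsets containing the tile minus the average over subsets excluding it:

```python
# Monte Carlo approximation of per-tile contributions following the definition above:
# average model prediction over subsets containing tile i minus the average over
# subsets not containing it. `model_predict` is a placeholder risk-scoring function.
import numpy as np

def tile_contributions(tile_feats, model_predict, n_subsets=10_000, seed=0):
    """tile_feats: (N, d) array of tile features for one slide -> (N,) contributions."""
    rng = np.random.default_rng(seed)
    n_tiles = tile_feats.shape[0]
    sum_in = np.zeros(n_tiles);  cnt_in = np.zeros(n_tiles)
    sum_out = np.zeros(n_tiles); cnt_out = np.zeros(n_tiles)

    for _ in range(n_subsets):
        mask = rng.random(n_tiles) < 0.5          # random subset (each tile with prob 0.5)
        if not mask.any():
            continue
        pred = model_predict(tile_feats[mask])    # risk score for this subset of tiles
        sum_in[mask] += pred;   cnt_in[mask] += 1
        sum_out[~mask] += pred; cnt_out[~mask] += 1

    mean_in = np.divide(sum_in, cnt_in, out=np.zeros(n_tiles), where=cnt_in > 0)
    mean_out = np.divide(sum_out, cnt_out, out=np.zeros(n_tiles), where=cnt_out > 0)
    return mean_in - mean_out                     # > 0: the tile pushes the risk score up
```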
Statistical analysis
Survival analyses were performed using univariate Cox proportional hazard models implemented in the lifelines package of Python. Log-rank tests were used to compare survival distributions between stratification subgroups. We used Harrell's concordance index (C-index) as a metric for assessing the predictive performance of our deep learning and clinical models. The C-index evaluates whether the ranking of the model's predictions is consistent with the ranking of the survival times of patients. Because of censored data, it is possible to rank 2 patients i and j with survival times ti and tj only if (i) ti > tj and patient j has had an event (death from disease or recurrence in our case), or (ii) ti < tj and patient i has had an event.
Such a pair of patients is called "admissible," and the pair is considered correctly classified by the model ("concordant") if their risk scores ri and rj satisfy ri > rj when ti < tj (the risk should be higher for the patient with the shorter survival) and ri < rj when ti > tj. (Abbreviations: HCV, hepatitis C virus; HBV, hepatitis B virus; NAFLD, non-alcoholic fatty liver disease; NASH, non-alcoholic steatohepatitis; AFP, alpha-fetoprotein; RFS, recurrence-free survival.)
The C-index is defined through the following formula: C-index = number of concordant pairs / number of admissible pairs. A C-index of 0.5 indicates a random performance, while a C-index of 1 indicates a perfect concordance between predictions and observations. These results were validated on the discovery data set with the following cross-validation strategy: 5 stratified folds with 5 repeats. Folds were stratified based on censoring. C-indexes reported here are the average over the 25 folds. p-values were computed using bootstrap: patients were randomly sampled with replacement 10,000 times and the cross-validation average was computed for every sample, after which a Z-test was performed on the bootstrapped C-index difference. To assess the statistical significance of the stratification between high-risk and low-risk subgroups, we used a log-rank test as implemented in the Python library lifelines.
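The definition translates directly into code. The sketch below counts admissible and concordant pairs exactly as stated (ties in time or in risk are simply skipped, a simplification), and adds a bare-bones bootstrap Z statistic for comparing two models; it is illustrative rather than the study's exact evaluation script, which additionally averaged over the repeated cross-validation folds.

```python
# C-index exactly as defined above: concordant pairs / admissible pairs.
# Convention: a higher risk score should correspond to a shorter survival time.
import numpy as np

def c_index(times, events, risks):
    times, events, risks = map(np.asarray, (times, events, risks))
    concordant = admissible = 0
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            a, b = (i, j) if times[i] < times[j] else (j, i)   # a = shorter survival
            if times[a] == times[b] or events[a] == 0:
                continue                                       # pair not admissible
            admissible += 1
            if risks[a] > risks[b]:
                concordant += 1
    return concordant / admissible if admissible else float("nan")

def bootstrap_z(times, events, risks_a, risks_b, n_boot=10_000, seed=0):
    """Z statistic for the C-index difference between two risk models."""
    times, events = np.asarray(times), np.asarray(events)
    risks_a, risks_b = np.asarray(risks_a), np.asarray(risks_b)
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(len(times), size=len(times), replace=True)
        diffs[b] = (c_index(times[idx], events[idx], risks_a[idx])
                    - c_index(times[idx], events[idx], risks_b[idx]))
    return diffs.mean() / diffs.std(ddof=1)
```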
Patient characteristics
We identified 300 patients with HCC who underwent liver transplantation and 169 patients who underwent liver resection. Patient demographics are summarized in Table 1. Approximately half (48%) of transplanted patients received pre-operative locoregional therapy, which primarily consisted of chemoembolization. None of the patients who underwent resection received pre-operative therapy. Nearly half (44.8%) of patients in the transplant cohort had multiple tumors on pre-operative imaging, compared to 34% in the resection cohort. Pre-operative alpha-fetoprotein (AFP) levels were available for all 300 patients in the transplant cohort, with a mean AFP of 63.1 ng/mL (median of 9.5 ng/mL). AFP levels were … Outcome data were available for all patients, with a median follow-up of 7 and 3 years for the transplant and resection cohorts, respectively. At the end of the follow-up period, 44% of transplanted patients and 41% of resected patients had died. Patient deaths were attributable to HCC recurrence at a rate of 27% in the transplant cohort and 47% in the resection cohort. More than half of patients in each cohort remained alive at the end of follow-up. While 96% of surviving post-transplant patients showed no evidence of recurrent HCC, only 52% of surviving patients in the resection cohort were HCC-free.
Model development
In both the resection and transplant cohorts, we extracted a maximum of 20,000 randomly selected tiles (small image patches of 224×224 pixels) from each available WSI (out of a maximum of 65,042) and used a pretrained convolutional neural network to extract relevant features from each tile before training our models. To delineate the tumor area on H&E slides from the 2 Cleveland Clinic cohorts, we applied a tumor detection model that had been trained on previously annotated slides from the Mondor cohort.21 This tumor detection model is a ResNet50 neural network, trained to distinguish tiles containing tumors from those containing non-tumoral tissue. This detection model was trained on 240,000 tiles and tested on 60,000 tiles. Its performance was measured by the area under the receiver operating characteristic curve (ROC-AUC), which evaluates the capacity of the classifier to discriminate between 2 classes (ROC-AUC=0.5 for a random classifier, ROC-AUC=1 for a perfect classifier). The performance of the model on the test set was 0.98. This was then applied to unannotated slides from the Cleveland Clinic to separate tumor from non-tumoral areas.
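For reference, the ROC-AUC used to report the tile classifier's 0.98 test performance is the standard metric available in scikit-learn; the tiny example below, with made-up labels and probabilities, simply shows how such a score is computed.

```python
# Computing ROC-AUC for a binary tumor-vs-non-tumor tile classifier (scikit-learn).
# The labels and predicted probabilities below are toy placeholders, not study data.
from sklearn.metrics import roc_auc_score

y_true = [1, 1, 0, 0, 1, 0, 1, 0]                            # 1 = tumor tile, 0 = non-tumoral
y_prob = [0.93, 0.81, 0.12, 0.35, 0.67, 0.08, 0.88, 0.41]    # classifier probabilities

auc = roc_auc_score(y_true, y_prob)                          # 0.5 = random, 1.0 = perfect
print(f"tile-level ROC-AUC = {auc:.2f}")
```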
The survival prediction model (Fig. 1A) was first trained with cross-validation for 20 epochs on a cohort of patients treated by resection at Henri Mondor hospital to predict overall survival, as described by Saillard et al.21 (C-index=0.78, std 0.07). Then, the model was fine-tuned for 10 epochs separately on each of the 2 Cleveland Clinic cohorts to predict either recurrence-free survival (RFS) or disease-specific survival (DSS). For both pretraining and fine-tuning, a smooth C-index loss23 was used as a training objective.
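A smooth C-index loss replaces the hard concordance indicator with a differentiable surrogate so that it can be minimized by gradient descent. The sketch below is one common way to write such a surrogate in PyTorch; the temperature value and the exact masking are assumptions, not the cited reference implementation.

```python
# A differentiable ("smooth") C-index surrogate: the concordance indicator over
# admissible pairs is relaxed with a sigmoid. Sketch only; temperature and masking
# details are assumptions, not the cited implementation.
import torch

def smooth_cindex_loss(risk, time, event, temperature=0.1):
    """risk, time, event: 1-D tensors for a batch of patients (event: 1 = observed)."""
    # Pair (i, j) is admissible if time_i < time_j and patient i had an event.
    shorter = time.unsqueeze(1) < time.unsqueeze(0)          # [i, j] -> t_i < t_j
    admissible = shorter & event.unsqueeze(1).bool()
    if not admissible.any():                                 # degenerate batch
        return torch.zeros((), requires_grad=True)
    # Concordant if risk_i > risk_j; relax the 0/1 indicator with a sigmoid.
    relaxed = torch.sigmoid((risk.unsqueeze(1) - risk.unsqueeze(0)) / temperature)
    return 1.0 - relaxed[admissible].mean()                  # minimise -> maximise C-index
```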
We assessed the discriminatory power of our deep learning model for predicting RFS or DSS by cross-validation. On the resection cohort, this deep learning model was predictive of both RFS (average C-index of 0.61) and DSS (average C-index of 0.72) (Fig. 2). To compare the performance of this model to a separate model considering only clinical variables, we performed univariate and multivariate analyses of the collected clinical data. The presence or absence of microvascular invasion was the most predictive variable overall, and the multivariate Cox regression of clinical data showed similar performance to the deep learning model in both RFS and DSS, with C-indexes of 0.62 and 0.72, respectively. However, the integration of the deep learning model with the clinical data outperformed both modalities alone. This was done by averaging the predictions of the deep learning model and the Cox regression of clinical data (Fig. 1B). This combined clinical and histologic model reached a C-index of 0.64 and 0.77 when predicting RFS and DSS, respectively, on the resection cohort.
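The combination step itself is very simple. In the sketch below the two sets of risk predictions are standardized before being averaged; the standardization is an assumption made here to put the two scores on comparable scales, since the text only states that the predictions were averaged.

```python
# Combining the histologic (deep learning) and clinical (Cox) risk scores by averaging.
# Z-scoring each model's output first is an assumption for scale comparability;
# the text only says the two sets of predictions were averaged.
import numpy as np

def combined_risk(deep_scores, cox_scores):
    z = lambda x: (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    return (z(deep_scores) + z(cox_scores)) / 2.0
```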
In the transplant cohort, applying the separate histologic and clinical models, as well as the combined model, showed good performance on both target outcomes. The deep learning model reached an average C-index of 0.72 for RFS and 0.73 for DSS (Fig. 2). The multivariate Cox regression of clinical variables outperformed every univariate prediction for both RFS and DSS (average C-index of 0.72 and 0.71, respectively). However, the combined clinical and histologic model significantly outperformed both the multivariate clinical and deep learning histologic models independently, reaching a C-index of 0.76 for RFS and 0.77 for DSS. This combined model performed better on the subset of transplant patients who did not receive locoregional therapy (C-index of 0.83 and 0.87 for RFS and DSS, respectively) but still maintained good predictive performance even when locoregional therapy was administered prior to transplantation (C-index of 0.71 and 0.72 for RFS and DSS, respectively) (Supplementary Fig 1).
In order to compare our results with an independent prognostic system, we also calculated the RETREAT score18 for all cases in our transplant cohort. Since the RETREAT system uses recurrence as an endpoint rather than survival, only RFS was considered. The RETREAT score reached a C-index of 0.69 to predict RFS, while our combined model showed significantly better predictive performance (C-index of 0.76, p=.03).
As the output of our model is a continuous score, we stratified the population into high-risk and low-risk groups based on the score assigned to each patient. The median score of each model was used as the threshold for stratifying patients into these 2 subgroups. The clinical, histologic, and combined models were all able to separate the population groups accurately. In the resection cohort, the combined model stratified the population with hazard ratios (HR) of 2.00 (p-value=.0059) to predict RFS and 5.55 (p-value=4×10−6) to predict DSS (Figs. 3A, 4A). In the transplant cohort, the HR was 4.17 (p-value=2×10−5) to predict RFS and 4.73 (p-value=8×10−5) to predict DSS (Figs. 3B, 4B). Of note, this combined model was also able to accurately stratify the subgroups of transplanted patients who did or did not receive locoregional therapy (Supplementary Fig 2).
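The median-split stratification and the associated tests map directly onto lifelines. The sketch below uses synthetic risk scores and follow-up times (the arrays are placeholders, not study data) to show the pattern: split at the median risk, compare the two groups with a log-rank test, and estimate the hazard ratio from a one-covariate Cox model.

```python
# Median-split stratification, log-rank comparison and hazard ratio with lifelines.
# The risk/time/event arrays are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
risk = rng.normal(size=60)                                        # model risk scores
time = rng.exponential(scale=36, size=60) * np.exp(-0.7 * risk)   # higher risk -> shorter time
event = rng.integers(0, 2, size=60)                               # 1 = recurrence/death observed

high = risk >= np.median(risk)                                    # high-risk group mask

lr = logrank_test(time[high], time[~high],
                  event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank p-value:", lr.p_value)

df = pd.DataFrame({"time": time, "event": event, "high_risk": high.astype(int)})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("HR (high vs low risk):", cph.hazard_ratios_["high_risk"])
```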
In order to better understand the deep learning model's risk assessment criteria, we extracted the 400 most predictive tiles (100 tiles associated with high-risk scores and 100 associated with low-risk scores from each cohort) (Fig. 5). These tiles were blindly reviewed by an expert hepatobiliary pathologist (DR), who documented the presence or absence of fifteen histologic features in tumor areas and five features in non-tumoral areas (Tables 2 and 3). The histologic parameters identified by the deep learning model as most predictive of HCC recurrence or death were the presence of macronucleoli (P=1.4×10−3), a nuclear-to-cytoplasmic ratio greater than 50% (P=1.1×10−2), significant nuclear pleomorphism (P=9.4×10−4), and necrosis (P=2.3×10−5). The model also determined that the presence of lymphocytic inflammation was a low-risk feature, both within areas of tumor (P=1.7×10−9) and in non-tumoral tissue (P=1.2×10−4). Despite the preprocessing, some artifacts (like tissue folding) remained present in our dataset. To assess whether these had any impact on the model's predictions, the presence of artifacts was also annotated by the pathologist. Of the 400 tiles analyzed, 21 had artifacts, and they were equally present in high-risk and low-risk tiles (12 and 9 tiles, respectively; p-value non-significant), indicating that tissue artifacts had no impact on the model's predictions.
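The feature comparisons above correspond to standard two-proportion Z-tests with a Holm-Sidak correction, both available in statsmodels. The counts in the sketch below are invented placeholders used only to show the call pattern.

```python
# Two-proportion Z-tests on feature frequencies in high- vs low-risk tiles, followed
# by Holm-Sidak correction across features (statsmodels). Counts are placeholders.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

# feature -> (count in 200 high-risk tiles, count in 200 low-risk tiles)
features = {"macronucleoli": (70, 35),
            "necrosis": (55, 12),
            "lymphocytic inflammation": (20, 80)}

pvals = []
for name, (hi, lo) in features.items():
    _, p = proportions_ztest(count=np.array([hi, lo]), nobs=np.array([200, 200]))
    pvals.append(p)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
for name, p, r in zip(features, p_adj, reject):
    print(f"{name}: adjusted p = {p:.2e}, significant = {r}")
```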
Finally, in each cohort, we applied the UMAP (Uniform Manifold Approximation and Projection) algorithm to visualize the similarities and differences between high- and low-risk tiles (Fig. 6, Supplementary Fig 3). We used the previously characterized set of 400 tiles extracted from both cohorts. This set was augmented with a random selection of tiles (1000 per cohort) covering the entire distribution of predicted risks. UMAP was fitted on the features of the resulting set of 2400 tiles to obtain a common representation for resections and transplants. A clustering algorithm (KMeans) was then applied to divide tiles into 5 clusters, corresponding to various histologic features. In resections, low-risk tiles are mostly located in cluster 0, which is characterized by a significant lymphocytic infiltrate, while low-risk tiles in the transplant cohort are much more morphologically diverse. Most tumor tiles in both cohorts belong to clusters 3 and 4, with the latter containing the high-risk histologic features of macronucleoli, nuclear pleomorphism, and high nuclear-to-cytoplasmic ratio. Some high-risk tiles from the transplant cohort are also located in cluster 1, consisting mainly of necrotic areas.
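This visualization step reduces to two library calls. The sketch below uses a random placeholder matrix in place of the real tile features, with the umap-learn and scikit-learn packages; it illustrates the pattern rather than the paper's plotting code.

```python
# UMAP embedding and KMeans clustering of tile features, as in the visualization above.
# `tile_feats` is a random placeholder for the (n_tiles, 2048) feature matrix.
import numpy as np
import umap                           # from the umap-learn package
from sklearn.cluster import KMeans

tile_feats = np.random.default_rng(0).normal(size=(2400, 2048))

embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(tile_feats)  # 2-D coordinates
clusters = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(tile_feats)

# `embedding` gives the coordinates for the scatter plot; `clusters` assigns each tile
# to one of 5 morphology clusters that can then be inspected against the reviewed tiles.
```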
Discussion
In this study, we built upon a previously developed deep learning model derived from HCC resection specimens.21 We expanded this model with the addition of clinical data and applied it to new cohorts, which included both HCC resection and transplant specimens. This combined model showed excellent predictive power, significantly improving upon the RETREAT score when assessing tumor recurrence in our transplant cohort.
Our deep learning model utilized a weakly supervised approach, and the specific features present in high-risk and low-risk tiles were blindly characterized by an expert hepatobiliary pathologist. Given that high-risk histologic features of HCC have been previously described in numerous other studies,17,[25][26][27][28][29] this step allows us to compare the model's criteria with documented high-risk histologic parameters as a quality check. Our model considered the presence of a high nuclear-to-cytoplasmic ratio, nuclear pleomorphism, macronucleoli, and necrosis as high-risk features, correlating with tumor recurrence and poor survival.[26][27][28][29] Our model also considered the presence of lymphocytic inflammation to be a low-risk feature.[34][35] Overall, the concordance between our histologic model's risk evaluation and the overall body of pathology literature on this topic helps to confirm the validity of the model's output.
Methods of determining pretransplant eligibility and posttransplant prognosis have evolved over the past few decades. Traditional eligibility systems involve only the radiographic measurement of tumor extent,15,36 while more recent pretransplant eligibility systems have begun to incorporate an interdisciplinary approach that includes both lab testing and biopsy results, such as the extended Toronto criteria.37 Models that predict posttransplant outcome have similarly begun to consider criteria spanning multiple disciplines. While well-documented histologic features such as microvascular invasion and tumor differentiation remain prognostically significant, outcome models using only clinical parameters have also been developed,16,38 and the recent RETREAT system combines both clinical and histologic variables to improve the prediction of HCC recurrence following transplantation. The adoption of AI in this field has taken a similar incremental approach,19,20,[39][40][41] with some predictive algorithms built solely on clinical variables and other deep learning models derived from whole-slide images alone. In this study, we show that the combination of both clinical and histologic inputs improved the predictive performance compared to either modality alone. We also demonstrated the expandability of this deep learning model, as the initial series was trained on HCC resections alone, yet showed excellent performance when applied to transplant patients in this study.
Finally, the addition of clinical data significantly increased the model's predictive performance, demonstrating that AI modeling can be improved not only through the addition of more cases, but also by incorporating clinical variables and reproducing the evolving interdisciplinary approaches seen in non-AI prognostic models. Future integration of radiographic imaging and genomic data may further optimize the efficacy of this model and help realize a more personalized approach to HCC surveillance after surgery.
Funding
The authors received no specific funding for this work.
Fig. 1. Flow chart showing the methodology of the study (A) and schematic of model development (B). Models were first developed in a series of patients with HCC treated by surgical resection at Henri Mondor University Hospital (Créteil, France) to predict overall survival. Transfer learning was then used to predict recurrence-free survival and disease-specific survival in 2 cohorts of patients with HCC treated by surgical resection and liver transplant, respectively, at Cleveland Clinic (Cleveland, Ohio, USA).
Fig. 2. Predictive performance of the RFS and DSS models in the resection (left) and transplant (right) cohorts, as measured by C-index. In both cohorts, the combined model outperforms both the separate clinical and histologic models. In the transplant cohort, the combined model also outperforms the RETREAT score. AFP = preoperative alpha-fetoprotein. p-values are as follows: *: <.05, +: <.1, −: >.1.
Fig. 5. A heatmap visualization is shown from the resection cohort (A) with examples of high-risk and low-risk tiles. The bottom set of images contains an example heatmap from the transplant cohort (B) with corresponding high-risk and low-risk tiles.
Fig. 6. UMAP visualization of 1200 tiles per cohort, including the 200 high-risk and low-risk tiles that were reviewed, and 1000 intermediate tiles. High-risk and low-risk tiles are visualized on both cohorts (A). KMeans clustering with 5 clusters was applied to the selected tiles (B). Cluster 0 is characterized by the presence of lymphocytes, cluster 1 by fibrosis, and cluster 2 by necrosis. Clusters 3 and 4 contain the majority of tumor tiles, with cluster 4 composed mainly of high-risk patterns such as macronucleoli, nuclear pleomorphism, and increased nuclear-to-cytoplasmic ratio. High-risk and low-risk tiles are indicated by larger circles and a darker shade.
Table 1
Patient characteristics.
Table 2
Histologic features associated with high or low risk (tumor tiles).
Table 3
Histologic features associated with high or low risk (non-tumor tiles). | 2023-12-31T16:08:46.344Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "8418d4091dc1b7ea6a39a6218dd17f831b53f4a7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jpi.2023.100360",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60e4c2e650a52e7dce684b62cf806c3d1608c4d6",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": []
} |
141282222 | pes2o/s2orc | v3-fos-license | Southern Strategies: Preaching, Prejudice, and Power
This paper considers how 'preaching prejudice' builds a constituency of like minds by marginalizing others (on grounds of race and sexuality, for example) and then instructs this constituency regarding political behavior. This discussion is part of a larger project on the construction of social values for political gain, but here I specifically draw attention to the historical racism marking much of Protestant messaging in the American South and to how this racism became the foundation for the Republican Southern Strategy from the 1970s onwards. In doing so, I take as a case study the well documented racism associated with the history of the Southern Baptist Convention. The SBC historical narrative exemplifies the racism which underpinned the Southern Strategy. This is interesting because the SBC continues to be a key political actor among social conservatives in the South. This historical narrative indicates how 'preaching prejudice' became a political tool fueling the racism of Nixon's campaign and seasoning subsequent campaigns. The paper then suggests that the most recent innovation of this familiar, well-honed political tool can be located in contemporary discourse on same-sex marriage.
Nixon's Southern Strategy
During the American Civil War, the Republican Party had been antislavery; unsurprisingly then, most postbellum white Southerners were Democrat. By the end of World War II, however, white Southerners became dissatisfied with Truman's plan, To Secure These Rights, which endorsed racial equality in employment, voting rights and desegregation in the armed forces. Some, 'Dixiecrats,' made various attempts at supporting third-party candidates but, after defeat, returned to the fold of the Democratic Party. According to historian Dan Carter (in Anderson 1998, A1), it was the race-fueled campaign of independent candidate George Wallace who "laid the foundation for the dominance of the Republican Party in American society through the manipulation of racial and social issues in the 1960s and 1970s."1 Political historian Merle Black marks the speed with which Southern voters, finding a viable alternative, shifted allegiances: "in 1952 the South was the most important example of a one-party political system in the United States: the Democratic Party claimed 77% of southern voters"; however, by the end of the 1960s, "fewer than 60% of southern voters were Democrat" and, in 2002, "Democrats claimed only 36% of the region's voters" (2004, 1004).
The key to this political realignment was the Republican Party's Southern Strategy: to build upon post-Kennedy civil rights discontentment and lure these voters "somewhere." That "somewhere" turned out to be the cultural revolution, a revolution that became the "culture war." The story of the Southern Strategy did not begin with Kevin Phillips, nor in the aftermath of the civil upheaval of the 1960s. The South had been marked for centuries as a place where the politics of othering engendered a sense of community: where those with power were well versed in rhetorical constructions of racial and moral difference which simultaneously erected an 'enemy' and posited an image of the South as a homogenous 'constituency' of the morally righteous.2
Here, I offer an account of the Southern Strategy that locates it within a broader historical context, and in doing so, maps a singular historical narrative about power and prejudice which continues to be articulated from Southern evangelical pulpits.
Powerful Preaching
Following the tradition of the Great Awakening, eighteenth-century Methodist circuit riders and Baptist missionaries of Northern Reformed Protestantism migrated to the American South to spread the good news of the possibility of a personal relationship with Jesus Christ. In the process of sharing this gospel from the North to the South, ministers became aware of cultural differences and began to articulate the gospel in a language and cadence that appealed to potential converts. For example, historians (Snay 1989; McKivigan and Snay 1998; Harvey 1998; Smith 1997) agree that evangelical missionaries articulated the gospel with language that justified slavery and glorified Southern culture. Nixon's political strategist, now turned historian of what he labels American Theocracy, Kevin Phillips (2006, 109) notes that: "Before the Baptists and Methodists could make evangelical religion dominate below the Mason-Dixon Line, they had to - and did - shed notions that were perceived as radical, such as opposition to slavery and enmity to social hierarchies."3 A half-century later, many mainline Protestant denominations split over the issue of slavery only to reunite during Reconstruction. However, the Southern Baptists, unable to endorse principles of racial equality, rejected the possibility of reconciliation with their Northern counterparts. The resulting Southern Baptist Convention (SBC) has come to represent the core of evangelical voters and, in turn, the ground troops of Republican politics (Guth et al. 2006; Lindsey and Hackett 2008).4 Historical accounts, such as Nancy Ammerman's Baptist Battles and Ellen Rosenburg's The Southern Baptists, offer critical narratives about the development of the SBC and its place in Southern culture and politics. In Religion and the American Civil War, Paul Harvey points to the role of Baptist preachers, who by the early 19th century were articulating a "Christian proslavery apologetic", arguing, for example, that God would use slavery to teach superior people to care for inferiors entrusted to them and to bring the gospel to the heathen (1998, 169). Such views became an "unchallenged orthodoxy among white southern evangelicals" (Harvey 1998, 170). After the Civil War, the shame of defeat at the hands of the Yankees became a tribulation to be endured until God revealed his plan for the exultation of the South. Southern Baptists wove together individual salvation and the conservation of the social order, so unlike Northern Protestant social reformers, they were certain God was on the side of the Confederacy. "Southern Baptists," writes Harvey, "preached the political ideology of white supremacy from the pulpit" (1998, 177). They constructed their position on slavery and Reconstruction as a battle between order and disorder, a defence of the natural order and religious liberty.5
Baptist leaders built a political constituency by calling upon church members to vote for politicians who would represent "the white man's interest" (Harvey 1998, 177). During Reconstruction, evangelicals as well as Southern politicians utilized Old Testament readings regarding the oppression of God's people as balm for the wounds of Southern pride. Despite occasional references to Christian slaves as 'brother' or 'sister,' "Southern Baptists never accepted African Americans as equals in their churches" (Harvey 1998, 170). As the SBC membership grew and as race remained a key component of Southern politics, this refusal solidified their political positioning as the voice of white Southern Christian politics. By the turn of the century, the Southern Baptist Convention, as representatives of the white evangelical church, articulated a distinctive Southern identity: "we are a different people, a different blood, a different climate, a different character, different customs, and we have largely different work to do in this world" (Harvey 1998, 178). Their religious affiliation provided a shared sense of identity based on shared morality. Phillips describes the SBC as the "state or 'established' Church of the South" that "thrives on the evangelical link between cultural domination and pursuit of membership growth" (Phillips 2006, 157). Oran Smith, academic historian and now president of a Family Research Council organization in South Carolina, observes: "Eventually, the notion of a Christian or even Baptist religion becomes a civic religion, a mentality so biding that Southern Baptists begin to think of themselves as the cultural majority with the goal not of rejecting society (as some small, sect-like religious conservatives have done), but of absorbing it. In such a world, Baptist clergy and lay leadership have no interest in taking stands against Southern cultural norms. They are not motivated to oppose the culture or appear unpatriotic about the region, for to a greater and greater extent, they are the culture, they are the region" (Smith 1997, 39).
Beliefs in Southern exceptionalism have deep roots in religious justifications of slavery and the postbellum rhetoric of a South that would 'rise again.' This "put a strong psychological imprint on its future" and, in turn, on the future of America (Phillips 2006, 150). As many Southern pioneers and farmer-preachers pushed west, they took with them conservative religious values and evangelical missionary zeal. Southern pioneers believed "that the religious and political destiny of our nation is to be decided in the West" and hoped to transform the wild frontier into a "Garden of the Lord," filled with churches and schools to provide for the West's moral, spiritual and civic health (PBS 2010). Commenting on this history, Phillips concludes that "by the 1830s ... evangelical Protestantism had won the soul count below the Mason-Dixon Line" and those settling in the West planted deep religious roots which have grown into the demographically traceable Southernization of the American West (2006, 167, 170). Mark Shibley correlates the growth of Southern-style religion across the U.S. directly with patterns of high levels of in-migration from the South in the late twentieth century (1991).
The patterns of regional spread of socially conservative Christianity are significant politically, as by the turn of the new century "religion was by far the strongest predictor" of Presidential electoral success (Norris and Inglehart 2004, 94).6
John Egerton's (1974) The Americanization of Dixie, published after Nixon's landslide in the South, pointed not just to the Americanization of the South; importantly, it also called attention to the Southernization of American religion.7 Much of the terrain of America has experienced Southern migration, and as evangelical Christians have spread across the South and West, political discourse came to resonate with the tones of Southern Christianity and to suggest the foundations of 'red state' political culture.
Political Preaching
Conventional accounts of conservative Christian political involvement maintain that after the Scopes trial, humiliated fundamentalists and Biblical literalists stepped away from political activism. However, more contemporary histories document the growth and rise of cultural influence of fundamentalist, holiness, Pentecostal, and evangelical churches and organizations (Williams 2010). "From the 1940s-1960s - in decades when segregation and civil-rights demonstrations were roiling in the South - the SBC saw the number of its adherents nearly double from just over five million to just under ten million" (Phillips 2006, 154).8 Phillips observes that "evangelical, fundamentalist and Pentecostal religion, far from evaporating or stagnating in backwater during the early twentieth century, seem to have been a gathering force, like an incoming tide" (2006, 115). Key to the historical narrative considered here is the expanding and maturing role of Southern evangelicals as political actors.
In many ways, Nixon's Southern Strategy capitalized on an already growing friendship between politicians and Southern evangelicals. Rev. Billy Graham counselled every President from Eisenhower to Clinton (Williams 2010; Lindsey 2007). In endorsing the Eisenhower campaign, he expressed a deep connection with the vice-presidential candidate Nixon and considered him to be a "sincere" and "splendid churchman" (Williams 2010, 25). However, Graham's moderate approach to politics, and civil rights particularly, dissatisfied many Southern Christians. In the aftermath of Nixon's presidency, Southern evangelicals were disappointed and somewhat ambivalent about the Republican Party. The election of the Democratic candidate Jimmy Carter in 1976 attests that many were tempted by this Southern Baptist school teacher (Williams 2010, 126-27). But President Carter did not live up to expectations as a Christian conservative cultural warrior. By the early 1980s, Christian social conservatives established a strong footing in the public square from which to articulate their theopolitical message more professionally. The development of grassroots political strategies that could 'use the pews' informed work inside the church walls as well as encouraging individual Christians to 'extend the pews', to look beyond the isolated congregation and be more politically engaged.
Briefly, I want to draw attention to three identifiable tools of evangelicals and other Christian conservatives - "speakability," theo-political pedagogy, and activism as "parapublic institutions" - in order to foster insight about the role conservative Christians play in setting the frame of Southern, and possibly American, political discourse. These examples are but three aspects of what William Connolly labels the "resonance machine" (Connolly 2008). First, "speakability" refers to the way in which issue framing enables those fluent in the discourse to participate in discussion, while those unable, or unwilling, to speak with the same nuance or technicality are unheard or disempowered.10 As noted, postbellum Southern sermons wove together feelings of despair and defeat with the Old Testament message of tribulation before coming into the Kingdom of God. Southern clerics defined the relationship between religion and politics by translating the political conflict into religious terms (Snay 1993). Confederate defeat was constructed as part of a grand design that God was working out for his people: "we" will rise again. Literal Biblical interpretations framed the socio-political situation, giving conservative Christians a particular language with which to interpret political events.
Similarly, with the contemporary cultural dominance of evangelicals in the South, and areas of Southernization, conservative Christians engage in political discourse framing it in Biblical tones. For example, references to "state's rights" have come to signify a postbellum condemnation of federal involvement in Southern affairs, e.g. desegregation, mandatory curriculum, welfare provision. Where white Southerners once had been largely a group of European immigrants needing and endorsing federal welfare programs as Democratic voters, with increasing numbers of Hispanic and African-American Southern citizens having similar socio-economic needs, welfare came to be equated with handouts.11 Opposition to federal programs reiterated a Reformed Protestant theology of self-help and individual responsibility (Manow 2004). Arguments for state's rights are bound theologically to an individual responsibility to God: social programs undermine individual responsibility and therefore one's relationship to God (Phillips 2006, 168). Against the backdrop of the urban riots of the late 1960s, Nixon played upon a fear of African-Americans and of the overreach of the federal government. Calls for law and order, state rights and opposition to school desegregation and bussing were Nixon's dog-whistle politics (Greenburg 2007; Brown 2004).12
Even now, conservative Christian Republican presidential contenders, such as Rick Perry and Mike Huckabee, articulate support for "state's rights" and in doing so draw upon a theologically based rejection of federal interventions to combat racial discrimination.
The articulation of a political problem through religious moral codes also enables Republican politicians to appropriate issues in order to gain the support of conservative Christians. For example, while it is commonplace for politicians to engage in 'dog-whistle politics,' in Tempting Faith, David Kuo, strategist for President George W. Bush, outlined exactly how "God Talk" enabled the political seduction of conservative Christian voters (2006). "God Talk" acts as a "surreptitious code inserted into their [politicians'] campaign speeches as a way to appeal to targeted evangelical voters without alienating non-evangelicals who were likely not to pick up on the code language" (Calfano et al. 2013). God Talk gives Republican politicians a speakability: coding and articulating political positions in a language accessible to conservative Christians and (intended to be) inaccessible to those not fluent in the discourse. Interpreting political moments through Biblical language was pioneered by Christian missionaries moving from North to South, honed by conservative Christian political actors since the Civil War, and appropriated by contemporary Republican politicians reliant upon Southern Christian conservative votes.
Second, evangelicals are called to share a theological message, and they have cultivated the professional production of this message, from reprinting Bibles for Confederate soldiers to contemporary multi-media industrial complexes such as the Christian Broadcasting Network. In an impressive array of ways, and with significant financial support, evangelists such as Pat Robertson and publications such as Christianity Today provide spiritual direction and interpretations of political and cultural events for millions of Americans (Herman 1998; Maddux 2010). The National Religious Broadcasters, established in 1944, lobbied the FCC for less regulation and more airtime for religious programming. Now the NRB brings together thousands of 'Christian Communicators' to "transform culture through the application of sound biblical teaching" (www.nrb.org). In 1974, Bill Bright, founder of Campus Crusade for Christ, and John Conlan sought to bring Christians into politics through Third Century Publishing, which produced such books as In the Spirit of '76: The Citizen's Guide to Politics. Third Century Publishing was a "new awakening of evangelical political consciousness" (Williams 2010, 122). Today, the most effective political outcome of this evangelical boom is the extensive distribution of Family Research Council 'voter guides' into every pulpit, pew and local congregation across America.13 Educating individual believers in political language and intervention techniques mirrors more highly sophisticated academic pursuits training young pastors, lawyers, politicians and lobbyists at Southern Christian institutions such as Bob Jones University and Liberty University.
Finally, in his research on church-state relations in Europe, Peter Katzenstein conceptualizes mainline churches not as interest groups but as "parapublic institutions" with a heightened status and investment in both public and private sectors (1987, 58-60). A similar duality of political privilege can be observed in U.S. church-state relations. For example, Joseph Gusfield's Symbolic Crusade outlines how the American temperance movement was one method by which abstinence from alcohol served as a symbol of white, Protestant cultural dominance over the increasingly multicultural and immigrant population of the United States in the post-World War I period (1963).14 Historian Mitchell Snay argues that since Reconstruction, Southern ministers have demanded to chart the political course when moral issues were at stake (Snay 1993). As I demonstrate elsewhere, contemporary examples of this can be evidenced in the theological and economic investment of defining and providing welfare (Norris and Inglehart 2004; Wilson 2009, 2013). As major stakeholders in the political economy of care, religious institutions frame political discussions regarding welfare in order to defend their investments in private/charitable provision of care.
Undoubtedly, those that employ these tools do not all sing from the same political or theological hymn sheet (Williams 2010; Lindsey 2007; Berlet 1995). Conflicts, differences and power struggles between individual elites, denominations and organizations mark the Christian Right (Dowland 2009). For example, while Billy Graham and the National Association of Evangelicals dined at the White House, fundamentalists such as Bob Jones, Jr., Jerry Falwell, and SBC elites differentiated themselves from Graham's ecumenical approach. In declaring the 'culture war,' most Christian conservative leaders acknowledged the need for what Francis Schaeffer termed a "co-belligerent politics," particularly on wedge issues such as abortion or homosexuality. Strategically targeted political alliances such as the Christian Coalition and the Moral Majority defined themselves through the use of these wedge issues against a shared political 'other,' broadly painted as secular culture.
Christianity Today executive editor and theologian Timothy George (1997) described co-belligerency as "an ecumenism of the trenches."15 In his excellent history of the 'family values agenda,' Seth Dowland observes that key to recent co-belligerent activism was the construction of a 'majority,' which they achieved "by convincing themselves that they represented a majority of Americans - and by convincing enough Americans that a liberal minority had launched a covert war on the family" (2009, 631). The co-belligerency strategy fuels coalitions developing between economic and social conservatives, from Ronald Reagan's 'New Right' to the current Christian Right/Tea Party coalition symbolized by Ralph Reed's 'Faith and Freedom' organization (Wilson and Burack 2012; Posner 2008). Co-belligerent politics is a rational, expedient choice to join forces temporarily to defeat a common enemy. While not a conspiracy of minds, but against enemies, the co-belligerent Christian Right now represents a significant political force setting the discursive frame of Republican, if not American, politics.16
Hollow Apologies and Renewed Southern Strategies
Alongside the development and maturation of this resonance machine, however, it became politically untenable to appear racist. In 2005, RNC chairman Ken Mehlman acknowledged the use of a Southern Strategy since 1968 and admitted that Republicans had used race as a wedge issue to win white Southern votes: "Some Republicans gave up on winning the African-American vote, looking the other way or trying to benefit politically from racial polarization.... I am here as Republican chairman to tell you we were wrong." In 2009, the RNC elected its first African-American chairperson, Michael Steele, and soon he, too, confirmed the use of the Southern Strategy: "For the last 40-plus years we had a 'Southern Strategy' that alienated many minority voters by focusing on the white male vote in the South. Well, guess what happened in 1992, folks, 'Bubba' went back home to the Democratic Party and voted for Bill Clinton." There are a number of observations to be made about these acknowledgments, but perhaps the most obvious is that they are not apologies.
Additionally, it is interesting to note who exactly was thrust onto the public stage to articulate this message. Steele was not successful in securing a second term in 2011, largely due to scandals regarding his approval of excessive expenses for limos, private jets, and a bondage-themed nightclub, as well as general incompetence. The media rhetoric surrounding Steele was about "spending," "sex scandals," and "strip clubs," and portrayed him as "smug," "poor" at his job, "dumb," and "out of his league." Despite Steele's election to the RNC leadership, after two years of providing a welcoming image for potential Republican African-Americans, the campaign to move him out of office tapped into racist stereotypes about African-American men. Moreover, there is substantial evidence of racism within the overlapping constituencies of the Republican Party, the Christian Right and the Tea Party (Wilson and Burack 2012). Racism, manifest in the birther movement and in references to President Obama as "aloof" and the First Lady as "uppity," continues to play a significant part in Republican and Christian Right discourse.
What has been noticeable over the last twenty years is the way in which the political tools used to marginalize, or 'other,' have been deployed against LGBT citizens. Antigay rhetoric has not replaced racism. However, the familiar political tool of preaching prejudice continues to ensure power in a more racially diverse, but overwhelmingly heterosexual, demographic. To begin to explore this point, consider the context of the 'apology' for the Southern Strategy. The original acknowledgement of the strategy was not articulated by an African-American man but by a white, middle-aged man working directly with President Bush: Ken Mehlman. Given his proximity to the President, and his history as a manager of the Bush-Cheney reelection campaign, his words might have indicated a new direction in Republican politics (Herbert 2005). However, upon leaving his post, Mehlman identified himself as a gay man and thus enabled some to dismiss the 'apology' as not reflective of the desires of the Republican leadership. As someone who had worked alongside Karl Rove to place anti-gay initiatives on election ballots in 2004 and 2006, Mehlman was vilified by gay activists who despised that such a homophobic campaign had been "run by one of the nation's worst closeted individuals" (Balcer 2011). Criticism also came from conservative Christians. Tony Perkins, president of the Family Research Council, clarified that Mehlman's coming out helped explain "the scandalous failure" of the Republican establishment to fight same-sex marriage (Zernike 2010). Robert Morrison, Senior Fellow for Policy Studies at the Family Research Council, challenged Mehlman's interpretation of the Southern Strategy, stating that it was not about using racism but about ensuring "electoral votes in the Solid South" (Mantyla 2010).
Steele's and Mehlman's interventions suggest a strategic shift in framing. While one interpretation is that Republicans were signaling an end to using racism as a political strategy, I think this is an inaccurate reading of events. Racism, blatant or subtle, will continue to win significant numbers of white votes for Republican candidates, but, as noted below, the culture war has engendered an additional political 'other' with the potential to build and solidify a larger constituency of diverse conservative religious groups at a much smaller political cost or risk of alienating potential allies. Employing the tools now familiar to conservative Christian politics (speakability, theopolitical pedagogy and political activism inside and outside religious institutions), the resonance machine seems to have prioritized constructing 'gays' as political enemies threatening American life. That does not imply that racism has lessened as a constituency-building tool. Instead, it is to acknowledge that Republican politics at the turn of the 21st century (re-)launched a strategic attack on one group of American citizens, and that their success in the South and other Southernized red states confirmed that, while some Republicans may articulate remorse for racist political strategies, most Republicans, conservative Christians in particular, advocated an anti-gay political strategy. The emergence of this strategy can be traced historically and culminated in legislation 'defending' heterosexual marriage. Such a narrative, outlined below, is not entirely distinct from Nixon's Southern Strategy and can be seen as a development, a nuancing, a contemporary redeployment.
As the father of Nixon's Southern Strategy, Phillips emphasized the need to be aware of precisely how many votes are crucial and from which groups of people. Even in 2002, former advisor to Republican Presidents and one-time Presidential candidate Pat Buchanan acknowledged that the Southern Strategy worked: with Republicans continuing to take about 60 percent of the white vote, and as long as this continues, few other votes are needed (2002). For some conservative Christians this may be unpalatable, or, given the continued growth of African-American and Hispanic evangelical churches/constituencies, it may be politically undesirable. Even the leaders of tea party organizations recognize the need to move away from a racist image (FreedomWorks 2010; Wilson and Burack 2012). However, just as the socio-political credibility of a Southern Strategy based on racial prejudice began to wane, the Christian Right, and in turn the Republican Party, had constructed a more modern political enemy that has paid substantial political dividends in the South and the Southernized U.S. The family values agenda, with its roots, language and cultural cadence, inspired conservative Christians to action most recently against same-sex marriage (SSM). Arguably, SSM is the jackpot of political issues: states' rights, endless pedagogical potential based on Old Testament theology, a speakability across denominations and a culturally palatable enemy to ensure the long-term political cohesion of the conservative Christian community.
Since the early days of Falwell's Moral Majority, 'gay rights' has served as a rallying cry to conservative Christians. Anita Bryant's 1977 campaign in Dade County against a measure prohibiting employment discrimination became a template for associating gay rights with some sort of risk to children and to the institution of the family. Theo-political discourse soon established gay rights as 'special rights' granted to minorities who threaten good Americans, and interpreted HIV and AIDS as the manifestation of that threat. The Homosexual Agenda, write Christian conservatives Alan Sears and Craig Osten, "whether you realized it or not ... affects your marriage, it threatens your children, and if we don't do something soon, it will drastically limit your religious liberty" (2003, 12). By 1992, Buchanan rallied the Christian grassroots of the Republican National Convention by declaring war: "There is a religious war going on in our country for the soul of America. It is a cultural war, as critical to the kind of nation we will one day be as was the Cold War itself." As Buchanan warned, the "struggle for the soul of America" was reaching a crescendo.
One year later, the Hawaii Supreme Court decision, Baehr v. Lewin, found it unconstitutional to deny gay and lesbian citizens the right to marriage. Conservative Christians constructed this decision as posing a significant danger to the family and, before the appeal was heard in Baehr, anti-gay marriage legislation began to spring up in other states. The regulation of marriage had traditionally been the domain of state government, an issue of states' rights. By September 1996, the Defense of Marriage Act had swept through Congress and was signed into law by President Clinton. As a pre-emptive strike, it sent a signal to states not to grant marriage licenses to same-sex couples. Completely at odds with the historical defence of states' rights, and in what Ball evidences as a "severe and harmful backlash," the threat of same-sex marriage warranted federal intervention to ensure that marriage remained exclusively heterosexual. Federal law could not recognize same-sex couples as married, and any state doing so would have that decision 'contained' within its own borders (2006, 1494). Barry Adam notes that DOMA successfully repositioned gay men and lesbians as "the enemy of the common folk" (2003, 269).
In 2003, the Massachusetts Supreme Court decided, in Goodridge v. Department of Public Health, that same-sex couples should be allowed to marry and directed the legislature to make this possible. The state's Catholic Bishops issued a statement referring to the court's decision as "a national tragedy" (Ball 2006, 1501). In the same year, the U.S. Supreme Court struck down state sodomy laws in Lawrence v. Texas. Given the inability to 'protect' themselves through sodomy laws and the possibility that states might be forced to recognize same-sex marriages under 'full faith and credit,' conservative Christians across the U.S. supported state constitutional amendments defining marriage as only a relationship between a man and a woman. This "two-part strategy" first pushed for a federal amendment to take away power from the states to decide marriage regulations, then pressed for state constitutional amendments banning same-sex marriage (Dao 2004; Leonard 2004).
The campaign against same-sex marriage, or more widely against gay and lesbian citizens, constructs homosexuality as a sin, as a threat to the family and to the nation. From Falwell famously blaming 'homosexuals' for the terrorist attacks of 9/11 to FRC directives in Culture Impact literature, the political discourse is framed to construct gay men and lesbians as 'other,' as a 'threat,' as un-American, and as a theologically clear 'enemy.' Those challenging such constructions with a language of civil rights or discrimination are rendered as threats themselves. For example, Family Research Council Culture Impact policy goals instruct evangelicals that homosexuality is a choice and therefore those who "misuse civil rights laws to protect homosexual conduct and gender identity disorder" are political enemies (Cureton 2011). Any other outcome would "mandate the employment of homosexuals in inappropriate occupations ... employers in the area of education and childcare would be required to hire homosexuals"; civil rights-based legislation would "destroy employer's rights to set dress and grooming standards for their employees ... that is culturally appropriate for the employee's biological sex"; civil rights would "pave the way for legalization of counterfeit same-sex 'marriage' ... forcing same-sex 'marriage' on every state in the union."17
According to the Family Research Council, evangelicals have "dual citizenship" and "dual commissions", defined as the familiar "Great Commission" to spread the "Good News" and the more recently articulated "Cultural Commission", in which "God delegated the development of culture and society to humankind ... to overcome evil with good", which includes "exerting a positive influence on public policy and government." As parapublic political actors, evangelical Christians are to be engaged not just in lobbying government but in participating in, leading and becoming government in order to ensure conservative Christian ideals are reflected in policy and law. The outcome of their efforts, and the wholesale endorsement of this rearticulated Southern Strategy by Republican politicians, is that 31 states have amendments defining marriage as between a man and a woman. Some have gone further than others. Including recent ballots in North Carolina, voters in 20 American states have ratified amendments to state constitutions banning recognition of all forms of relationship rights (i.e., marriage, civil unions, domestic partnerships, reciprocal benefits, etc.) for same-sex couples: Alabama, Arkansas, Florida, Georgia, Idaho, Kansas, Kentucky, Louisiana, Michigan, Nebraska, North Dakota, Ohio, Oklahoma, South Carolina, South Dakota, Texas, Utah, Virginia, and Wisconsin.18
Conclusion
The swift, professional, legal and cultural challenge in the wake of Baehr and Goodridge should not have been a surprise. Constituency building through a politics of othering has a long history in the Confederacy.19
Just as Phillips had pointed out in the late 1960s, political strategy is about "holding together the largest number of ethnic prejudices." Anti-gay prejudices hold together a large number, and in some states an overwhelming majority, of constituents. Just as Phillips acknowledged in the 1960s, and Buchanan restated in 2002, Republicans do not need the African-American vote. Likewise they do not need the gay vote. They are happy for Democrats to be seen as gay-friendly and to paint this as un-Christian or anti-religious freedom (Adam 2003, 274).20 These political tools have been honed over centuries of racism. They have become the tools not just of local preachers or Southern politicians but of a highly professional industry directed at a perceived cultural, political, and theological 'other.' In locating the resonance machine, Connolly focuses on the emergence of the New Right (2008). But the narrative and strategies deployed by conservative Christians draw upon a much deeper cultural history. As such, the political situation may be more historically entrenched, and more difficult to overcome, than imagined.

NOTES

1 George Wallace, according to Carter, had been "the master teacher and Richard Nixon and the Republican leadership that followed were his students."

2 Here I rely specifically on Norman Fairclough's (1989) analysis of how narratives about "the other" explicitly or implicitly construct narratives about "us/them" that create "perceptions of constituency" and "perceptions of enemies." See also Brewer (1999).

3 Phillips's reflections on the history of the Republican Party in relation to conservative Christianity are of particular interest given his historical proximity to events.

4 Phillips describes the SBC as "an eight-hundred-ton dinosaur in the parlor of American Protestantism, and over the last century the fastest-growing major church in the United States" (2006, 149).

5 Mitchell Snay (1993, 11) concurs that Southern clerics "essentially translated the political conflict into religious terms."

6 Also they note "committed evangelicals are far more likely to live in small towns or rural areas, especially in the Southern and Midwest."

7 See also Gregory (1998).

8 The SBC statement on the nation in 1968 led to enough controversy that it slowed growth, but by the 1990s surveys were showing growth in the west and in southern parts of mid-western states.

9 Williams (2010) comments on the popularity of evangelist Billy Graham, particularly the involvement of media mogul William Randolph Hearst. Graham was careful not to | 2019-05-01T13:07:18.088Z | 2013-11-01T00:00:00.000 | {
"year": 2013,
"sha1": "cf3718fed3d5d98e1f405a6c7a29fd17e37bbf49",
"oa_license": "CCBYNCSA",
"oa_url": "https://journals.shareok.org/arp/article/download/1016/985",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "cf3718fed3d5d98e1f405a6c7a29fd17e37bbf49",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
21126765 | pes2o/s2orc | v3-fos-license | Annual cycle of ozone at and above the tropical tropopause: observations versus simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS)
Multi-annual simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) were conducted to study the seasonality of O3 within the stratospheric part of the tropical tropopause layer (TTL), i.e. above the θ=360 K potential temperature level. In agreement with satellite (HALOE) and in-situ observations (SHADOZ), CLaMS simulations show a pronounced annual cycle in O3, at and above θ=380 K, with the highest mixing ratios in the late boreal summer. Within the model, this cycle is driven by the seasonality of both upwelling and in-mixing. The latter process occurs through enhanced horizontal transport from the extratropics into the TTL that is mainly driven by the meridional, isentropic winds. The strongest in-mixing occurs during the late boreal summer from the Northern Hemisphere in the potential temperature range between 370 and 420 K. Complementarily, the strongest upwelling occurs in winter, reducing O3 to the lowest values in early spring. Both CLaMS simulations and Aura MLS O3 observations consistently show that enhanced in-mixing in summer is mainly driven by the Asian monsoon anticyclone.
Introduction
A quantitative understanding of transport across the tropical tropopause layer (TTL), which acts as a "gateway to the stratosphere", plays a key role in determining the stratospheric concentrations of water vapor and other chemical species (Fueglistaler et al., 2009a). The TTL, which roughly extends vertically between 350 and 420 K, is laterally confined by the subtropical jets, which vary seasonally both in their intensity and meridional position (e.g. Haynes and Shuckburgh, 2000; Konopka et al., 2007).
A direct consequence of transport is the composition of air within the TTL, which shows a strong seasonality. In particular, at the tropical tropopause (i.e. p≈100 hPa or θ≈380 K; p - pressure, θ - potential temperature), high water vapor (H2O), high ozone (O3) and low carbon monoxide (CO) during summer (seasons relative to the Northern Hemisphere) alternate with low H2O, low O3 and high CO during winter (Folkins et al., 2006; Schoeberl et al., 2006; Randel et al., 2007).
In our work, we focus on the seasonality of O3 in the TTL with its pronounced summer maximum (Konopka et al., 2009). We discuss the question of how the horizontal transport from the extra-tropics into the stratospheric part of the TTL across its lateral boundaries (in the following we denote this kind of transport as in-mixing) modulates the vertical upward transport in the tropics, the so-called upwelling (see Fig. 1). More precisely, we understand in-mixing as the nearly isentropic net transport of air masses from the extratropics into the TTL. We show in this paper that this transport is mainly driven by meridional advection that reveals a strong seasonality, with a much stronger transport from the Northern Summer Hemisphere, in agreement with some previous purely diagnostic studies (Chen, 1995; Haynes and Shuckburgh, 2000; Ploeger et al., 2009).
[Fig. 1. Upwelling versus in-mixing. The seasonality of O3 within the TTL is determined mainly by the semi-annual cycle of photochemical production (as the Sun crosses the equator twice per year) and the annual cycle of transport, with the strongest and weakest upwelling in boreal winter and summer, respectively. In this paper it is discussed how the annual cycle of upwelling interacts with the annual cycle of in-mixing, i.e. with the horizontal transport of midlatitude air masses into the TTL, with the strongest contribution from the Northern Summer Hemisphere.]

The seasonality of O3 is shown as an example in Fig. 2, where the observations of the Southern Hemisphere ADditional OZonesondes (SHADOZ) network are used to determine the fractional annual cycle of O3 in the tropics, ΔO3/⟨O3⟩ (where ⟨O3⟩ is the annual mean and ΔO3 = O3 − ⟨O3⟩). In the derivation of the seasonal cycle of O3 we follow the procedure described in Randel et al. (2007), where ozone and temperature observations of the seven SHADOZ stations closest to the equator (see subpanel in Fig. 2 for their geographical positions) are considered, but instead of pressure we use potential temperature as the vertical coordinate. In particular, p-related observations of each station are transformed to θ-levels using the measured temperatures and then averaged over all seven stations for each θ-level.
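To make the procedure above concrete, the following is a minimal Python sketch of the same steps: interpolating each sonde profile from pressure to potential-temperature levels and forming the station-averaged fractional annual cycle. The data structure ('profiles' as a list of dictionaries) and the dry-adiabatic exponent value are illustrative assumptions, not the processing code actually used for Fig. 2.

import numpy as np

# Sketch: map one sonde profile from pressure to theta levels and build the
# fractional annual cycle dO3/<O3> on a fixed theta grid.  'profiles' is a
# hypothetical list of dicts with keys 'p' (hPa), 'T' (K), 'o3' (ppbv) and
# 'month' (1..12); theta_grid is the target grid in K.

P0, KAPPA = 1000.0, 0.286                      # reference pressure (hPa), R/cp

def to_theta_levels(p, T, o3, theta_grid):
    theta = T * (P0 / p) ** KAPPA              # potential temperature of each level
    order = np.argsort(theta)                  # np.interp needs an increasing abscissa
    return np.interp(theta_grid, theta[order], o3[order])

def fractional_annual_cycle(profiles, theta_grid):
    monthly = np.full((12, theta_grid.size), np.nan)
    for m in range(1, 13):
        sel = [to_theta_levels(d["p"], d["T"], d["o3"], theta_grid)
               for d in profiles if d["month"] == m]
        if sel:                                # station/month mean on theta levels
            monthly[m - 1] = np.nanmean(sel, axis=0)
    annual_mean = np.nanmean(monthly, axis=0)  # <O3>(theta)
    return (monthly - annual_mean) / annual_mean   # dO3/<O3>(month, theta)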
A clear annual cycle of ΔO3/⟨O3⟩, with the highest values in late summer and early fall in the θ-range between 370 and 430 K, can be diagnosed from the SHADOZ data. The lowest values appear approximately 4-5 months earlier. The pattern of this very pronounced cycle is similar to the analysis on the p-levels (Chae and Sherwood, 2007; Randel et al., 2007), although ΔO3/⟨O3⟩ is significantly smaller on θ-levels, with a peak anomaly of about 0.5 versus 0.3 during summer when using p and θ as the vertical coordinate, respectively (Fueglistaler et al., 2009a; Konopka et al., 2009). As discussed by Konopka et al. (2009), a significant part of the variability of ΔO3/⟨O3⟩ on the p-levels is a seasonal adiabatic process (as the p-levels move relative to the θ-levels during the year because of the seasonal cycle in temperature) that can be removed by using potential temperature θ as the vertical coordinate.
The seasonality of O3 above the level of zero clear-sky radiative heating (the Q=0 level around θ≈360 K, as discussed in Gettelman et al., 2004) was recently explained as a consequence of the annual cycle in the tropical upwelling, with the strongest and weakest upwelling in winter and summer, respectively. Randel et al. (2007) showed that the
annual cycle of upwelling is approximately in phase with the well-known seasonal variation of the tropical temperatures with the highest (lowest) temperatures during the summer (winter). In particular, they assumed the tropics to be wellisolated from the extra-tropics, i.e. that in-mixing from the extratropics into the TTL is negligible. Recently, Schoeberl et al. (2008) followed the same arguments in order to explain the fluctuations in tropical trace gases observed by HALOE and Aura MLS instruments within the TTL. However, as shown by our previous study based on the HALOE and SHADOZ observations of O 3 and on a simple conceptual model of transport and photochemistry , the observed seasonality of O 3 on θ−levels, with the highest values during boreal summer, cannot be understood solely by photolytical O 3 production in slowly rising air masses which are well-isolated from the extratropics. By quantifying the photochemical production of O 3 in ascending air and by using the SHADOZ climatology to estimate the tropospheric O 3 mixing ratio, Konopka et al. (2009) determined the residual variability in observed O 3 and interpreted this residuum as being caused by in-mixing.
Evidence of in-mixing into the TTL is not new (e.g. Volk et al., 1996;Avallone and Prather, 1997;Folkins et al., 1999). Using ozone sondes and aircraft observations of N 2 O/O 3 correlations, signatures of stratospheric contributions were found within the TTL above 14 km and well below the tropopause, thus identifying this region as a transition zone separating the troposphere from the stratosphere (Tuck et al., 1997). Recently, Marcy et al. (2007) showed that more than 60% of their airborne in situ HCl observations within the TTL (up to ≈100 pptv below θ≈390 K) are of stratospheric origin and that this HCl is well-correlated with O 3 .
Using the Chemical Lagrangian Model of the Stratosphere (CLaMS) (McKenna et al., 2002; Konopka et al., 2007), we discuss here how well the annual cycle of O3 above the tropical tropopause derived from satellite (HALOE) and in situ observations (SHADOZ) can be reproduced by this model. Whereas most of the published model studies on this topic are based on conceptual 2-D models, where the meridional transport is mostly neglected (see e.g. Read et al., 2008, and the citations therein), we show within a full 3-D study that in-mixing significantly influences the composition of the lower tropical stratosphere, in particular the mixing ratios of O3 in summer. We also show that the summer Asian monsoon anticyclone drives such horizontal transport of sub- and extratropical air masses into the stratospheric part of the TTL.
Configuration of CLaMS
Multi-annual, global CLaMS simulations of the whole troposphere and stratosphere (from the ground up to θ=2500 K) follow the model set-up described by Konopka et al. (2007) and cover the time period from October 2001 to December 2005 with 100 km horizontal resolution and the highest vertical resolution of 400 m around θ=380 K. The horizontal winds are driven by the operational analysis of the European Centre for Medium-Range Weather Forecasts (ECMWF).
Above 100 hPa, the potential temperature θ is employed as the vertical coordinate of the model and the cross-isentropic velocity θ̇ is derived from a radiation calculation using the Morcrette scheme under clear-sky conditions (Morcrette, 1991). Below 100 hPa, the model smoothly transforms from θ to a hybrid pressure-potential temperature coordinate ζ (i.e. below ≈300 hPa and above 100 hPa, the ζ surfaces are parallel to p- and θ-surfaces, respectively) and, consequently, gradually includes the large-scale vertical velocity ṗ from ECMWF (Mahowald et al., 2002).
In particular, the concept of the hybrid vertical velocity discussed in Konopka et al. (2007) mixes, below p=100 hPa, the ECMWF vertical velocity ṗ with θ̇ derived from the clear-sky radiation calculation. In this approach, the radiation-related contribution of clouds and the contribution of latent heat to θ̇ could not be taken into account (because these terms are not archived by ECMWF). This approximation causes a gap in the annual mean tropical upwelling between θ=350 and 360 K, with negative vertical velocity in this part of the atmosphere. Ploeger et al. (2009) have recently shown that by using the ERA-Interim re-analysis, where, in contrast to the ECMWF operational analysis discussed here, these terms are available, the gap in tropical upwelling can indeed be closed.
However, this so-called diabatic approach, where the vertical velocity is derived from a diabatic heat budget, violates the continuity equation that guarantees mass-conserving transport. Here, in this paper, we correct the vertical velocity by using the condition that the zonally and annually averaged total mass fluxes should vanish at each θ-level (Rosenlof, 1995) (see Appendix). In this way, we obtain mass-conserving transport on an annual scale. Moreover, the correction removes the unrealistic, negative values of the vertical velocity between θ=350 and 360 K. It is worthwhile to note that the vertical velocity in this part of the atmosphere is in any case very small (if not the smallest) and is a subject of current scientific debate (Krüger et al., 2009; Ploeger et al., 2009).
To illustrate how this procedure works, we show in the left panel of Fig. 3 θ̇ = ζ̇ (dθ/dζ) averaged over the 2002-2005 period for the corrected vertical velocity (i.e. following the procedure suggested by Rosenlof (1995), see Appendix) and compare in the right panel this corrected case with the vertical velocity discussed in Konopka et al. (2007) (reference case). In the left panel, the zonal and latitudinal (±10°N) average of θ̇ (corrected case) is plotted as a function of θ and season, whereas the annual mean is shown in the right panel (black and red lines for θ̇ in K/d and mm/s, dash-dotted and solid lines for the reference and the corrected case, respectively). The velocity w in mm/s was inferred from θ̇ in K/d using w = θ̇·dz/dθ, with dz/dθ from the tropical climatology.
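As a side note, the conversion quoted above from a heating rate in K/day to a vertical velocity in mm/s is a one-line calculation. The sketch below assumes a placeholder value of dz/dθ ≈ 100 m/K near θ=450 K; this is only a rough illustrative number, not the climatological value used in the paper.

# Illustrative conversion (Python) of a cross-isentropic heating rate in K/day into a
# vertical velocity in mm/s via w = theta_dot * dz/dtheta.

def heating_to_w_mm_per_s(theta_dot_K_per_day, dz_dtheta_m_per_K):
    theta_dot_K_per_s = theta_dot_K_per_day / 86400.0   # K/day -> K/s
    w_m_per_s = theta_dot_K_per_s * dz_dtheta_m_per_K   # (K/s) * (m/K) = m/s
    return w_m_per_s * 1000.0                           # m/s -> mm/s

print(heating_to_w_mm_per_s(0.26, 100.0))               # ~0.3 mm/s with these assumptions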
Thus, the part of the TTL around θ=360 K, with a minimum in θ̇, couples the convection-dominated troposphere (semi-annual cycle of θ̇) with the radiation-dominated stratosphere (annual cycle of θ̇). Note that the annual cycle of tropical upwelling is a consequence of the hemispheric asymmetry of the land-sea distribution and of the orography, which lead to hemispheric differences in the distribution and intensity of the wave drag driving the Brewer-Dobson circulation. In particular, the lowest tropical temperatures in winter correspond to the strongest wave drag in the Northern Hemisphere. On the other hand, the semi-annual cycle of convection is a consequence of the simple fact that the Intertropical Convergence Zone (ITCZ) roughly follows the Sun, which crosses the equator twice per year.
Compared to the vertical velocity used in Konopka et al. (2007), the corrected vertical velocity enhances the mean tropical upwelling below the tropopause and removes the gap of negative velocities around θ=350 K (the upper and lower edges of this gap are denoted in the left panel of Fig. 3 by two thick, dash-dotted white lines). Furthermore, the corrected vertical velocity also slightly decreases the upwelling above θ=380 K towards values which agree fairly well with the upwelling estimated from the upward propagation of the tape-recorder signals (Mote et al., 1998; Niwano et al., 2003), which is expected to be of the order of 0.3 mm/s at θ=450 K (thick green dashed vertical line in the right panel of Fig. 3). In the following, we use the corrected vertical velocity as the default configuration and discuss the sensitivity of our results to the vertical velocity by using the reference configuration described in Konopka et al. (2007).

[Fig. 3 caption (partial): White thin lines are isobars. The thick dash-dotted lines are θ̇=0 isolines of the reference case described in Konopka et al. (2007) (i.e. between these lines the vertical velocity is negative). Right: the corresponding annual mean of θ̇ for the corrected case (solid) compared with the reference case (dash-dotted). Black and red lines denote θ̇ in K/d and mm/s, respectively. The green vertical line approximates the upwelling at θ=450 K with w≈0.3 mm/s derived from the upward propagation of the tape-recorder signal (Mote et al., 1998; Niwano et al., 2003).]
CLaMS simulations versus observations
To study the seasonality of O 3 , CLaMS simulations with and without chemistry, i.e. for O 3 and P-O 3 (passively transported O 3 ), are now considered. For the chemistry of O 3 , only photolytical ozone production and the HO x -driven O 3 loss cycle in the lower stratosphere are taken into account. The OH concentrations are prescribed from a 2-D climatology (Grooß, 1996). Passively transported ozone (P-O 3 ) allows the contribution of transport to be estimated, in particular that of in-mixing. O 3 and P-O 3 within the boundary layer (the lowest model layer) are set to zero. Because of this simplified chemistry all air parcels above θ=500 K are prescribed from the HALOE climatology (Grooß and Russell, 2005). In this way O 3 and P-O 3 are only calculated between the Earth's surface (both set to zero) and θ=500 K (both set to HALOE climatology, see also Fig. 1). In Fig. 4, the seasonality of O 3 (top) and P-O 3 (middle) at θ=380 K derived from CLaMS is compared with the 10-year climatology of the Halogen Occultation Experiment (HALOE, bottom) (Grooß and Russell, 2005). Some differences between the CLaMS results and HALOE are obvious, in particular in the Southern Hemisphere where the contribution of the ozone hole is not reproduced in CLaMS (no halogen-induced chemistry in this version of CLaMS). However, a remarkable similarity in the annual pattern of the tropical O 3 can be diagnosed for both data sets, in particular the maximum of HALOE O 3 between July and November around the equator is reproduced fairly well by CLaMS O 3 although the absolute values are overestimated.
Furthermore, the same seasonality as in CLaMS O 3 can also be diagnosed in P-O 3 , i.e. in passively transported O 3 where any effect of chemistry is excluded. Because P-O 3 is set to 0 at the Earth's surface (in the same way as O 3 ), the enhanced values of P-O 3 can only originate from the stratosphere. In addition, the tropical values of P-O 3 at θ=380 K level are of the same order as the values of O 3 indicating that transport rather than chemistry drives the seasonality of O 3 at the tropical tropopause.
To compare CLaMS results more quantitatively with observations, we plot in Fig. 5 the seasonality of O3 at θ=380 and 420 K. Here, SHADOZ observations are shown (beige) and CLaMS results are determined at the geographical locations of the seven SHADOZ stations considered (red and black for O3 and P-O3, respectively). The gray line denotes the seasonality of O3 obtained from the HALOE climatology (Grooß and Russell, 2005) averaged within the latitude range ±10°N. The vertical lines around the SHADOZ data show the total variability between the seven stations considered. The difference between the HALOE and SHADOZ climatology probably results from the fact that the HALOE observations cover the ±10°N latitude range almost uniformly, whereas the SHADOZ climatology is biased by the geographical positions of the seven stations considered, of which five are located in the Southern Hemisphere (see also Konopka et al., 2009). The difference between the O3 and P-O3 time series is a measure of the chemical O3 production and, as expected, the contribution of the photolytically formed O3 grows with increasing altitude. The shape of the seasonality derived from CLaMS shows a clear maximum in late summer that can also be diagnosed from the HALOE and SHADOZ observations. Furthermore, the fractional annual amplitude, ΔO3/⟨O3⟩, derived from CLaMS decreases above θ=380 K with increasing altitude, in agreement with the SHADOZ observations shown in Fig. 2, although the CLaMS absolute values are considerably higher than the observations and there is a phase shift between the CLaMS and SHADOZ O3 maxima that also increases with altitude (top panel of Fig. 5). A remarkable feature of the CLaMS time series is that O3 and P-O3 show exactly the same seasonality, with a pronounced maximum in summer and a weaker second maximum around February. Moreover, the percentage of P-O3 at θ=380 K compared with the total simulated O3 is larger than 50%, in particular in summer, indicating that, at least in the model, transport rather than chemical production determines not only the seasonality of the O3 cycle but also a significant fraction of the ozone budget. Thus, although CLaMS overestimates the semi-annual cycle of O3, the model results reproduce the observed seasonality.
The station-to-station variability of the SHADOZ data, in particular below 360 K, is much higher than the corresponding variability of the CLaMS time series (not shown). Here, CLaMS underestimates the variability of the convection-driven transport, at least at the locations of the SHADOZ stations considered. This is plausible because below 100 hPa CLaMS uses the large-scale ECMWF vertical winds, which do not give sufficient consideration to the effect of the localized convection that can penetrate deeply into the stratosphere (Ricaud et al., 2007). Finally, we discuss which transport process in the model (horizontal or vertical advection or the diffusive part of transport, i.e. mixing) is responsible for the seasonality of P-O3. We employ a model set-up that allows the contribution of stratospheric O3 to the O3 in the TTL to be quantified (see middle panel in Fig. 4 and the black lines in the bottom panel of Fig. 6 quantifying P-O3 at θ=380 K within the ±10°N band). In particular, we trace back the origin of the air responsible for the strong annual and weak semi-annual cycle of enhanced P-O3 at θ=380 K. For this purpose, model simulations are carried out with two artificial meridional transport barriers which are set in both hemispheres along ±15° latitude and which extend vertically between the Earth's surface and the 420 K isentrope (thick cyan and orange lines in the top panel of Fig. 6).
CLaMS transport can be understood as a consecutive succession of pure advective (24-h trajectories) and mixing steps applied for all Lagrangian air parcels. In our sensitivity studies, we set the P-O 3 value of each CLaMS air parcel to zero if the trajectory of this air parcel crosses equatorwards one of the two artificial transport barriers (see the idealized red trajectory in Fig. 6, crossing the cyan meridional barrier). In this way we set the advective, equatorward transport to zero, so only the diffusive transport across the barrier or the transport from above the upper edge of the barriers can influence P-O 3 in the tropics (note that P-O 3 at the Earth's surface is set to zero). In our first model simulation, both transport barriers are active, so only the diffusive flux across the barriers and the total (i.e. advective+diffusive) flux from above θ=420 K can contribute to the tropical ±10 • N seasonality of P-O 3 at θ=380 K. However, our simulations show that the contribution of all these fluxes is smaller than 1% and is therefore negligible. On the other side, CLaMS simulations with mixing switched off (i.e. pure trajectory transport, not shown) show the same seasonality of P-O 3 although the absolute values of P-O 3 are significantly higher.
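The barrier bookkeeping described above can be summarized in a short Python sketch; this is not the actual CLaMS implementation, and the parcel attribute names are hypothetical placeholders. After every 24-h advection step, the passive tracer of a parcel is reset to zero if the parcel crossed one of the ±15° barriers equatorward below the 420 K isentrope.

BARRIER_LAT = 15.0     # deg, latitude of the artificial barriers
BARRIER_TOP = 420.0    # K, upper edge of the barriers

def crossed_equatorward(lat_old, lat_new, theta_new, barrier_sign):
    """barrier_sign = +1 for the NH barrier, -1 for the SH barrier."""
    if theta_new >= BARRIER_TOP:
        return False                           # above the barrier: no reset
    edge = barrier_sign * BARRIER_LAT
    if barrier_sign > 0:                       # NH: poleward side -> equatorward side
        return lat_old > edge and lat_new <= edge
    return lat_old < edge and lat_new >= edge  # SH barrier, mirrored condition

def apply_barriers(parcels, active=(+1, -1)):
    for p in parcels:                          # one entry per Lagrangian air parcel
        for sign in active:
            if crossed_equatorward(p["lat_old"], p["lat"], p["theta"], sign):
                p["po3"] = 0.0                 # advective equatorward transport removed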
Thus, we conclude that advective transport across the barriers determines the seasonality of P-O3 at θ=380 K, whereas the diffusive transport is responsible for the absolute values of P-O3 rather than for its annual or semi-annual cycles. Note that in our idealized study with advective, equatorward transport set to zero, the corresponding diffusive transport is higher compared with the case when advection is not switched off (because the P-O3 gradients across the barrier are larger). Thus, even when diffusive transport is artificially enhanced, its impact on the seasonality of P-O3 within the ±10°N range is negligible.
In the second model study, we quantify the contribution of both hemispheres to the ±10 • N seasonality of P-O 3 at θ=380 K (black line in the bottom panel of Fig. 6). By switching on only one of the artificial transport barriers, the relative contributions (in %) of the northern (only the cyan barrier is active) and Southern Hemisphere (only the orange barrier is active) were determined. The results are shown in the bottom panel of Fig. 6 with the right axis as the reference for the percentage and with the cyan and orange colors marking the contribution of the northern and Southern Hemisphere, respectively. Thus, more than 90% of P-O 3 is transported from the Northern Hemisphere in summer and more than 60% of the weak maximum in February is caused by transport from the Southern Hemisphere.
In the last sensitivity study, we quantify those vertical regions of the barriers (without resolving any longitudinal dependence) through which the contribution of the advective transport to the seasonality of P-O 3 at θ=380 K is largest. Here, a 30 K "window" is defined in each artificial transport barrier (yellow segments in the top panel of Fig. 6) through which the trajectories can pass. By varying the vertical position of these windows, which can be shifted between the Earth's and the θ=420 K surface, the θ-range with the strongest contribution to the seasonality of P-O 3 at θ=380 K can be found. Our simulations show that about 80% of the P-O 3 seasonality at 380 K is due to transport across the windows extending between 370 and 400 K (that is comparable with the pure trajectory study discussed by Ploeger et al. (2009)).
Thus, although the minimum of P-O3 in the tropics at θ=380 K (see middle panel in Fig. 4 or bottom panel of Fig. 6) is due to the seasonality of upwelling, the maximum of P-O3 can only be understood as a consequence of in-mixing, with the dominant contribution from the Northern Hemisphere in summer. On the other hand, the weaker winter maximum diagnosed in the model is hardly present in the HALOE/SHADOZ observations. In CLaMS this signal is caused by enhanced equatorward transport from the Southern Hemisphere into the tropics during austral summer.
In-mixing and the Asian monsoon anticyclone
The importance of in-mixing for the composition of air around the tropical tropopause can also be deduced from the structure of the zonal winds shown in the middle panel of Fig. 4 (white lines are plotted for 20, 15 and 10 m/s, from thick to thin line, respectively). Here, a difference between the Northern and Southern Hemisphere is obvious. In both hemispheres the subtropical jet is weaker during the respective summer season than during the respective winter season (Chen, 1995) with the smallest zonal winds in the Northern Hemisphere in summer. The subtropical jet in the Southern Hemisphere forms an effective transport barrier during the whole year even if some weaker signatures of in-mixing from the Southern Hemisphere can be seen during the austral summer (see enhanced P-O 3 in February, March at 20 • S). The most pronounced hemispheric asymmetry of the climatological flow pattern in the vicinity of the tropopause originates from the Asian summer monsoon that manifests as a strong anticyclone in the upper troposphere (Dethof et al., 1999;Randel and Park, 2006;. This nearly stationary summer circulation extends well into the lower stratosphere up to about 20 km (or θ=420 K) and effectively isolates the air masses of tropospheric origin inside from much older, mainly stratospheric air outside this anticyclone (Park et al., 2008). Furthermore, this circulation significantly weakens the subtropical jet making it permeable for meridional transport (Haynes and Shuckburgh, 2000).
In Fig. 7 the horizontal distribution of O3 at θ=380 K is shown for the winter months, December to February (DJF, left), and summer months, June to August (JJA, right), derived from the CLaMS simulations (2002-2005, top) and from observations measured by the Microwave Limb Sounder (MLS) on the NASA Earth Observing System (EOS) Aura satellite (Schoeberl et al., 2006) (bottom). The MLS profiles (Version 2.2) sampled between August 2004 and December 2008 were first interpolated onto the model θ-levels (using ECMWF temperature) and then both CLaMS data and MLS observations were averaged within 250×250 km bins. The precision and the uncertainty of the MLS data amount to 40 and 50 ppbv at 100 hPa, respectively.
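A minimal Python sketch of the horizontal averaging step described above (binning scattered observations into roughly 250×250 km boxes) is given below. The factor of 111 km per degree of latitude, the simple box geometry, and the input arrays are simplifying assumptions, not the actual MLS/CLaMS gridding code.

import numpy as np

KM_PER_DEG = 111.0                                     # rough km per degree of latitude

def bin_average(lat, lon, value, bin_km=250.0):
    """Average scattered (lat, lon, value) points into ~bin_km x bin_km boxes."""
    dlat = bin_km / KM_PER_DEG                         # bin height in degrees
    sums, counts = {}, {}
    for la, lo, v in zip(lat, lon, value):
        ilat = int(np.floor(la / dlat))
        lat_c = (ilat + 0.5) * dlat                    # centre latitude of this bin row
        dlon = bin_km / (KM_PER_DEG * max(np.cos(np.deg2rad(lat_c)), 0.1))
        ilon = int(np.floor(lo / dlon))
        key = (ilat, ilon)
        sums[key] = sums.get(key, 0.0) + v
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}      # bin-mean mixing ratio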
To mimic the 4 km vertical resolution and the averaging kernel of the MLS retrieval procedure, the CLaMS data are averaged over three isentropic levels, 370, 380, and 400 K. The comparison between the CLaMS and the MLS data shows that although very similar patterns are resolved by both data sets, the O 3 mixing ratios measured by MLS are higher than the corresponding CLaMS values (with the exception of the Southern Hemisphere in DJF where CLaMS overestimates MLS mainly due to missing the halogeninduced O 3 loss forming the ozone hole and propagating downwards in austral summer). Although the absolute difference increases towards the poles, the relative difference is of the order 10% and thus in rough agreement with the relative accuracy of the MLS retrieval discussed in Livesey et al. (2008).
Nevertheless, the similarity of the patterns between MLS and CLaMS, with the lowest O 3 values over the equator in winter and with a clear signature of the Asian monsoon anticyclone in summer, confirms the seasonality of transport resolved by CLaMS. In particular, the summer distribution of O 3 shows a very pronounced wave-2 structure with the highest values of O 3 in the eastern flank of the Asian monsoon and over Central America. While the Asian monsoon anticyclone is also obvious in the wind pattern, it is not clear to what extent the second maximum over Central America is driven by the North American monsoon (black arrow in the top right panel of Fig. 7).
A weaker signature of in-mixing at θ=380 K can also be inferred due to the Australian or South American monsoon in winter (black arrows in the top left panel of Fig. 7). The latter is mainly driven by the quasi-stationary Bolivian high circulation that has some similarities with the Asian summer monsoon anticyclone (Zhou and Lau, 1998). It is noteworthy that the MLS data shows these signatures more clearly than CLaMS, with some indication of the anticyclonic flow over South Africa, although it is difficult to estimate the impact of the retrieval on the vertical resolution. In contrast to much weaker anticyclones on the Southern Hemisphere, the signature of the Asian monsoon can also be diagnosed at θ=420 K, both in MLS and CLaMS data (not shown).
In-mixing into the TTL in the Southern Hemisphere was also deduced from the ozone profiles measured with the Système d'Analyse par Observation Zénithale (SAOZ) UV-vis spectrometer flown onboard long-duration balloons in the latitude range 20±5°S. These profiles show an increased O3 variability in the TTL with a maximum at 14-15 km immediately below the tropopause (Borchi et al., 2005).
As in the MLS data, the impact of the Asian monsoon on the seasonality of O 3 should be noticeable in the SHADOZ observations, in particular in those which are located downwind (in the climatological sense) of the Asian monsoon anticyclone (see red filled circles in Fig. 7 marking the geographical location of the Kuala Lumpur and Nairobi stations). In Fig. 8, the seasonality of O 3 (green) observed at Kuala Lumpur (101.7 • E,2.73 • N) and Nairobi (36.8 • E,1.27 • S) is compared with the corresponding CLaMS time series of O 3 (red) and P-O 3 (black) for θ=380 (bottom) and 420 K (top). The model reproduces the observed seasonality fairly well, although, both the annual cycle and the decrease of the ozone maximum with distance from the Asian monsoon are more pronounced in the model than in the observations.
Remarkable is also the temporal shift of this maximum towards autumn with increasing downwind distance, e.g. O 3 maximum in Nairobi at θ=420 K is about 1-2 months later than in Kuala Lumpur (the observed shift is slightly larger than that resulting from the model). This is consistent with the picture that if a zonally asymmetric process like the Asian monsoon is responsible for the ozone seasonal cycle, this should translate into a phase shift between the ozone maxima at different SHADOZ stations. In particular the phase shift at the tropical station slightly southern of the equator (Nairobi) is expected to be later than at the station that is more directly influenced by the Asian monsoon anticyclone (Kuala Lumpur). For typical zonal winds of the order 10 m/s, this phase shift is expected to be comparable with the time that air at the equator needs to circle around the world, i.e. ≈1.5 months.
In summary, for an aqua-planet with negligible tropospheric sources of O3 one would expect summer and winter O3 distributions to be symmetric to each other. However, the real atmosphere shows an annual cycle in the tropical upwelling that is approximately anticorrelated with the well-known seasonal variation of the tropical temperatures, i.e. the slowest and fastest upwelling occurs during summer and winter, respectively. Our study shows that, as a complement to the annual cycle of upwelling, the seasonality of in-mixing, mainly in summer from the Northern Hemisphere and mainly driven by the Asian monsoon anticyclone, significantly modulates tropical O3 between 380 and 420 K potential temperature.
Discussion
Based on a simple conceptual model of transport and photochemistry, Konopka et al. (2009) argued that the observed seasonality of O3, with the highest values during boreal summer, cannot be understood merely by photolytical O3 production in slowly rising air masses which are well-isolated from the extratropics. Utilizing CLaMS, we have discussed the contribution of horizontal transport from the extratropics into the TTL, referred to here as in-mixing, to the seasonality of O3 in the TTL. In particular, the use of passively transported O3, P-O3, allows the impact of horizontal transport to be quantified more precisely. As we showed in our sensitivity studies, where the advective part of transport was switched on and off, the seasonality of in-mixing (which is strongest from the Northern Hemisphere in late summer) together with the seasonality of upwelling (which is strongest in April) determines the seasonality of P-O3 at θ=380 K. Furthermore, based on our sensitivity studies, an impact of advective and/or diffusive transport in the tropics from above θ=420 K on air masses at θ=380 K was also excluded. This is because, at least in the model, upwelling with persistent positive (upward) values of θ̇ determines transport above the Q=0 level (≈360 K). We conclude that the seasonality of in-mixing is mainly driven by the meridional, isentropic winds, i.e. by advective rather than by diffusive transport. The diffusive transport determines the absolute amount of in-mixed air rather than the annual or semi-annual cycle of in-mixing. Thus, in-mixing itself can be understood as irreversible transport in the sense that air masses which have crossed the lateral boundary of the TTL equatorward do not move back but ascend into the stratosphere with the large-scale Brewer-Dobson circulation.
A remarkable result of the CLaMS simulations is the percentage of P-O 3 compared with the total O 3 (Fig. 9) that can be used as a measure of in-mixing. More precisely, the fraction P-O 3 /(O 3 +O 3trop ) was derived from the model with O 3trop estimating the maximum contribution of the troposphere to the seasonality of O 3 in the TTL (in CLaMS the lower boundary of O 3 is set to 0). O 3trop is assumed to be 40 ppbv. With this assumption O 3 +O 3trop roughly reproduces the SHADOZ climatology at θ=360 K (not shown).
Thus, Fig. 9 clearly shows a minimum of in-mixed O 3 in spring and a strong enhancement (up to 60%) of in-mixing during late summer and early fall within the θ-range between 370 and 420 K. This seasonality of transport was also suggested by Yang et al. (2008) who found that during boreal summer the upward mass flux in the tropics has a near-zero minimum around 70 hPa and, consequently, the mass flux below this level is decoupled from that above.
Note that CLaMS simulations with the non-mass-conserving vertical velocity reduce the amount of in-mixing shown in Fig. 9, although its pattern with a pronounced maximum in summer remains unchanged. Because the corrected vertical velocity not only removes the gap in upwelling discussed in Konopka et al. (2007) but also guarantees annually averaged mass conservation, we consider the numbers shown in Fig. 9 to be more reliable. Also the mixing itself, which can be switched off in the CLaMS simulations, has a strong impact on the numbers (but not on the seasonality) shown in Fig. 9. The origin of the enhanced in-mixing can be traced back to a strongly disturbed zonal flow in the Northern Hemisphere in summer, mainly by the persistent Asian monsoon anticyclone (wave-2 pattern, see Fig. 7), in qualitative agreement with idealized, isentropic studies of transport (Chen, 1995; Plumb, 1996; Haynes and Shuckburgh, 2000; Ploeger et al., 2009). On the other hand, persistent anticyclones during the Southern Hemisphere summer (Bolivian high or Australian monsoon), although much weaker than the Asian monsoon anticyclone, also contribute to in-mixing into the TTL (Borchi et al., 2005).
Finally, in addition to the described scenarios of the vertical velocities, sensitivity studies were carried out with CLaMS driven by the ERA-Interim meteorology, with the vertical wind derived from the temperature tendencies due to radiation (clear sky and clouds) and latent heat (Fueglistaler et al., 2009b; Ploeger et al., 2009). All these studies show consistently that although the absolute values of in-mixing (i.e. of P-O3), of the fractional annual amplitude ΔO3/⟨O3⟩, and of its phase depend on the vertical wind and on the intensity of mixing considered in the model, the presented seasonality of O3 is a very robust feature of all our simulations.
Conclusions
Multi-annual simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) suggest that in-mixing from the northern extra-tropics in summer, in particular in connection with the summer Asian monsoon, significantly influences the composition of air within the TTL between 380 and 420 K potential temperature. Our study also shows that the picture of a TTL "well-isolated" from the extratropics should be revised, in particular for altitudes between 370 and 440 K, in favor of a picture where both the annual cycle of upwelling and that of horizontal transport determine the seasonality of O3 and of other relevant species in the TTL.
Annually averaged mass conservation
Although the diabatic approach for the vertical velocities allows us to define vertical transport from the diabatic heat budget, the greatest disadvantage of this procedure is the fact that the continuity equation is not fulfilled. A much weaker requirement than the rigorous validity of the continuity equation is the condition that the annually averaged total mass fluxes vanish at each θ-level (Rosenlof, 1995), i.e.

∫ σ θ̇ cos λ dλ = 0,    (A1)

where λ is the latitude and the integral extends from pole to pole. The mass density σ on an isentropic surface is given as σ = −(1/g) ∂p/∂θ (Andrews et al., 1987) and θ̇ denotes the isentropic vertical velocity. Here, the zonally and annually averaged values of σ and θ̇ are used (although the averaged values of the product σθ̇ would be more appropriate, the resulting difference is negligible). We modify the general procedure formulated by Rosenlof (1995) and correct θ̇ by θ̇ → θ̇ + c f(λ), with f(λ) = cos(kλ/λ0), k = π/2 for |λ| < λ0 and f = 0 elsewhere (here λ0 was set to 50°N). Inserting the corrected vertical velocity into Eq. (A1) yields a condition for the constant c, i.e. c = −R/H, with R = ∫ σ θ̇ cos λ dλ and H = ∫ σ f(λ) cos λ dλ. The weighting function f can be eliminated by setting f = 1 and λ0 = 90°. In general c is different for each θ surface. In this paper, we use the constant and latitude-weighted correction above and below θ=360 K, respectively. | 2018-05-07T15:26:44.719Z | 2009-09-01T00:00:00.000 | {
"year": 2009,
"sha1": "30af98ef7339dfc2de3c804f7b136ba9fee6bda5",
"oa_license": "CCBY",
"oa_url": "https://www.atmos-chem-phys.net/10/121/2010/acp-10-121-2010.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ae47e9380aa09480e66e84ce80f11429a923b778",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
202731797 | pes2o/s2orc | v3-fos-license | Sulfated polysaccharide of Sepiella maindroni ink targets Akt and overcomes resistance to the FGFR inhibitor AZD4547 in bladder cancer
Rapid appearance of resistance to fibroblast growth factor receptor (FGFR) inhibitors hampers targeted regimens in bladder cancer. In the present study, we evaluated whether SIP-SII, a sulfated derivative of the polysaccharide in Sepiella maindroni (spineless cuttlefish) ink used in traditional Chinese medicine, could attenuate resistance to FGFR inhibition in bladder cancer cells. In vitro assays indicated that SIP-SII reduced cell viability and migration, restricted cell cycle progression, and increased apoptosis in parallel with decreased AKT phosphorylation and downregulation of CDK4, MMP2, and Bcl-2 in RT112 and JMSU1 cells. Synergistic effects on cell viability were observed when SIP-SII was combined with the small-molecule FGFR inhibitor AZD4547. Specific Akt targeting by SIP-SII was suggested by the fact that neither Akt knockdown nor the selective PI3K inhibitor BKM120 enhanced the inhibitory effects of SIP-SII, while expression of a constitutively active Akt mutant rescued SIP-SII effects. Furthermore, subcutaneous transplantation of RT112 xenografts confirmed the superiority and tolerability of combined SIP-SII and AZD4547 administration over monotherapy regimens. The present study thus provides pre-clinical evidence of the ability of SIP-SII to improve FGFR-targeted therapies for bladder cancer by inhibiting Akt.
INTRODUCTION
Bladder cancer is one of the most common cancers worldwide with an estimated 549,393 new cases and 199,922 deaths reported yearly [1]. Men are 4 times more likely than women to be diagnosed with the disease. The general 5-year survival rate for patients with bladder cancer is approximately 77%; however, this rate is reduced to 35% after local dissemination and/or regional lymph node metastasis, and to 5% when distant metastasis develops. In recent years multiple signaling pathways involved in bladder cancer progression have been identified as druggable targets. These include the PI3K/Akt/mTOR pathway, the RTK/RAS/MAPK pathway, and the JAK/STAT pathway [2]. The first targeted therapy for metastatic bladder cancer, the pan-FGFR inhibitor erdafitinib, has recently received FDA approval. Other drugs under current investigation to treat recurrent or refractory bladder cancer include the pan-FGFR inhibitors BGJ398 [3] and AZD4547 [4], aimed at tumors with FGFR3 mutation or fusion, and the mTOR inhibitor everolimus for refractory bladder carcinoma [5]. A common issue with targeted therapies is intrinsic or acquired resistance of cancer cells. AKT hyperactivation, MET overexpression, BRAF fusion, and activation of canonical MAPK-ERK signaling have been implicated in resistance to FGFR inhibitors [6][7][8][9]. Therefore, discovery and evaluation of potential compounds that can reverse FGFR inhibitor resistance is critical for improving targeted regimens.
Cephalopod ink has long been used as a traditional medicine in both Eastern (China) and ancient Western cultures [10]. SIP-SII is a sulfated polysaccharide in ink from Sepiella maindroni (spineless cuttlefish) which exhibits wide therapeutic potential based on its anti-tumor, anti-inflammatory, and immunomodulatory activities [11,12]. Recently, SIP-SII was shown to decrease pulmonary metastasis in a mouse melanoma model by inhibiting ICAM-1-mediated cell adhesion and bFGF-induced angiogenesis [13]. On the other hand, Jiang et al. reported that SIP-SII suppressed cell migration and invasion by targeting the EGFR/ PI3K/MMP2 and EGFR/MEK/MMP2 axes in human epidermoid carcinoma KB cells [14]. The same investigators showed that SIP-SII binds to plasma membrane EGFR in ovarian cancer SKOV3 cells, hampering its activation and leading to downregulation of EGFR-mediated p38/MAPK and PI3K/Akt/mTOR cascades [15]. Although a number of studies provided a rationale for the use of SIP-SII as an anticancer agent and a potential therapy to overcome FGFR inhibitor resistance, its pharmacological actions on bladder cancer remain unexplored. In the present study, therefore, we used in vitro and in vivo experiments to explore the potential of SIP-SII to overcome resistance to the FGFR inhibitor AZD4547 in bladder cancer cells carrying active FGFR and hyperactive AKT mutations.
SIP-SII impairs proliferation and migration and attenuates Akt signaling in bladder cancer cells
To assess the effects of SIP-SII on bladder cancer cell viability, dose-response experiments were performed on RT112 and JMSU1 cells using the MTT assay. Viability was reduced by approximately 50% when RT112 and JMSU1 cells were treated with 6.73 µM and 7.39 µM SIP-SII, respectively (Figure 1A). Hence, concentrations of 2.5 µM and 5 µM, respectively, were selected to verify SIP-SII's inhibitory effects on cell growth. For the FGFR inhibitor AZD4547, IC50 values of 1.25 µM and 1.28 µM were estimated for RT112 and JMSU1 cells, respectively (Figure 1A). Time-course viability experiments were further conducted on RT112 and JMSU1 cells exposed to different concentrations of SIP-SII for 12, 24, 36, or 48 h. As shown in Figure 1B, SIP-SII suppressed cell growth in a dose- and time-dependent manner.
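For illustration, IC50 values such as those quoted above are typically obtained by fitting a sigmoidal dose-response curve to the MTT viability data. The following Python sketch uses a four-parameter logistic fit with made-up viability numbers; it is not the measured data or the software actually used in this study.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    # four-parameter logistic (Hill) dose-response curve
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0])      # uM SIP-SII (hypothetical)
viab = np.array([0.97, 0.90, 0.78, 0.58, 0.40, 0.25])  # fraction of control (hypothetical)

popt, _ = curve_fit(four_pl, dose, viab, p0=[0.1, 1.0, 5.0, 1.0], maxfev=5000)
print(f"estimated IC50 = {popt[2]:.2f} uM")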
The transwell migration assay was next performed to determine the effects of SIP-SII on cell migration. After incubation with SIP-SII for 24 h, the number of migrating RT112 cells decreased significantly compared to the control group ( Figure 1C). Similar results were observed in JMSU1 cells ( Figure 1D). Western blotting analysis was further used to detect the expression of total Akt, phospho-Akt, CDK4, Bcl-2, and MMP2. As shown in Figure 1E, after exposure to SIP-SII, the levels of phospho-Akt, CDK4, Bcl-2, and MMP2 declined in a dose-dependent manner, while total Akt showed little variation. Densitometric gel analyses confirmed these results (Supplementary Figure 1A). These data showed that SIP-SII inhibits bladder cancer cell proliferation and migration, while decreasing Akt activation and downstream signaling.
SIP-SII hampers proliferation and migration of bladder cancer cells in an Akt-dependent manner
Previous studies reported that SIP-SII exhibited multiple anti-tumor effects such as suppression of EGFR-, FGF-, and intercellular adhesion molecule (ICAM)-mediated pathways in ovarian cancer cells and epidermoid carcinoma cells [13, 15]. To verify that Akt inhibition mediates the inhibitory effects of SIP-SII on bladder cancer cells, Akt-targeted siRNAs (si-Akt) were introduced into RT112 and JMSU1 cells. The silencing efficiency of si-Akt was verified by western blotting (data not shown). Cells were treated with control siRNA, 5 µM SIP-SII, or the combination of SIP-SII and si-Akt, and MTT and transwell assays were performed 24 h later. SIP-SII alone and in combination with si-Akt repressed cell viability relative to the control group. However, dual treatment did not amplify the inhibition induced by SIP-SII alone, either on cell growth (Figure 2A) or on cell migration (Figure 2B and 2C). Western blotting was further performed to determine the activation of Akt and the expression of effector molecules. As shown in Figure 2D and 2E, the decrease in phospho-Akt expression was identical in SIP-SII-treated cells, either when applied alone or in combination with si-Akt, whereas total Akt expression fell significantly only in the latter. Three independent experiments (Supplementary Figure 1B) showed that CDK4 and MMP2 decreased by 16%, Bcl-2 decreased by 25%, phospho-AKT was reduced by 70%, and total AKT expression declined by 75% in the combination group. These results indicated that SIP-SII suppressed cell growth and migration by inhibiting Akt activation.
Akt overexpression reverses SIP-SII effects on bladder cancer cells
To further verify that the inhibitory effects of SIP-SII on bladder cancer cell growth and migration relied on inactivation of Akt, a constitutively active Akt mutant, Akt T308D S473D (Akt DD), was introduced into RT112 and JMSU1 cells. Transfected cells were then treated with 5 µM SIP-SII for 24 h, and the MTT assay was performed to assess cell viability. As shown in Figure 3A, Akt DD completely reversed SIP-SII-induced inhibition of cell viability. Moreover, Akt DD expression also attenuated the inhibition of cell migration elicited by SIP-SII (Figure 3B and 3C). Akt DD transfection efficiency was confirmed by western blotting (Figure 3D and 3E). After transfection, the expression of CDK4, Bcl-2, and MMP2 was also assessed (Supplementary Figure 1C). These data strongly suggest that the inhibitory actions of SIP-SII on bladder cancer cells are dependent on Akt inhibition.
Dual treatment with SIP-SII and AZD4547 enhances the anticancer effects elicited by single inhibitors
Resistance to AZD4547 in RT112 and JMSU1 cells, which carry respectively an FGFR3-TACC3 translocation and FGFR1 amplification, is mediated by constitutively active PI3K/Akt signaling [7, 16]. To investigate the potential of SIP-SII in overcoming AZD4547 resistance, RT112 and JMSU1 cells were exposed to these inhibitors, either singly or in combination (Table 1), and the MTT assay was performed to determine compounded drug effects through estimation of the combination index (Table 2). The IC50 of combined drug exposure was lower than that measured for AZD4547 alone both in RT112 cells (0.43 µM vs 1.25 µM) and in JMSU1 cells (0.47 µM vs 1.28 µM). In turn, CI values were 0.7037 for RT112 and 0.7407 for JMSU1 (Table 2), indicating that the effects of the drug combination were synergistic. These results are exemplified in Figure 4A, showing enhanced inhibition of cell viability by combined treatment, compared with single exposure to SIP-SII (5 µM) or AZD4547 (100 nM).
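For reference, a combination index in the Chou-Talalay sense is computed as in the short Python sketch below; the dose values passed in the example are placeholders chosen only to illustrate a CI below 1, not the exact doses of Tables 1 and 2.

# CI = d1/Dx1 + d2/Dx2, where d1 and d2 are the doses of each drug in the combination
# producing a given effect, and Dx1 and Dx2 are the single-agent doses producing the
# same effect; CI < 1 indicates synergy.

def combination_index(d1, Dx1, d2, Dx2):
    return d1 / Dx1 + d2 / Dx2

ci = combination_index(d1=0.43, Dx1=1.25, d2=2.3, Dx2=6.73)   # placeholder doses (uM)
print(f"CI = {ci:.2f} ({'synergistic' if ci < 1 else 'not synergistic'})")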
Transwell assays were also performed to assess the effects of the combined inhibitors on cell migration. As shown in Figure 4B and 4C, migration declined by 70% after combination treatment, compared to the 50% reduction elicited by SIP-SII and AZD4547 separately. Furthermore, as shown in Figure 4D, cell cycle analysis indicated that the proportion of RT112 cells in G1 was significantly lower after combined drug treatment than after AZD4547 treatment alone (0.75% vs. 0.84%, respectively; P < 0.01). Similar results were observed in JMSU1 cells (0.65% vs. 0.82%, respectively; P < 0.01, Figure 4E). Additionally, the JC-1 assay was performed to evaluate apoptotic rates after exposure to single or dual inhibitors. As shown in Figure 4F, the percentage of apoptotic RT112 cells increased from 0.69% in the AZD4547 group to 1.68% in the combination group (P < 0.01). Similar results were obtained in JMSU1 cells (0.93% vs. 1.84%, respectively; P < 0.01, Figure 4G). These results indicate that combination therapy decreased bladder cancer cell migration and promoted cell cycle arrest and apoptosis to a larger extent than AZD4547 monotherapy.
SIP-SII specifically targets Akt
The effects of combined SIP-SII and AZD4547 treatment on Akt activation and expression of downstream signaling molecules were explored through western blot. As shown in Figure 5A, the expression of phospho-Akt, CDK4, Bcl-2, and MMP2 in RT112 cells was downregulated to a greater extent by SIP-SII and AZD4547 combined, compared to AZD4547 alone. Similar results were obtained in JMSU1 cells. These results were validated by gel analysis (Supplementary Figure 1D), indicating that combination therapy might overcome AZD4547 resistance by blocking Akt-mediated pathways. To further explore SIP-SII specificity, the selective PI3K inhibitor BKM120 was used to inhibit AKT activation. RT112 cells were treated for 24 h with DMSO (vehicle), 5 µM SIP-SII, 100 nM AZD4547, or SIP-SII combined with both AZD4547 and 0.5 µM BKM120. Western blot results showed that phospho-AKT, CDK4, Bcl-2, and MMP2 decreased significantly, while total AKT showed little change in cells treated with the three inhibitors (Figure 5B). Addition of BKM120 did not further enhance the suppression of AKT phosphorylation elicited by SIP-SII, indicating specific inhibition of Akt signaling by SIP-SII. These findings were corroborated by gel analysis (Supplementary Figure 1E). In addition, the effects of the combined inhibitors on cell growth were evaluated through MTT assays. Figure 5C shows that the combination of BKM120 and AZD4547 enhanced growth inhibition compared to AZD4547 alone. In contrast, dual treatment with BKM120 and SIP-SII neither enhanced nor reduced the suppression induced by SIP-SII. On the other hand, migration assay results indicated that dual treatment with BKM120 and AZD4547 promoted a stronger inhibition than that elicited by AZD4547 alone, whereas the inhibition induced by combined BKM120 and SIP-SII exposure was similar to that produced by SIP-SII alone. These results further imply that SIP-SII inhibits bladder cancer cell proliferation and migration by targeting Akt.
Combination of SIP-SII and AZD4547 enhances growth inhibition of RT112 xenografts
To assess whether the SIP-SII and AZD4547 combinatorial regimen suppresses tumor growth in vivo, subcutaneous RT112 xenografts were generated in C57BL/6 mice. When tumors reached a mean volume of ~100 mm 3 , mice were divided into four groups and treated respectively with saline, SIP-SII, AZD4547, or the combination of SIP-SII and AZD4547. As shown in Figure 6A, tumor growth was initially reduced by each inhibitor alone, but the effects diminished gradually over time. In contrast, sustained tumor regression was observed during combined SIP-SII and AZD4547 administration. In parallel with decreased tumor volumes, tumor weights in the drug combination group were significantly reduced compared to the control group (0.47 g vs. 1.7 g; P < 0.01) while those in the monotherapy groups showed a more modest reduction (0.84 g and 0.86 g for SIP-SII and AZD4547, respectively; P < 0.01; Figure 6B). Body weights were not significantly altered by any treatment, suggesting that the interventions were well tolerated ( Figure 6C).
After tumor excision, the expression of phospho-Akt was verified by immunohistochemistry. As shown in Figure 6D, phosphorylation of Akt decreased remarkably in the SIP-SII group and in the combined treatment group. Furthermore, western blots demonstrated a stronger decrease in the expression of CDK4, Bcl-2, and MMP2 in the combination group, compared to each single treatment ( Figure 6E and Supplementary Figure 1F). Thus, results of in vivo experiments suggested that combination of SIP-SII and AZD4547 induced gradual and sustained tumor regression by concomitant inhibition of FGFR and Akt.
DISCUSSION
Akt hyperactivation is a common mechanism underlying resistance to FGFR inhibitors in cancers of the bladder with FGFR hyperactivation or overexpression. Our study suggests a potential novel strategy to overcome such resistance by showing that SIP-SII, a chemically sulfated polysaccharide isolated from the ink of the cuttlefish Sepiella maindroni, inhibits Akt activation and sensitizes bladder cancer cells to the anti-tumor actions of the FGFR inhibitor AZD4547.
Bladder carcinomas typically carry a large number of DNA mutations, surpassed only by lung cancers and melanoma [17]. Among the DNA alterations commonly found in tumors of the bladder, PIK3CA, FGFR3, and ERBB2/3 mutations constitute promising targets for targeted therapies [2, 17]. Among the most relevant signaling pathways investigated in animal models of bladder cancer are the EGFR-RAS-MAPK [18], FGFR3-RAS-MAPK [19], VEGF-RAS-MAPK [20], PI3K-Akt-mTOR [21,22], AR-PI3K/Akt [23], and STAT3-Survivin [24] pathways. Among those, receptor tyrosine kinases (EGFR, FGFR, and VEGFR) signaling through Ras-MAPK or Ras-PI3K-Akt axes are the most frequently hyperactive pathways implicated in bladder cancer progression [25][26][27]. Many compounds have been developed and are being tested in pre-clinical studies and clinical trials for bladder cancer. These include the FGFR inhibitors erdafitinib (approval), BGJ398 (Phase 1), and AZD4547 (Phase 1), the PI3K-beta inhibitor GSK2636771 (Phase 1), and the EGFR inhibitors erlotinib (Phase 2) and afatinib (Phase 2). However, common challenges to trial success include limited response rate, lack of treatment effect, and rapid occurrence of drug resistance, which reflect the large genetic heterogeneity of bladder cancer. About two-thirds of all non-muscle invasive bladder cancers carry activating FGFR3 mutations [28] while more than 40% of muscle-invasive bladder cancers overexpress FGFR3 [29]. The frequency of activating FGFR3 mutations and gene-fusion events (e.g. FGFR3-TACC3, FGFR3-BAIAP2L1, and FGFR3-JAKMIP1) provides a solid rationale for the success of erdafitinib, the first FGFR inhibitor approved by the FDA. However, rapid onset of erdafitinib resistance restrained its therapeutic success. Compared to more rare mutations in the RAS family, activating mutations of the PI3K-Akt axis appear in approximately 20% of bladder carcinomas, conferring resistance to FGFR inhibitors. Besides, mutational hyperactivation of FGFR2, overexpression of MET, and the JHDM1D-BRAF fusion [8,9] have also been found to contribute to intrinsic and acquired resistance to FGFR inhibitors.
In this study we show that SIP-SII inhibits growth and migration of bladder cancer cells, while also potentiating the inhibitory effects of the small molecule pan-FGFR inhibitor AZD4547. SIP-SII exhibits broad anti-tumor effects. For instance, it was shown to repress lung metastasis of B16F10 melanoma xenografts in mice via inhibition of MMP2 [13], and to inhibit EGFR-Ras-MEK-MMP2 and EGFR-PI3K-MMP2 pathways in an EGF-dependent manner in KB cells [14]. Nevertheless, the effects of SIP-SII on bladder cancer cells with active PI3K-Akt signaling had not been explored. We demonstrated that SIP-SII exposure reduced the expression of MMP2, CDK4, and Bcl-2, hinting at potential mechanisms underlying impaired cell migration, promotion of cell cycle arrest, and increased apoptosis following Akt inactivation. The fact that neither siRNA-mediated Akt silencing nor co-treatment with the PI3K inhibitor BKM120 further increased the inhibitory effects of SIP-SII, while forced expression of a constitutively active Akt mutant reversed such effects, indicates that SIP-SII effectively inactivates Akt at low micromolar doses in bladder cancer cells.
Our xenograft model expanded the findings obtained in vitro, demonstrating that the combination of SIP-SII and AZD4547 overcame AZD4547 resistance and significantly reduced tumor growth compared to each monotherapy regimen, without overt adverse effects. Therefore, in future studies the selectivity, pharmacodynamics, and maximum tolerated dose of SIP-SII should be investigated in detail. In addition, experiments in bladder cancer cells harboring FGFR3 fusions and activating PI3K/Akt mutations should also be conducted to explore the efficacy of SIP-SII in combination with FGFR inhibitors under more restrictive mutational landscapes.
Transwell migration assay
The transwell migration assay was performed as reported before, using membranes with 8 μm pore size [33]. In brief, 3×10⁴ bladder cancer cells suspended in 50 μL of serum-free DMEM were seeded in the upper chambers and treated with test compounds. The lower wells were filled with 600 μL of DMEM containing 10% FBS. After a 24 h incubation, cells on the upper surface of the membrane were removed with cotton swabs and cells that migrated to the lower surface were fixed, stained, and counted under an inverted microscope. Five random fields in each group were recorded.
Western blotting
Attached cells were washed twice with ice-cold PBS and then lysed with lysis buffer for 30 min on ice. Total protein content (for cell and tissue samples) was determined using a BCA protein assay kit (ab102536, Abcam, Cambridge, UK). Equal amounts of proteins were resolved by 10% SDS-PAGE and transferred onto polyvinylidene fluoride (PVDF) membranes. The membranes were blocked with 5% skim milk in TBS-T for 1 h and incubated with specific primary antibodies (1:1000) at 4 °C with gentle shaking overnight. Membranes were washed three times with TBS-T, reacted with secondary antibodies conjugated to HRP (1:2000; ab205719 and ab205718, Abcam, Cambridge, UK), and antibody-protein complexes were detected by enhanced chemiluminescence (Pierce; Thermo Fisher Scientific, Inc.). Data obtained from three independent experiments were analyzed with ImageJ software (v 1.52p).
Cell cycle analysis
Cell cycle distribution analysis was performed using the Propidium Iodide (PI) Flow Cytometry Kit (ab139418, Abcam, Cambridge, UK) following the manufacturer's protocol. Untreated cells were used as control. Cells were prepared at a density of 1×10⁴ per well in 6-well plates and exposed to test reagents for 24 h at 37 °C. After harvesting and preparation of single-cell suspensions, cells were fixed, stained with PI, and analyzed on a FACSCalibur cytometer (BD Biosciences, San Jose, CA, US). Cell cycle distribution analysis was performed on three separate experiments using BD CellQuest™ Pro Analysis software (BD Biosciences, San Jose, CA, US).
Apoptosis analysis
For apoptosis analysis, cells (2×10⁵) were seeded in 6-well plates and allowed to attach overnight. Experimental treatments were applied, and cells were then harvested and stained with the mitochondrial membrane potential reporter JC-1 (ab141387, Abcam, Cambridge, UK). JC-1 fluorescence was assessed by flow cytometry in three individual experiments.
Xenograft model
Animal experiments were approved by the Institutional Animal Care and Use Committee of Shengjing Hospital of China Medical University (Shenyang, China). Nude mice (male, 18-22 g) were obtained from the Experimental Animal Centre of Shengjing Hospital of China Medical University. RT112 cells (5×10⁶ in 0.2 mL of saline) were subcutaneously injected into the right axilla of mice. When tumor volumes reached ~100 mm³, mice were randomly divided into four groups (n = 3 per group): control (saline, vehicle), SIP-SII (30 mg/kg/d), AZD4547 (30 mg/kg/d), and combination of SIP-SII and AZD4547 (SIP + AZD). All drugs were administered by intraperitoneal injection (0.2 mL) every day for 28 days. Tumor volumes and body weights were documented every other day. Mice were euthanized by cervical dislocation under isoflurane anaesthesia 24 h after the last drug injection.
Statistical analysis
Data were analyzed using GraphPad Prism 7.00 (GraphPad Software, San Diego, CA, USA) and are expressed as the mean ± standard deviation (SD). Multiple comparisons were performed using two-way analysis of variance (ANOVA) and Tukey's multiple comparisons test. Multiple comparisons of tumor weights were performed by one-way ANOVA and Tukey's multiple comparisons test. P < 0.05 was considered statistically significant. | 2019-09-24T13:04:32.668Z | 2019-09-23T00:00:00.000 | {
"year": 2019,
"sha1": "5f3ede6a1b6c741d538e512845af9a58c51e5082",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18632/aging.102286",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "756308943c7e1a38da0bcb0fe48267174ff1fbcb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
14519876 | pes2o/s2orc | v3-fos-license | Computer-Assisted Visualization of Central Lung Tumours Based on 3-Dimensional Reconstruction
Each year, lung cancer is diagnosed in approximately 1.4 million people world-wide. There are approx. 205,000 new cases in the USA and approx. 345,000 in Europe [Cancer Atlas of the Federal Republic of Germany]. US researchers at the Centers for Disease Control and Prevention [CDC] expect the number of deaths to continue to rise as an immediate consequence of smoking. During the 20th Century, tobacco consumption caused about 100 million deaths, and this number is estimated at about one billion deaths world—wide for the 21st Century [CDC]. Axial 2-D computed tomography (CT) of the thorax is the accepted and established standard method used in pre-operative morphological imaging diagnosis in patients with central benign or malignant lung tumours. Tumour size, infiltration of central structures or segmental relatedness are the decisive parameters that the surgeon can derive in variable quality from 2-dimensional images, in order to assess the technical operability and the extent of the resection. However, the availability and quality of CTs vary greatly from one hospital to the next. The surgeon is thus often given print-outs on paper of a CT with 5 mm slices. Comprehensive coverage across the board with multi-slice detector CT (MSDCT) with 1mm slices and the possibility of interactive observation by the operator is, however, not yet available. Improved imaging and image-processing is crucial to the further optimization of pre-operative risk assessment, especially with reference to population development in industrialized nations. Multimorbid patients, patients with severe obstructive or restrictive diseases of the respiratory tract, as well as patients of advanced age, often limit the – actually required – tactical oncological extent of resection due to a post-operative lung function that is too low. Demands must therefore be made for a best-possible pre-operative localization and functional diagnostics, also with reference to the constant rise in patient age for the corresponding co-morbidities.
Introduction
Each year, lung cancer is diagnosed in approximately 1.4 million people world-wide. There are approx. 205,000 new cases in the USA and approx. 345,000 in Europe [Cancer Atlas of the Federal Republic of Germany]. US researchers at the Centers for Disease Control and Prevention [CDC] expect the number of deaths to continue to rise as an immediate consequence of smoking. During the 20th Century, tobacco consumption caused about 100 million deaths, and this number is estimated at about one billion deaths world-wide for the 21st Century [CDC]. Axial 2-D computed tomography (CT) of the thorax is the accepted and established standard method used in pre-operative morphological imaging diagnosis in patients with central benign or malignant lung tumours. Tumour size, infiltration of central structures or segmental relatedness are the decisive parameters that the surgeon can derive in variable quality from 2-dimensional images, in order to assess the technical operability and the extent of the resection. However, the availability and quality of CTs vary greatly from one hospital to the next. The surgeon is thus often given print-outs on paper of a CT with 5 mm slices. Comprehensive coverage across the board with multi-slice detector CT (MSDCT) with 1 mm slices and the possibility of interactive observation by the operator is, however, not yet available. Improved imaging and image-processing is crucial to the further optimization of pre-operative risk assessment, especially with reference to population development in industrialized nations. Multimorbid patients, patients with severe obstructive or restrictive diseases of the respiratory tract, as well as patients of advanced age, often limit the - actually required - tactical oncological extent of resection due to a post-operative lung function that is too low. Demands must therefore be made for a best-possible pre-operative localization and functional diagnostics, also with reference to the constant rise in patient age for the corresponding co-morbidities.
3-D visualization in modern clinical practice
The pre-operative assessment of a malignant lung tumour, its anatomical and topographical position, the final extent of the tumour and the possibility of infiltration of central
Computer assistance for thoracic surgery planning
Standard risk assessment in thoracic surgery is based on conventional imaging techniques like x-ray and CT. In exceptional cases, additional examinations are carried out using, e.g., MRT, PET, SPECT or US. The oncological requirements (curative therapeutic approach), on the one hand, and the patient's technical and functional prerequisites, on the other, must be balanced individually for each patient. While the computer-assisted approach to the planning of thoracic surgical interventions is the aim of the current research, other questions pertaining to CT-assisted pulmonary diagnostics have already been answered. Rigorous quantification and visualization methods already exist for screening and early recognition of bronchial carcinomas, planning of biopsies, radiological monitoring of chemotherapy for lung metastases, quantification of parenchyma in lung function disorders, or embolism diagnostics. The researchers that specialize in medical applications at Fraunhofer MEVIS - Institute for Medical Image Computing, Bremen, develop prototypic software applications for the reconstruction, quantification and visualization of thoracic CT data, with the aim of supporting the planning of thoracic surgery in cases where complex resections are required in oncological lung patients [Dicken et al. 2005, Stoecker et al. 2009]. Methods and algorithms have been developed in close collaboration with about 20 German hospitals for lung surgery, to facilitate the delimitation of anatomical structures and pathologies in high-resolution CT data on the lungs. Early results from a 3-D reconstruction dating from 2005, obtained at the University Hospital Luebeck, showed that reformatting and visualization were still highly simplified and that user modules for interactive use had not yet been developed. The technical feasibility and a 3-dimensional reconstruction were initially evaluated and validated on 9 patients in a first exploratory phase between December 2005 and February 2006. These patients were characterized by as great a diversity as possible with reference to age, tumour genesis, primary tumour/relapse, tumour localization or infiltration of extra-thoracic / mediastinal structures. The original CT and the reconstructed data sets for a 54-year-old patient with bilateral lung metastases derived from a uterine leiomyosarcoma are shown below as an example (Fig. 3.1 and Fig. 3.2).
In addition to the simplified screen shots, the coarse grid of the reformatting is of particular note. Over the course of the years, a computer-assisted approach was developed that allowed automatic segmentation of the lungs, the branching structures of the bronchial tree as far as the subsegmental level and interactive segmentation of the pulmonary blood supply. The method of reformatting and 3-D visualization has also proved to be robust in cases of central tumour localization with potential tumour invasion of larger vessels or central mediastinal structures. The differentiation of the lobes of the lungs is carried out automatically; the approximation of the individual lung segments is also possible with some manual interaction. The segmentation masks of the delimited regions form the basis for a quantitative analysis of functional CT data, such as lung volumes, emphysema index or mean lung density. The portion of the lung requiring resection can be calculated prior to surgery using this instrument and the expected post-operative loss of function can be approximated. Conventional methods (multiplanar reformatting, volume rendering) can be complemented by colour coding and anatomical reformatting of the data [Dicken et al. 2003], i.e., the data are not depicted on flat sections, but based on their distance to the pleura. This means that more superficial changes, e.g., pleural mesotheliomas, defects in the thoracic wall or osseous changes in the bony thorax can be depicted in a way that resembles the surgical situation (Fig. 3.3). Volumetric and metric calculations permit a precise statement on tumour thickness or distance of the tumour from its surrounding structures (thoracic wall). Furthermore, 3-D visualizations of the hilum of the lung or of regions around the lesions can be produced, in order to depict the morphological and topographical relationship of the tumour to the bronchial and vascular tree with colour coding. All pulmonary areas, including emphysematous portions, can be depicted separately in three dimensions in variable detail and resolution. All 3-D scenes can be interactively rotated or enlarged by the observer and set to obtain an optimum view. Fig. 3.3 The conventional display of CT data on flat sections provided by the data set is complemented here by the depiction of all image points that, similar to the layers in an onion, are all at a constant distance to the surface of the organ. On the left, you can see a superficial depiction of such a layer, located at approx. 10 mm from the internal thoracic wall. Lung density is colour coded. On the right, you can see the corresponding conventional view.
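The paragraph above notes that the lung portion to be resected and the expected post-operative loss of function can be approximated from the segmentation results. A minimal illustrative sketch of such an estimate is given below, assuming a simple volume-proportional model; the function name and all numeric values are hypothetical and are not taken from the MEVIS software.

```python
# Minimal sketch: estimate post-operative lung function from segmented volumes,
# assuming function scales with the remaining functional lung volume.

def predicted_postop_function(preop_value, resected_volume_ml, total_lung_volume_ml):
    """Scale a pre-operative function value (e.g. FEV1 in litres) by the
    fraction of lung volume expected to remain after resection."""
    remaining_fraction = 1.0 - resected_volume_ml / total_lung_volume_ml
    return preop_value * remaining_fraction

# Example with made-up numbers: a lobectomy removing 1100 ml of a 5500 ml lung
preop_fev1 = 2.4  # litres, hypothetical patient
ppo_fev1 = predicted_postop_function(preop_fev1, 1100.0, 5500.0)
print(f"Predicted post-operative FEV1: {ppo_fev1:.2f} l "
      f"({100 * ppo_fev1 / preop_fev1:.0f}% of the pre-operative value)")
```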
Lungs, lobes, segments and the bronchial system
The number of publications on the identification of anatomical lung structures rose dramatically after the introduction of MSDCT. Examples of bronchial tree segmentation and analytical methods are found as early as 1999 in a publication by Preteux and later on in publications by Aykac and Tschirren [Preteux F et al. 1999, Aykac D et al. 2003, Tschirren J et al. 2002]. A variety of algorithms for the automatic segmentation of the lung have been published by Kitasaka, Hu, Leader, Kuhnigk and Sluimer [Kitasaka T et al. 1999, Hu S et al. 2001, Leader JK et al. 2003, Kuhnigk JM et al. 2003, Sluimer I et al. 2005]. A first method on the segmentation of the lobes of the lungs, an algorithm based on fissure detection and knowledge from anatomical atlases, was presented by Zhang [Zhang L et al. 2003]. An alternative method used the Voronoi division of the lung starting at the lobar bronchi for a coarse estimate of the lobes [Zhou et al. 2003]. A method that includes both the areas of vascular and bronchial supply and fissure formations, and thus permits a more rapid interactive refinement of the results, is an integral part of a software solution for parenchyma analysis developed by Fraunhofer MEVIS [Kuhnigk JM et al. 2005]. Furthermore, a method for the approximation of lung segments which are not delimited by fissures is also available. It was introduced by Krass in 2000 [Krass S et al. 2000] and is based on a similar approach to Zhou's lung lobe segmentation algorithm. The method has since been further refined, as described in [Welter et al. 2011]. An expanded variant of the lobe segmentation approach developed by Kuhnigk was introduced by Ukil [Ukil S et al. 2005 and 2006]. Methods for the qualitative and quantitative analysis of the lung parenchyma had already been developed in the 1990s by Kalender, Uppaluri and Coxson [Kalender WA et al. 1990, Uppaluri R et al. 1997, Coxson HO et al. 1999] and were further developed over the course of the years [Blechschmidt RA et al. 2001, Hara T et al. 2003, Xu Y et al. 2006]. A first system for quantitative analysis under inclusion of a regional division of the lung, previously achieved through segmentation, was presented by Reinhardt in 2001, whereby Zhou's fissure-based lobe segmentation was used and no further division into subsegments could be undertaken [Reinhardt JM et al. 2001]. There is currently no generally available software for CT-based division of the lung into lobes and segments. There are the following additional challenges when segmenting the supply systems, especially in the case of the lungs: in the segmentation of the bronchial tree, on the one hand, the structures that take a horizontal course in the internal regions are rendered very bright and, on the other hand, the centres of the thinner bronchi that take a diagonal course through the volume are no longer coherent within the meaning of neighbouring relationships in the voxel grid in the discrete reconstruction [Park W et al. 1998, Prêteux F et al. 1999, Ley S et al. 2002]. The focus of current research is the stabilization of automatic segmentation methods that produce a satisfactory segmentation result without interaction from the user [Tschirren J et al. 2005b, Schalthölter T et al. 2002, Hoffmann EA et al. 2003].
Vascular system
The segmentation of pulmonary arteries and veins plays an important role in pre-operative image analysis, in addition to segmentation and the functional units of the lung. This also permits the quantitative determination of morphological parameters for these tree-like supply systems, such as, e.g., diameter and cross-sectional area of vessels, curvature, as well as length or volume of a section. Methods for the segmentation of tubular structures can be broadly divided into two categories. The methods in the first category are based on the determination of the path between two pre-determined end points and subsequently carry out the segmentation on reformatted layers orthogonal to the course of the path [Frangi AF et al. 1999, Hernandez-Hoyos M et al. 2002]. During this process, the calculation of the path results from the minimization of a cost functional, into which, on the one hand, external parameters, such as, e.g., voxel intensity or local contrast, and on the other hand, internal parameters such as, e.g., path length or curvature are fed. This "functional" approach can be efficiently realized and has the advantage that the segmentation of the cross-sections is mainly carried out in 2-D layers.
On the other hand, there is no guarantee that the resultant path actually runs parallel to the structure, as length and curvature are taken into consideration in path minimization, which results in non-orthogonal cross-sections in the region of branching and strong curvature, in particular, and thus in erroneous values for diameter and cross-sectional area. Furthermore, these methods are often not suitable for interactive expansions aimed at the improvement or correction of an initial segmentation result. In contrast, the methods in the second category largely take a 'geometric' approach, i.e., first a 3-D segmentation is carried out on the structure of interest, then its mid-line is determined and, finally, the vessel is measured on reformatted layers that are orthogonal to the mid-line. In this procedure, the mid-line is determined with the aid of skeletonization algorithms that can achieve high levels of accuracy [Lam L et al. 1992, Selle D 1999, Borgefors G et al. 2001]. An advantage of this approach is that the user can verify and, if necessary, interactively modify the 3-D segmentation result, prior to calculating the mid-lines. Both approaches can, of course, be combined. For example, it may be meaningful to conduct another, further refined 2-D segmentation of the lumen based on the vascular cross-sections obtained using a geometric approach. During segmentation of the vascular systems, no satisfactory solution has been found to date for the automatic separation of the venous and arterial trees, in particular, which is, however, a prerequisite to producing 3-D depictions that have been coloured analogously to textbook illustrations for each patient for the planning of surgery.
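A hedged sketch of the 'geometric' approach described above is given below: given an already segmented binary vessel mask, a centreline is obtained by skeletonization and local radii are read off the Euclidean distance transform. This is an illustration only, not the Fraunhofer MEVIS implementation; it assumes SciPy and a recent scikit-image whose skeletonize function accepts 3-D input.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize  # assumes scikit-image with 3-D support

def centreline_and_radii(vessel_mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Return centreline voxel coordinates and an estimated local radius (mm)
    per centreline voxel for a 3-D binary vessel mask."""
    # Distance (mm) from every foreground voxel to the nearest background voxel
    dist_mm = ndimage.distance_transform_edt(vessel_mask, sampling=voxel_spacing_mm)
    skeleton = skeletonize(vessel_mask.astype(bool))
    coords = np.argwhere(skeleton)
    radii = dist_mm[skeleton.astype(bool)]
    return coords, radii

# Toy example: a straight synthetic 'vessel' of radius ~3 voxels along z
z, y, x = np.mgrid[0:40, 0:20, 0:20]
toy_vessel = ((y - 10) ** 2 + (x - 10) ** 2) <= 3 ** 2
coords, radii = centreline_and_radii(toy_vessel)
print(coords.shape[0], "centreline voxels, mean radius ~", round(float(radii.mean()), 2), "mm")
```

Separating arterial and venous trees, as the text notes, remains unsolved in general; a sketch like this only measures whatever single tree the input mask contains.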
Lung tumors and mediastinal lymph nodes
The assessment of the size, shape, location and number of lung tumours plays a substantial role in radiological diagnostics and in the planning of surgical interventions. The locational relationship to structures in the lung, in particular, can provide important information on whether a tumorous process is locally restricted to an individual lobe or segment of the lung or whether it extends across multiple lobes or segments, which has a decisive effect on the extent of the resection. In this case, the most important tasks to be fulfilled by the algorithm are the segmentation and the quantification of such masses in the lungs. Research into the segmentation and volumetry of round lung lesions in CT data has, to date, been mainly motivated by CT screening studies for the early recognition of lung cancer. A corresponding emphasis has been placed on the reproducibility of the quantification of small round lesions in the algorithms developed to date. In many cases, these have an almost spherical shape and are generally only in slight contact with the pleura or the vessels. In the approach taken by Kostis [Kostis et al. 2003], a semi-automatic classification into one of very few round lesion models (isolated, close to pleura, with connection to vessels) is made prior to segmentation. After the subsequent initial segmentation with fixed threshold values, the connected, highly dense structures are severed by morphological operations. In 2005, Okada introduced an automated method for the approximation of so-called 'ground glass opacities' (GGO) to ellipsoids [Okada K et al. 2005]. Fetita also used an approach involving an initial segmentation with fixed threshold values and the subsequent application of morphological procedures [Fetita et al. 2003]. However, in this case, global information was also used, while the other methods only based their calculations on a section of the data set. The first commercial software packages that are intended for lung-screening examinations have been introduced onto the market since 2002 (R2, Siemens, GE, Philips). What these tools have in common is that the segmentation of larger and/or more complex tumors with substantial contact to the pleura or vessels is inadequate in many cases and there are often no options available to the user for making corrections, or these options are difficult to operate, as all methods to date for segmentation have been primarily developed for small round lesions.
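The threshold-plus-morphology pipeline described for small round lesions can be sketched as follows. This is a simplified illustration in the spirit of the cited approaches, not a reimplementation of them; the threshold of -400 HU and the structuring-element size are assumptions chosen for readability.

```python
import numpy as np
from scipy import ndimage

def segment_nodule(ct_hu, seed, threshold_hu=-400, opening_radius=2):
    """ct_hu: 3-D array of Hounsfield units; seed: (z, y, x) index inside the nodule."""
    solid = ct_hu > threshold_hu                          # dense tissue vs. aerated lung
    ball = ndimage.generate_binary_structure(3, 1)
    ball = ndimage.iterate_structure(ball, opening_radius)
    opened = ndimage.binary_opening(solid, structure=ball)  # sever thin vessel bridges
    labels, _ = ndimage.label(opened)
    seed_label = labels[tuple(seed)]
    # seed_label == 0 means the seed voxel was removed by the opening;
    # a real tool would fall back to a nearby foreground voxel here.
    return labels == seed_label if seed_label > 0 else np.zeros_like(opened)

def nodule_volume_ml(nodule_mask, voxel_spacing_mm):
    voxel_ml = float(np.prod(voxel_spacing_mm)) / 1000.0
    return nodule_mask.sum() * voxel_ml
```

As the text points out, exactly this kind of pipeline breaks down for large, complex tumours with broad pleural or vascular contact, which is why interactive correction options matter for surgical planning.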
Quantitative analysis of lung parenchyma
The standard method for functional lung parenchyma analysis is selective perfusion scintigraphy. Computed tomography has gained in importance in emphysema diagnostics due to the low sensitivity of conventional x-rays for the detection of emphysema and the low sensitivity of lung function tests for the detection and quantification of early forms of emphysema, in particular [Grosse C and Bankier A 2007]. In addition to the quantification of emphysematous changes in lung parenchyma, computed tomographic scans (CT) also permit determination of the severity of the disease. Recognized standard parameters are the mean lung density (MLD) and the pixel index (PI) or - as a synonym - the emphysema index (EI). The EI can be determined globally or regionally [Blechschmidt, Achenbach]. Thin layer images (1-2 mm layer thickness) and the use of a high-resolution algorithm are prerequisites to optimum radiological evaluation. However, constantly improving technologies (4-, 16-, 64-slice CT) also require a large number of images. A data set comprising 300-600 images is usually produced, depending on the size of the patient's thorax, which corresponds to 150-300 MB storage capacity. However, this quantity of data is of limited practicality for routine clinical use. There are different research software applications for quantitative analysis of the parenchyma, e.g. MeVisPulmo3D [Kuhnigk JM et al.
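The two standard parameters named above are simple to compute once a lung (or lobe) mask is available. The sketch below uses the widely used convention of counting voxels below -950 HU for the emphysema index; that threshold is a common assumption, not a value taken from this chapter.

```python
import numpy as np

def parenchyma_scores(ct_hu, lung_mask, emphysema_threshold_hu=-950.0):
    """Return (mean lung density in HU, emphysema index in percent) for a mask."""
    lung_values = ct_hu[lung_mask]
    mld = float(lung_values.mean())
    ei = float((lung_values < emphysema_threshold_hu).mean()) * 100.0
    return mld, ei

# The same call works regionally: pass a lobe or segment mask instead of the
# whole-lung mask to obtain lobar MLD / EI values, as described in the text.
```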
Visualization
The visualization of complex medical structures is a current challenge in the field of computer graphics. While algorithms for rapid volume rendering are now part of the functional scope of modern radiological units, to date, special methods for the accentuation of relevant information in the image data generally only exist in the context of prototypic research applications. The depiction of spatially complex anatomical situations, which includes a variety of diagnostic parameters and can be evaluated intuitively by the surgeon using it, must be realized if a precise reproduction of the acquired image data is to be central to the purpose of radiological diagnostics. In the case of spatial visualizations, in particular, the targeted exploration of individual structures, i.e., the furtherance of visibility and recognition of individual partial objects in a complex spatial scene, generally requires the use of special image-enhancement techniques. Above all, the development of non-photorealistic enhancement techniques that support an intuitive assessment of complex spatial scenes has a high potential for the efficient implementation of such visualization tasks [Strothotte T et al. 2002, Ibanez L et al. 2005]. For example, the targeted use of contour lines, transparency and shadow projections can substantially improve the visibility of the enhanced objects and the recognition of the shape of the object [Preim B et al. 2002]. The targeted, task-focused use of such enhancement techniques for the visualization of medical data is a topic in current research [Ibanez L et al. 2005]. The analysis and visualization of CT data and the production of dynamic image sequences was carried out by Fraunhofer MEVIS, Bremen. In connection with the tasks of planning a lung intervention, the visualization of bronchial and vascular trees is of particular importance. A correct illustration of the branching structure and the spatial relationships between vessels and supplied parenchyma is also important in this case, as is the accentuation of the decrease in vascular diameter towards the periphery. Conventional visualization techniques, such as volume or surface rendering, reach their limits of applicability in this case, partially due to the fact that the resolution of the image data is too low and partially because of interference from artefacts due to noise and other distortions. Specially developed software permits a rapid and robust visualization of the vascular and bronchial trees. This is based on a surface depiction of the vessels through geometric filtration based on their mid-lines and radii [Ritter et al. 2006, Hahn et al. 2001].
The bronchial tree and the pulmonary vascular trees can be clearly illustrated with a very smooth and natural appearance using these methods. Triangulated surface models are often used for the visualization of segmented objects, where the surface of an object that is to be depicted is approximated using a network composed of triangles. This type of visualization provides a diversity of options, such as, for example, the transparent depiction of surfaces or the simultaneous illustration of multiple objects that penetrate each other, and has a long tradition of use in the field of 3-D computer graphics. For the purposes of surface visualization, Fraunhofer MEVIS has developed a flexible and modular library for the production, modification and visualization of surface models based on so-called winged edge meshes (WEM), which is characterized by high efficiency and quality and is particularly well suited to interactive applications. The speed at which a surface model can be depicted is essentially determined by the number of triangles used. Although the quality of the visualization increases with the number of triangles, it can become so slow that smooth interactions are no longer possible. The number of triangles can be drastically reduced using a special, locally adapted filtering algorithm, without the quality deteriorating to any great extent. It is even possible to obtain a faster and also more precise visualization by simultaneously increasing the sampling rate. Besides isosurface representation methods, (direct) volume rendering is also used for 3-D visualization. Volume rendering is the visualization of a three-dimensional data set through the projection of the individual voxels onto an image plane. In this process, the transparency and coloration of the voxels is determined by the grey values and a transfer function. Volume rendering is a reliable and established technique for the depiction of radiological images as this type of visualization does not require segmentation. An illumination model and a multi-dimensional transfer function are often used in modern volume rendering, which include additional attributes for the mapping of the voxel grey values onto colours and transparencies and thus make special effects possible. For example, vessels or tumours can be highlighted using this technique, with the surrounding areas of the image being illustrated as a transparent silhouette.
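The step from a segmentation mask to a triangulated surface model of the kind discussed above can be sketched with the marching cubes implementation in scikit-image; this is an assumption for illustration, since the WEM library mentioned in the text is an internal MEVIS development. Mesh simplification/decimation, as described for interactive frame rates, would follow as a separate step.

```python
import numpy as np
from skimage import measure  # assumes scikit-image is available

def surface_from_mask(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Return vertices (in mm) and triangle index list for a binary mask."""
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=voxel_spacing_mm)
    return verts, faces

# Toy example: a sphere of radius 8 voxels
z, y, x = np.mgrid[0:32, 0:32, 0:32]
sphere = ((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2) <= 8 ** 2
verts, faces = surface_from_mask(sphere)
print(len(verts), "vertices,", len(faces), "triangles")
```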
3-D reconstruction for the planning of interventions on central tumours
To evaluate the potential of CT-based segmentation and 3-D reconstruction of morphological structures of the lung for thoracic surgery planning, 40 cases were analysed. The surgical approach was decided on by the thoracic surgeon undertaking the treatment, while the pre-operative tumour classification was mainly carried out by colleagues in radiology. All 40 CT examinations were initially evaluated without knowledge of the 3-D reconstruction analysis, which is the common procedure in clinical routine. The optimal, oncologically correct surgical intervention was planned. The peri-operative risk assessment with reference to the selected intervention was carried out, also taking into consideration the patient's pulmonary reserve and general condition. Finally, the oncological approach to resection and the extent of resection were determined individually for each patient. 17 patients were staged as inoperable and not scheduled for surgery. The scheduled surgical strategy, which was based on 2-D slices only, was documented. In a second step, the 3-D reconstruction analysis was taken into consideration as an additional information and assessment device. In some cases the knowledge of the 3-D analysis led to a change of the planned surgical procedure. The planned surgical procedure based on the 3-D reconstruction as an additional planning device was documented. The scheduled procedures were then compared with the surgical interventions that were actually carried out. The assumption of a primarily inoperable situation (n = 17 patients) based on the 2-D slices was confirmed by the 3-D reconstruction analysis in only 10 of 17 cases. Finally, 30 patients were scheduled for surgery. The surgical approach in the 30 patients with operative treatment, selected based on the 2-D slices, corresponded to the therapy that was ultimately carried out in 14 patients (46.7 %, Table 5.1). Ultimately, the final surgical approach was only predicted in just under half of the patients based on the usual 2-D analysis. The risk analysis after the 3-D reconstruction had been viewed revealed a correct predictive outlook in 25 cases (83.3 %) (Table 5.2). We were not able to classify the operability in 3 patients in either the 2-D or the 3-D analysis, so we decided to perform a surgical exploration. 4 of the 17 patients - not resectable after 2-D analysis - were predicted to be resectable with curative intention after using the 3-D based analysis. In 3 of these 4 patients a curative resection could be performed; the 4th patient turned out to be not resectable despite the 3-D analysis. After neo-adjuvant therapy (stage IIIB), 3 patients became operable (stage IIIA). This stage improvement with a corresponding surgical approach was correctly predicted both based on the 2-D and on the 3-D analysis. No advantage of the 3-D reconstruction was determined in patients after neo-adjuvant therapy with reference to the risk analysis. A comparison of the 2-D and 3-D proposals for surgery reveals 25 congruent and 15 divergent results for all cases (n = 40) and 15 congruent and 15 divergent cases, respectively, for those patients operated on. The analysis of the divergent results (n = 15) revealed the following constellation: out of 15 divergent results, 13 (87 %) had been properly corrected by the 3-D analysis, i.e., the initially incorrect prediction was improved to a correct prediction through the pre-operative use of the 3-D depiction in 13/15 cases. In only 2 cases was the 2-D analysis found to be correct, compared with a subsequent incorrect 3-D analysis.
In these cases, the correct 2-D analysis was erroneously changed by the 3-D reconstruction. Patient 1 exhibited status post atypical resection of the left upper lobe with pulmonary metastasis. Given a metastatic relapse in the left lower lobe, the probable operation was regarded as pneumonectomy of the residual lung based on 2-D CT slices. The 3-D representation favoured a lower lobe resection under retention of the partially remaining upper lobe. Pneumonectomy of the residual lung had to be carried out intra-operatively, such that the initial 2-D assessment was confirmed. Patient 2 exhibited a left-central, small-cell bronchial carcinoma with broad-based contact to the aortic arch. While the 2-D analysis indicated inoperability due to a suspected infiltration of the aortic arch, based on the 3-D analysis the observers thought it would be possible to conduct a resection just within healthy tissue. However, intra-operatively, the tumour infiltration was revealed as even more extensive than on the pre-operative images. The intervention had to be abandoned as an exploration.
In 3 patients, neither the 2-D nor the 3-D analysis produced a correct analysis of the final surgical strategy. The extent of the resection was underestimated in 2 patients, while it was possible to spare more parenchyma during surgery in 1 patient, when compared with what had been planned based on the analyses.
Outlook: Fusion of different modalities
In this outlook we discuss the potential of combining CT information with modalities from nuclear medicine. This discussion serves as a roadmap for upcoming research in the area of image-based preoperative risk assessment and planning of surgical interventions in the lung.
Functional imaging (SPECT) and data fusion SPECT/3-D
In addition to the depiction of the morphology, the functionality of the original and remaining residual lung tissue is of central interest for a large thoracic resective intervention. The functionality of the lung parenchyma can be assessed with lung function or exercise tests (spirometry, spiroergometry or body plethysmography). At the university hospital Lübeck, selective digital perfusion scintigraphy SPECT (Fig. 6.1.3) is used as the functional imaging method. The examination technique permits a graphic illustration of functional aspects in individual organs. Based on the principle of scintigraphy, a radiotracer is administered intravenously to the patient. The tracer used in lung perfusion exams is constructed to be fixed in the lung capillaries during first pass. The radionuclides (Tc-99m) that are used emit gamma radiation that can be detected and measured using gamma cameras. One or more gamma cameras rotate around the body and detect the emitted radiation from different directions in space. The distribution of the radiotracer in the lung depends on the amount of blood passing through the different parts of the lungs. It can be deduced from these planar projections using inverse Radon transformation and the distribution can then be depicted in the form of computed tomographic sections through the body. This allows the production of a three-dimensional image of lung function. The substance, Tc-99m-labelled macroaggregated albumin, that is used for perfusion scans has no medicinal effects or side effects as it is used in tiny quantities that produce only small doses of gamma radiation. After about 36 h, 99 % of the material has decayed or been excreted. The levels of radiation exposure are substantially smaller than for an x-ray examination. Beside perfusion scans, ventilation scans could be performed using radioisotopes of noble gases. As handling of radioactive gases is demanding, perfusion scans are preferred at many sites. A relatively novel technique is the combination of SPECT and CT to obtain SPECT quantification on a lobar basis. Both scans are therefore used during the examination: CT produces images of the body's morphology, SPECT depicts the perfusion. Superimposed SPECT and CT images then permit a precise local determination of poorly perfused areas. As CT data cannot be acquired in the available SPECT scanner, we undertook an external fusion of the available image data and subsequent 3-D visualization. In a feasibility experiment, a CT data set (Clinic for Radiology, University Hospital Schleswig-Holstein (UKSH), Campus Luebeck) and the tomographic SPECT perfusion data (Clinic for Nuclear Medicine, UKSH, Campus Luebeck) were fused using user-guided registration (rigid + scale) (Fig. 6.2), visualized in 3-D externally (Fraunhofer MEVIS, Bremen), and sent back online. Along with the renderings, a quantitative relative and absolute measurement of lung perfusion per lobe and emphysema scores based on the CT data were computed, employing the lung lobe segmentation performed at MEVIS (see Section 5.4).
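Once the SPECT volume has been registered to and resampled on the CT grid, the per-lobe perfusion quantification mentioned above reduces to summing tracer counts within each lobe mask. The sketch below is an illustrative calculation under that assumption, not the MeVisLab prototype used in the study.

```python
import numpy as np

def relative_perfusion_per_lobe(spect_counts, lobe_labels):
    """Return {lobe_id: fraction of total lung perfusion} from co-registered
    SPECT counts and an integer lobe label image (0 = background)."""
    lung = lobe_labels > 0
    total = spect_counts[lung].sum()
    result = {}
    for lobe_id in np.unique(lobe_labels[lung]):
        lobe_sum = spect_counts[lobe_labels == lobe_id].sum()
        result[int(lobe_id)] = float(lobe_sum / total)
    return result

# Such per-lobe fractions can then be combined with the planned resection,
# e.g. predicted post-operative perfusion = 1 - fraction of the lobe(s) removed.
```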
Figure 6.1 shows the corresponding visualization. Until recently, mostly only anatomy and topography have been illustrated prior to thoracic surgery using MSDCT. In addition to this, the depiction of functional aspects also plays an important role in the planning of surgery. So far, scintigraphy of the lungs to illustrate perfusion and, if necessary, ventilation has been established in clinical routine, as the determination of vessel diameters or the distribution of pulmonary density using CT does not permit secure conclusions with reference to perfusion or ventilation. More recently, positron emission tomography (PET) has become more and more established as an additional diagnostic tool in everyday clinical routine. PET is an imaging method used in nuclear medicine to study the distribution of small doses of radioactively labelled substances in the organism and to thus depict biochemical and physiological functions. In this context, particular interest is in its use in investigating the activity of a suspected tumorous structure, in order to differentiate between scar tissue, atelectasis (non-ventilated lung appearing with similar density to the tumour in CT) and tumour tissue in the search for tumour extent, metastases, or within the framework of a staging examination following on from (neo-)adjuvant therapy to assess tumour response to chemotherapy. The fusion of an individual PET finding with 2-dimensional CT images on a light box is difficult and a task that is almost impossible to accomplish for a non-radiologist or nuclear medic, in particular. Therefore most recent PET installations combine a PET and a CT scanner. The CT images from a PET/CT, however, are commonly not of diagnostic quality. The option of fusing FDG-PET and diagnostic CT images with high 3-D resolution was also assessed within the framework of the current study.
For the PET scan, a radiopharmaceutical is administered intravenously to the patient at the start of the examination. For oncology applications mostly 18-FDG is used, a radiolabelled sugar that has high uptake in tumours and other metabolically highly active areas (e.g. the brain or areas of inflammation). In contrast to SPECT, PET uses radionuclides that emit positrons (β+ rays). The spatial distribution of 18-FDG within the body can be deduced from the temporal and spatial distribution of the recorded decay events, and a series of cross-sectional images can be calculated. Furthermore, the distribution of the tracer in the volume under investigation can be precisely quantified - which is not as readily possible with SPECT - as the absorption of the photons being measured depends only on the thickness of the irradiated tissue and not on the origin of the photons. Oncological PET uses metabolically active glucose as the radionuclide (also called radiopharmaceutical or tracer), in which a hydroxyl group on C6 of the sugar molecule is replaced by the radionuclide F-18. FDG-6-phosphate cannot undergo any further metabolism after phosphorylation and accumulates in tissue ('metabolic trapping'). Metabolically active processes taking place in tumours and metastases can thus be depicted and distinguished from non-metabolically active structures like scar tissue. In PET-CT, the patient is passed through the two detector rings for CT and PET, one after the other (in one housing for the equipment). The images that are produced are automatically fused in the computer. In contrast to a conventional thoracic CT, a so-called low-dose CT scan is sufficient for a PET-CT. The exposure to radiation for a pure PET examination using F-18 is about 4 mSv and is thus in the range of a computed tomography of the thorax. For a feasibility study, the digital PET images were fused to the high-resolution diagnostic CT data externally (Fraunhofer MEVIS, Bremen) in order to create 3-D visualizations incorporating the functional PET information. The CT data (Clinic for Radiology, University Hospital Schleswig-Holstein (UKSH), Campus Lübeck) and digital PET images (Clinic for Nuclear Medicine, UKSH, Campus Luebeck) were sent separately online, fused with user guidance in dedicated MeVisLab software prototypes (Figs. 6.2.1 and 6.2.2) and rendered in 3-D externally (Fraunhofer MEVIS, Bremen); the data structures allowing interactive 3-D visualizations were then sent back to Lübeck online. The advantages of a high-dose CT became clear: only with high-resolution CT data will an acceptable image quality be feasible for 3-D visualizations. They provide a greatly improved fused image quality when compared with a low-dose CT image. However, as PET-CT is a rather expensive imaging method, it is not currently available across the board in Germany.
Limits of 3-D reconstruction
The basic prerequisite for a subsequent optimal and meaningful 3-D reconstruction is the initial CT. Standard CT quality (5 mm sections) does not permit adequate 3-D reconstruction. Distances between layers that are too great or CT layers that are too thick result in 3-D images of poor quality containing little information and possibly in inaccurate results of software-based segmentations. In turn, if no, or too little, contrast agent is administered, this results in a deficient depiction of the intra-thoracic vascular supply. The initial CT must therefore fulfil the following requirements: a high-resolution computed tomography scan (minimum 4-slice), maximum layer thickness 2 mm, maximum distance between layers 1.5 mm and administration of contrast agent for vascular depiction. The limits of 3-D imaging are identical to those of an axial 2-D CT: the differentiation between a solid tumour and a post-stenotic atelectasis is just as impossible as the secure delimitation of a point of contact between a tumour and a potential infiltration of a vascular wall or solid mediastinal organs. Even so, the image analysis and 3-D reconstruction constitute an enormous gain in quantitative and qualitative information for the surgeon. In addition to the quantitative depiction and calculation of tumour size and tumour volume, the lobe volumes and calculations of distances, it is mainly the qualitative advance that is of importance when compared with an axial 2-D CT. The possibility of segmentation permits the selective observation of tumour, vascular system or bronchial tree and is vastly superior to the visual depiction in 2 planes. The method of anatomical 3-D reformatting currently provides the best possible imaging for pre-operative risk analysis in complex thoracic interventions.
In the medium term, computer analysis should also be used for intra-operative detection of lung tumours and for navigation during VATS. However, prior to this, the problems that are specific to the lungs, e.g. changes in locational anatomy due to the peri-operative atelectasis, must be clarified and a full understanding must be gained.
Summary
The segmentation and 3-dimensional visualization of thoracic morphology based on CT is a novel and highly promising method for pre-operative imaging and risk analysis for central lung tumours. This allows a visualization of the tumour in combination with colour-coded lobe and segment association and the anatomical relationship to neighbouring structures. Furthermore, the depiction and calculation of lung volumes and emphysematous portions permits estimates of the expected residual functionality of the lung. The 3-dimensional image that can be moved in all planes, in combination with the possibility of anatomical segmentation, is easier and simpler for the observer to understand than the 2-D scans used to date with/without the associated radiological information.
Acknowledgments
This work has been financially supported by grant DFG PE 199/20-1 of the German Research Foundation (DFG). The referred software is based on contributions of many current and former researchers at Fraunhofer MEVIS. It is also a result of many fruitful discussions with a variety of thoracic surgeons throughout Germany within and beyond the frame of the ongoing project.
Fig. 3.1 Axial CT of a 54-year-old patient with bilateral lung metastases. As an example, the largest metastasis is shown in the upper lobe of the lung (left image). Reformatting and volumetric evaluation of the metastasis (right image).
Fig. 4.2.1 Isolated 3-D reconstruction of the pulmonary arterial (left) and the pulmonary venous (right) systems. Anterior view.
Fig. 4.2.2 Combined depiction of the vascular and bronchial systems. Anterior view.
Fig. 4.4.1 Pre-operative 3-D visualization of a patient with extensive lung emphysema in addition to lung cancer of the left lower lobe. View from anterior (A), posterior (P), feet (F) and right side (R).
Fig. 6.1.1 Three-dimensional visualization for a patient with severe lung emphysema. Software fusion of a SPECT data set with CT data using ManualRegistration in MeVisLab. Ventilated areas are highlighted in colour, non-ventilated areas are depicted semi-transparently.
Fig. 6.1.2 Reformatted data after fusion with SPECT. Graduated illustration of the ventilation density (dark = reduced ventilation, light = good ventilation).
Fig. 6.1.3 Scintigram of the lungs after i.v. administration of a radionuclide (99mTc). From a higher number of such projection images a 3-D SPECT perfusion image can be reconstructed.
Fig. 6.2.1 67-year-old patient with a suspected local relapse of an adenocarcinoma in the right lower lobe. Status post primary transthoracic radiation therapy with initial functional inoperability (tumour highlighted in yellow).
Fig. 6.2.2 PET-CT image fusion. Clear superimposition of metabolically active areas on the F18-PET (black grid lines) with the tumour (yellow). | 2016-01-22T01:51:31.857Z | 2011-10-03T00:00:00.000 | {
"year": 2011,
"sha1": "1b083b2b17bc2ee123fc47eca44d36339b5d62f0",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/20935",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cf3d0a0fc6a7f85d387cd5d8b5d48ba823fc939b",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33852368 | pes2o/s2orc | v3-fos-license | Performance Analysis of a PV / FC Hybrid System for Generating Electricity in Iraq ’ s Remote Areas
A reliable electrical energy supply is a prerequisite for improving the standard economic and quality of life levels in a country. As is the case in many countries, Iraq is home to a collection of remote villages. Since it is uneconomical to connect these villages to the existing grid, the installation of standalone electrical power generators has become common practice. As a result, diesel stand-alone power generators see widespread use in these remote locales, which, whilst fit for their intended purpose, unfortunately suffer from several drawbacks, including instability in regards to everyday oil prices and a number of environmental issues. The implementation of a PV/FC hybrid power system could be one potential alternative to help solve these problems. Therefore, this paper will present PV/FC system control strategies alongside information relating to the performance of such system components, based on a case study that was conducted in Al-Gowair, Iraq. This study is especially important in terms of envisioning the future energy supply needs of Iraq. The HOMER simulation results showed that by using the proposed control strategies and suggested components of a PV/FC system, it was able to produce a satisfactory outcome.
Background
There are many remote villages that are located far away from the utility grid in Iraq. Connecting such villages to the existing grid is certainly both impractical and inefficient. Therefore, in order to fulfil the electrical energy demand in those particular villages, the installation of standalone generators is a normal practice in Iraq.
However, petroleum costs keep increasing, with the fluctuations in price often being unpredictable. As such, the use of diesel as a fuel source for standalone generators in remote areas can no longer be considered reliable. In addition, since its consumption releases significant pollutants, such as CO2, CO, NOx and SO2, diesel is unfriendly to the environment [1,2].
As a result, the best option for remote areas would be to install standalone electrical generators, which utilise a renewable energy supply.Under similar conditions, there are some renewable energy sources and technologies that are available for use, which have already been applied, as shown in Table 1.
Country | Renewable energy applied | Capacity
(Table 1 residue; only a fragment of the first entry is recoverable: Sine Moussa Abdou, Thiès region, Senegal [3].)
One potential renewable energy source is a hybrid photovoltaic (PV) and fuel cell (FC) system. Regarding the green energy concept, these are both excellent renewable energy sources. PV/FC power plants have been successfully operated in many countries, including Germany, Italy, Finland, Japan, Spain, Saudi Arabia, Switzerland and the USA [7].
In Neunburg vorm Wald, Germany, the system consists of several different PV technologies that range in size from 6 kWp to 135 kWp. Other subsystems include DC/DC and DC/AC converters, DC and AC busbars as well as two electrolyzers of 111 kWe and 100 kWe, which are used to produce 47 m³/h of hydrogen, refrigerating units of 16.6 kWth and three fuel cells, i.e. (1) alkaline of 6.5 kWe and 42.2 kWth, (2) phosphoric acid of 79.3 kWe and 13.3 kWth, and (3) PEM of 10 kWe. Similarly, the Ente Nazionale per le Energie Alternative (the ENEA project) in Italy consists of a PV field of 5.6 kWp, a bipolar alkaline electrolyzer of 5 kW and a tank storage subsystem of 18 Nm³. The control system is based on a Programmable Logic Controller (PLC), which controls many variables such as the temperature of the electrolyzer, the range of the current and the conductivity of water, and it has the ability to stop the system in emergency situations. The fuel cell size is a 3 kW PEM, operating at 72 °C. The two aforementioned examples show that the cost of installing a PV/FC system is higher than the relative installation costs of a diesel generator system. They also indicate that the energy conversion process that takes place through a PV-electrolyzer-storage-FC chain is much more complex than a simple direct load supply. However, the PV/FC system is able to avoid energy surplus losses and can store more energy for longer periods of time [8].
Even though many efforts have been made towards simplifying the design of PV/FC systems, so far researchers have been unable to agree on a definitive optimum design process for such a system. There is a real need to explore optimum sizing of component selection, operational control strategies and performance-related issues in this area.
A feasibility study regarding the application of PV/FC systems in Iraq's remote areas has not yet been carried out. Therefore, this study on PV/FC systems is of particular importance when attempting to envisage the future energy supply needs of Iraq.
Considering the above facts, a PV/FC system for Al-Gowair village has been planned in order to obtain an optimal design, which includes the sizing of components, hourly-based operating states and the operational control strategy. Four main components of a PV/FC hybrid power system will be examined, namely the PV array, the electrolyzer, the hydrogen storage tanks and the fuel cells, as well as other accessories. The stored hydrogen and oxygen furnish the fuel cells in a controlled fashion without interruption when the PV system cannot supply sufficient power to the electrolyzer and accessories during off-solar days.
HOMER Software
The Hybrid Optimization Model for Electric Renewable (HOMER) is software that is used to perform comparative economic analysis on distributed generation power systems. The data inputted into the HOMER software are used to perform an hourly simulation for every possible combination of the components. These inputs are used to rank the systems according to user-specified criteria, such as cost of energy (COE) or capital costs. Furthermore, HOMER simulations can perform 'sensitivity analysis', in which the values of certain parameters, such as the cost of fuel cells, are varied in order to determine their impact on the COE [9].
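The COE figure that the systems are ranked by is essentially total annualized cost divided by annual electricity served. The sketch below illustrates this with the standard capital recovery factor from engineering economics; the interest rate, project lifetime and all cost figures are placeholders, not inputs or results of this study.

```python
def capital_recovery_factor(interest_rate, project_years):
    """CRF = i(1+i)^N / ((1+i)^N - 1), used to annualize an up-front cost."""
    i, n = interest_rate, project_years
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def cost_of_energy(capital_cost, replacement_and_om_per_year,
                   annual_energy_served_kwh, interest_rate=0.06, project_years=25):
    annualized_capital = capital_cost * capital_recovery_factor(interest_rate, project_years)
    total_annualized = annualized_capital + replacement_and_om_per_year
    return total_annualized / annual_energy_served_kwh   # $/kWh

# Illustrative call with made-up figures
print(round(cost_of_energy(600_000, 25_000, 180_000), 3), "$/kWh")
```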
Load Profile
In the first step of the design process, load analysis is performed by considering the electrical loads over an average day. In real-life data, the load profile will vary from day to day due to the size and shape of the load consumption. In simulation, to approach the real conditions, some noise inputs are added to the load data profile. In both cases, day-to-day and time-step-to-time-step, a small random variability of 3% has been applied. Variations due to seasonal effects are also considered as another factor of variation in the load.
The daily consumption of electrical energy in the village during June to October is shown in Table 2. It was assumed that the load of each hour would be reduced by 2 kW during November to February, while the load would be reduced by 3 kW for each hour during March to May. The daily load demand in Al-Gowair village during June to October is shown in Figure 1 [10]. Figure 2 shows the monthly average load profiles [11].
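The way such an hourly load series with 3% day-to-day and time-step-to-time-step random variability can be generated for simulation is sketched below; the baseline hourly profile used here is a placeholder and is not the actual Al-Gowair load data of Table 2 / Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder 24-hour baseline in kW (illustrative shape only)
baseline_hourly_kw = np.array([20, 18, 17, 17, 18, 22, 28, 35, 40, 42, 44, 45,
                               46, 45, 44, 43, 45, 50, 55, 53, 48, 40, 30, 24], dtype=float)

def noisy_year(baseline, daily_var=0.03, step_var=0.03, days=365):
    profile = []
    for _ in range(days):
        day_factor = 1.0 + rng.normal(0.0, daily_var)            # day-to-day variability
        step_factors = 1.0 + rng.normal(0.0, step_var, baseline.size)  # per-hour variability
        profile.append(baseline * day_factor * step_factors)
    return np.concatenate(profile)

load_kw = noisy_year(baseline_hourly_kw)
print(f"Annual energy: {load_kw.sum():,.0f} kWh, peak: {load_kw.max():.1f} kW")
```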
Solar Radiation Resource
The daily sunshine profile as a monthly average over the course of a year is illustrated in Figure 3, which also shows the monthly clearness index of Al-Gowair. All the data were gathered for latitude 34° 9' North and longitude 42° 26' East [12]. It should be noted that solar radiation attains its maximum level during the summer season; the highest level occurs in June, with daily radiation reaching around 7.6 kWh/m²/d. It then falls to its minimum during the winter season, in December, with daily radiation levels of around 2.6 kWh/m²/d. These levels are representative of the solar radiation reliably available in Iraq.
System Description
A hybrid-type power generation system consists of a PV module equipped with a controller used for maximum power-point tracking, an electrolyzer, a pressurized storage tank for hydrogen, fuel cells and a converter. The whole system has been designed using HOMER, as shown in Figure 5. Furthermore, several component prices for this study were obtained from previously published papers [13,14].
PV System
The PV system consists of arrays of solar cells, which are commercially available in many power and voltage ranges. For example, the simplest PV power modules are found in many types of small calculators and wristwatches, while larger PV modules are utilised for electrical water pumps, communication towers, home appliances and so on [11,15]. The utilization of PV systems for electricity generation provides substantial advantages over conventional power sources, for example: (1) PV is environmentally friendly, with no harmful greenhouse gas emissions during the generation of electricity; (2) solar energy is obtained from natural resources, which are free and abundant; (3) the cost of PV is on a fast-reducing track and this reduction is expected to continue for the next several years, so PV panels have a promising future in terms of economic viability; (4) PV panels convert sunlight into electricity in a direct way; and (5) PV panels have very low operation and maintenance costs [16]. In order to cater to the electrical demand in Al-Gowair, the capacity of the PV module was allowed to vary between 0 kW and 280 kW. This information is supplied to HOMER, along with the capital cost, replacement cost, O&M cost, lifetime and tracking system. The details of the input data for the PV module are provided in Table 3.
Fuel Cells
A fuel cell combines hydrogen and oxygen to produce electricity. The basic principle of a fuel cell is illustrated in Figure 6. Hydrogen is fed to the fuel electrode (anode), where it is oxidized, producing hydrogen ions and electrons. In the meantime, oxygen is fed to the air electrode (cathode), where the hydrogen ions from the anode absorb electrons and react with the oxygen to produce water. The difference between the respective energy levels of the anode and the cathode is the voltage per unit cell. The current that flows in the external circuit depends on the chemical activity and the amount of supplied hydrogen, and it will continue to flow as long as there is a supply of reactants (hydrogen and oxygen) [17]. Detailed data of the FC used in the current study are provided in Table 4.
Converter
All PV and FC systems produce DC power, which cannot be directly applied to particular machines and home appliances. To convert the DC power to AC power, an inverter device is required. Since most electrical appliances have no built-in facility for accessing DC power, an inverter is of utmost necessity as part of the overall system. Inverter devices are available with different output wattage specifications [18]. The inverter specifications detailed in Table 5 are the values needed to cater to the load profile.
Electrolyzer
Electrolysis is the process in which an electric current is passed through water (H2O) in order to break the bonds between the hydrogen and the oxygen, yielding hydrogen (H2) and oxygen (O2) separately. In this project, the electrolysis process is used to obtain H2 and store it in a hydrogen tank [19]. A stand-alone electrolyzer system, known as a Proton Exchange Membrane (PEM) electrolyzer, purchased from Proton Energy Inc., was used to obtain a cost estimate of a stand-alone (hydrogen by wire) electrolyzer. The details of the electrolyzer are provided in Table 6. All these components could be improved upon, for example by replacing fittings with welded tube assemblies, in order to achieve further cost reductions [20]. In conventional systems, an electrolyzer produces hydrogen at low pressures (100-200 psi), and the hydrogen is then compressed to a higher pressure for gas storage. In recent designs the delivery pressure is about 2,500-3,000 psi, and it is expected to increase up to 6,000 psi in the near future through the application of improved techniques, which would eliminate the need for compressors. Given this context, it is assumed that a compressor will not be required for the current study [9].
Hydrogen Tank
A tank for storing the hydrogen is a necessary element. The hydrogen storage specification is shown in Table 7, in which the size is allowed to vary from 0 kg to 140 kg. During the 25-year service period, this tank will need to be maintained annually; the cost of operation and maintenance of the hydrogen storage is $15 per kg per annum. The stored hydrogen energy is used to overcome daily and seasonal discrepancies in order to meet the demand for reliably sourced energy.
Results and Discussion
The simulation was carried out based on a 25-year projection period and a 6% annual real interest rate. The aim was to ensure the highest levels of reliability in terms of supply security and efficiency of the stand-alone PV/FC system, and to properly define the operational strategy governing the generating units. This strategy can be summarized by the following four operating states (a simple dispatch sketch in code follows the list): (a) In the first scenario, the PV system supplies electricity directly to the load demand; here the power of the PV system is equal to the load demand (P Load); (PV supply = P Load).
(b) The second scenario occurs if the power of the PV system exceeds the P Load. In this situation, the PV system immediately supplies the P Load and the excess PV power is directed to the electrolyzer in order to produce H2; (PV supply > P Load).
(c) In another scenario the PV system provides less electrical power than the P Load. In this case, the P Load is supplied by both the PV system and the FC; (PV supply < P Load).
(d) Finally, if solar irradiation is unavailable, electricity might be supplied from the FC to the load demand; (PV supply = 0).
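A minimal Python sketch of this control strategy for a single time step is given below. The efficiency figures, the hydrogen heating value and the function and variable names are illustrative assumptions, not the controller actually implemented inside HOMER.

KWH_PER_KG_H2 = 33.3      # assumed lower heating value of hydrogen
ELECTROLYZER_EFF = 0.75   # assumed electrolyzer efficiency
FC_EFF = 0.5              # assumed fuel-cell electrical efficiency

def dispatch_hour(pv_kw, load_kw, h2_kg, electrolyzer_kw, fc_kw, tank_kg):
    """Apply operating states (a)-(d) for one hour and return the updated
    hydrogen inventory, the fuel-cell output and any unmet load."""
    fc_out, unmet = 0.0, 0.0
    if pv_kw >= load_kw:
        # States (a)/(b): PV covers the load; any surplus feeds the electrolyzer.
        surplus = min(pv_kw - load_kw, electrolyzer_kw)
        h2_kg = min(tank_kg, h2_kg + surplus * ELECTROLYZER_EFF / KWH_PER_KG_H2)
    else:
        # States (c)/(d): the fuel cell covers the deficit from stored hydrogen.
        deficit = load_kw - pv_kw
        fc_out = min(deficit, fc_kw, h2_kg * KWH_PER_KG_H2 * FC_EFF)
        h2_kg -= fc_out / (KWH_PER_KG_H2 * FC_EFF)
        unmet = deficit - fc_out
    return h2_kg, fc_out, unmet

# Example: a night hour (PV supply = 0) with a 40 kW load.
print(dispatch_hour(pv_kw=0, load_kw=40, h2_kg=50,
                    electrolyzer_kw=230, fc_kw=60, tank_kg=135))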
Furthermore, experiments were conducted in order to find the optimum size of each decision variable, the possible decision variables being (1) the PV array, (2) the fuel cell generator, (3) the converter, (4) the electrolyzer and (5) the hydrogen storage tank. Figure 7 presents the overall optimization results for the proposed system, showing the system configurations sorted by total net present cost and including a list of different possible sizes for the components. The first row shows the optimum system configuration, meaning the one with the lowest net present cost.
Equipment Optimization Analysis
It was determined that 265 kW of PV output was the optimum size for the potential Al-Gowair PV/FC system. Lower sizes would result in an insufficient energy supply for the required loads, whilst higher sizes would significantly increase the capital cost. The monthly average PV output from January to December is illustrated in Figure 8; the maximum average output appears during the summer season, with the winter season having the lowest average output. A summary of the PV output results, which provides essential information regarding the quantity of PV output for Al-Gowair, is given in Table 8. In order to account for the required load, the optimum size of the FC should be 60 kW. This is enough to supply the necessary load even when the output of the PV drops to zero during the night. The FC output results from the simulation are summarized in Table 9, while Figure 9 contains the data regarding the daily profile of FC output. As can be seen from these data, the maximum FC output occurs mostly at 06:00 PM. The FC output ramps down during the sunlight hours and becomes zero once the PV output reaches the threshold at which it can handle the entire load. In terms of the monthly average output of the FC, shown in Figure 10, the FC output intensifies during the winter season as the PV output goes down due to the reduction in solar radiation. The FC represents an attractive option as an intermittent source of electricity generation because of characteristics such as high efficiency, fast load response, modularity and fuel flexibility. Unlike batteries, the FC does not need to be recharged; it will continuously produce electricity as long as fuel is supplied to the unit. This is in direct contrast to batteries, whose electrodes are permanently consumed during their operating time, which ultimately results in the batteries running out of energy [21]. According to Georgi, L. [22], some advantages of FC are its high electrical and total efficiency potential (much higher than that of a combustion engine), low (zero) emissions, low maintenance and low noise.
For both small and large-scale systems, one efficient method of obtaining hydrogen is electrolysis, whereby PV can be coupled with an electrolyzer to produce hydrogen. This is the cleanest way of producing hydrogen, causing no pollutant emissions. PV-based hydrogen production plants are flexible systems; in other words, it is easy to customise such a system to meet a specific region's needs [9].
In order to design an effective PV/FC system, one important factor to be considered is the converter (inverter) efficiency. The inverter efficiency factor depends on constant power being supplied over a certain duration. Hence, a proper PV/FC design involves correctly determining the input/output wattage of the inverter. From the HOMER simulation, it was observed that 65 kW is a suitable capacity for the PV/FC system in this instance. Some details regarding the required inverter output are given in Table 10. Meanwhile, in line with the load profile, the daily profile of the inverter, detailed in Figure 11, shows that the maximum output of the inverter occurs at 06:00 PM.
The electrolyzer requires 230 kW in order to produce sufficient hydrogen for utilization by the FC. The monthly average electricity consumption absorbed by the electrolyzer (electrolyzer input), as well as its output, is illustrated in Figure 12 and Figure 13 respectively. During sunlight hours, when the PV produces an electrical power output, the shape of the electrolyzer output power curves, shown in Figure 14, is similar to the curve of the PV output. A hydrogen tank with a capacity of 135 kg is required to store the hydrogen produced by the electrolyzer. A summary of the results for hydrogen-tank production and consumption is shown in Table 11, which reveals that every year a 2 kg surplus of hydrogen is obtained. However, a detailed analysis regarding the impact of this hydrogen surplus over a 25-year service life remains uncertain, since no accurate information could be used as justification. Meanwhile, it should be noted that Figure 15 shows that the monthly average amount of stored hydrogen is affected by the PV and electrolyzer outputs, with the minimum values occurring during the months of minimum PV and FC output and vice versa. Even though lead acid batteries may be used for long-term energy storage, hydrogen storage has many advantages over batteries. For instance, batteries need constant monitoring and maintenance in order to ensure the power storage is in good condition, whilst hydrogen storage requires no such continuous care.
Recently, cost-effective pressurized tanks, which can be used safely for most applications, have become available for hydrogen storage. In cases of unfavourable weather conditions, a storage system is a very necessary part of a stand-alone energy system in order to ensure that energy can still be provided in emergencies, such as instantaneous overload conditions and solar off-day conditions [23].
Cost to Build the System
Using the HOMER simulation, it was determined that the estimated cost of the system is relatively high compared with the average cost of an equivalent system based on a diesel generator [14]. This is because the capital costs of PV and FC are higher than those of a diesel generator. However, by taking advantage of newly emerging techniques, the production cost should be significantly reduced in the future with an efficient and cost-effective design. Figure 16 illustrates the cash-flow summary of the annualized estimated costs, and the net present cost (NPC) of the system for 25 years of service is shown in Figure 17. The cash-flow summary and the NPC provide a similar profile, in which the PV cost contributes the highest overall cost to the project; this is because the current price of PV modules is still fairly high. It should be noted that the cost of the electrolyzer is the second highest, followed by the FC, the H2 tank and the converter respectively. It is also essential to highlight that the lifetime of the electrolyzer is only 15 years, after which it must be replaced with a new one. As a result, the replacement cost must be taken into account, which obviously affects the total cost of the electrolyzer portion of the system.
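As a brief illustration of how the 25-year, 6% real interest rate assumption turns annual cash flows into a net present cost, the sketch below discounts an assumed set of capital, O&M and replacement cash flows. All the cost figures, including the year-15 electrolyzer replacement, are placeholders rather than the study's actual HOMER outputs.

def net_present_cost(capital, annual_om, replacements, rate=0.06, years=25):
    """Discount capital, yearly O&M and scheduled replacements to year 0.

    replacements is a list of (year, cost) pairs, e.g. an electrolyzer
    replacement in year 15."""
    npc = capital
    for year in range(1, years + 1):
        npc += annual_om / (1.0 + rate) ** year
    for year, cost in replacements:
        npc += cost / (1.0 + rate) ** year
    return npc

# Illustrative cash flows (USD): assumptions only, not the paper's values.
npc = net_present_cost(capital=1_500_000,
                       annual_om=25_000,
                       replacements=[(15, 200_000)])
print(round(npc))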
Energy Production and Consumption Analysis
Data regarding the energy production (Table 12) and energy consumption (Table 13), together with a summary (Table 14), were obtained from the HOMER simulation. From Table 12 and Table 13 it is obvious that PV is the main energy source utilised to supply the entire primary load, including the electrolyzer. However, it is important to also consider that the electrolyzer absorbs 60% of the total energy produced by the PV, for a 75% overall efficiency (see Table 6). For that reason, it will be essential to thoroughly study the efficiency of the electrolyzer used in the real system prior to actual installation. Between the two foremost current types of electrolyzers, namely alkaline and PEM electrolyzers, the PEM type requires less overall energy in order to function than an equivalent alkaline electrolyzer [24]. Figure 18 presents collected data from the 10th of February, during which the PV module begins to generate power from 06:30 AM and continues doing so until 06:30 PM. This energy is used to supply the load with the desired power output. However, from 06:30 AM to 07:30 AM and from 04:30 PM to 06:30 PM, the supplied power is not enough to fully cover the requested energy load, so another generation device (the FC) is brought into the loop in order to compensate for the lack of power during these periods. From 07:30 AM to 04:30 PM, the PV is the only power supply needed to meet the load requirements, during which time the electrolyzer also does its work. From 06:30 PM to 06:30 AM, the supply depends solely on the FC. This daily output profile provides a comprehensive description of the PV/FC system. According to the results, the designed PV/FC system is feasible for implementation in order to cater to the energy demands of Al-Gowair.
Conclusion
A feasibility study regarding the use of PV/FC hybrid power systems for remote locations in Iraq has been carried out comprehensively using the HOMER software. Based on the simulation results, the control strategies and the performance of the system components are acceptable for implementation in Al-Gowair. Although the NPC of the system over 25 years is higher than that of a comparable diesel system, the PV/FC system operates completely independently of world fossil fuel price fluctuations (which are likely to only increase in the years to come). As an additional benefit, no harmful pollutants such as CO2, CO, NOx and SO2 will be released into the environment. Considering the variety of load profiles and meteorological conditions, the proposed method could be safely applied in similar cases for the optimization of hybrid PV/FC power systems.
Figure 1. The daily load demand in Al-Gowair village during June to October [11]
Figure 2. Monthly average load profile
Figure 3. Monthly average daily radiation and clearness index
Figure 4. The configuration of a PV/FC hybrid power generation system
Figure 6. The basic principle of a fuel cell
Figure 7. The overall optimization results showing system configurations sorted by the total net present cost
Figure 9. Daily profile of fuel cell output
Figure 10. Monthly average output of fuel cell
Figure 16. The cash-flow summary of the annualized costs
Figure 18. Contribution of electric energy production by various energy sources in an optimal system
Table 2. The daily consumption of electrical energy in Al-Gowair village
Table 3. PV input details
Table 7. Hydrogen storage details
Table 8. Summary of PV output results
Table 9. Summary of fuel cell output results
Table 10. Summary of inverter output results
Table 11. Summary of the hydrogen tank results
Table 12. Annual electric energy production
Table 13. Annual electric energy consumption
Table 14. Summary of important electrical output results
Universiti Teknologi Malaysia is acknowledged for the permission to publish the results of the research. | 2017-09-07T14:16:19.481Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "12a6f04ff7d72cd7ae412eeeb4ad0682c4544c7a",
"oa_license": "CCBYNC",
"oa_url": "http://www.journal.uad.ac.id/index.php/TELKOMNIKA/article/download/3749/2777",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "12a6f04ff7d72cd7ae412eeeb4ad0682c4544c7a",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
17768425 | pes2o/s2orc | v3-fos-license | Determining Term Subjectivity and Term Orientation for Opinion Mining
Opinion mining is a recent subdiscipline of computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. To aid the extraction of opinions from text, recent work has tackled the issue of determining the orientation of “subjective” terms contained in text, i.e. deciding whether a term that carries opinionated content has a positive or a negative connotation. This is believed to be of key importance for identifying the orientation of documents, i.e. determining whether a document expresses a positive or negative opinion about its subject matter. We contend that the plain determination of the orientation of terms is not a realistic problem, since it starts from the non-realistic assumption that we already know whether a term is subjective or not; this would imply that a linguistic resource that marks terms as “subjective” or “objective” is available, which is usually not the case. In this paper we confront the task of deciding whether a given term has a positive connotation, or a negative connotation, or has no subjective connotation at all; this problem thus subsumes the problem of determining subjectivity and the problem of determining orientation. We tackle this problem by testing three different variants of a semi-supervised method previously proposed for orientation detection. Our results show that determining subjectivity and orientation is a much harder problem than determining orientation alone.
We contend that the plain determination of the orientation of terms is not a realistic problem, since it starts from the nonrealistic assumption that we already know whether a term is subjective or not; this would imply that a linguistic resource that marks terms as "subjective" or "objective" is available, which is usually not the case. In this paper we confront the task of deciding whether a given term has a positive connotation, or a negative connotation, or has no subjective connotation at all; this problem thus subsumes the problem of determining subjectivity and the problem of determining orientation. We tackle this problem by testing three different variants of a semi-supervised method previously proposed for orientation detection. Our results show that determining subjectivity and orientation is a much harder problem than determining orientation alone.
Introduction
Opinion mining is a recent subdiscipline of computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. Opinion-driven content management has several important applications, such as determining critics' opinions about a given product by classifying online product reviews, or tracking the shifting attitudes of the general public toward a political candidate by mining online forums.
Within opinion mining, several subtasks can be identified, all of them having to do with tagging a given document according to expressed opinion:
1. determining document subjectivity, as in deciding whether a given text has a factual nature (i.e. describes a given situation or event, without expressing a positive or a negative opinion on it) or expresses an opinion on its subject matter. This amounts to performing binary text categorization under categories Objective and Subjective (Pang and Lee, 2004; Yu and Hatzivassiloglou, 2003);
2. determining document orientation (or polarity), as in deciding if a given Subjective text expresses a Positive or a Negative opinion on its subject matter (Pang and Lee, 2004; Turney, 2002);
3. determining the strength of document orientation, as in deciding e.g. whether the Positive opinion expressed by a text on its subject matter is Weakly Positive, Mildly Positive, or Strongly Positive (Wilson et al., 2004).
To aid these tasks, recent work (Esuli and Sebastiani, 2005; Hatzivassiloglou and McKeown, 1997; Kamps et al., 2004; Kim and Hovy, 2004; Takamura et al., 2005; Turney and Littman, 2003) has tackled the issue of identifying the orientation of subjective terms contained in text, i.e. determining whether a term that carries opinionated content has a positive or a negative connotation (e.g. deciding that, to use Turney and Littman's (2003) examples, honest and intrepid have a positive connotation while disturbing and superfluous have a negative connotation). This is believed to be of key importance for identifying the orientation of documents, since it is by considering the combined contribution of these terms that one may hope to solve Tasks 1, 2 and 3 above. The conceptually simplest approach to this latter problem is probably Turney's (2002), who has obtained interesting results on Task 2 by considering the algebraic sum of the orientations of terms as representative of the orientation of the document they belong to; but more sophisticated approaches are also possible (Hatzivassiloglou and Wiebe, 2000; Riloff et al., 2003; Wilson et al., 2004). Implicit in most works dealing with term orientation is the assumption that, for many languages for which one would like to perform opinion mining, there is no available lexical resource where terms are tagged as having either a Positive or a Negative connotation, and that in the absence of such a resource the only available route is to generate such a resource automatically.
However, we think this approach lacks realism, since it is also true that, for the very same languages, there is no available lexical resource where terms are tagged as having either a Subjective or an Objective connotation. Thus, the availability of an algorithm that tags Subjective terms as being either Positive or Negative is of little help, since determining if a term is Subjective is itself non-trivial.
In this paper we confront the task of determining whether a given term has a Positive connotation (e.g. honest, intrepid), or a Negative connotation (e.g. disturbing, superfluous), or has instead no Subjective connotation at all (e.g. white, triangular); this problem thus subsumes the problem of deciding between Subjective and Objective and the problem of deciding between Positive and Negative. We tackle this problem by testing three different variants of the semi-supervised method for orientation detection proposed in (Esuli and Sebastiani, 2005). Our results show that determining subjectivity and orientation is a much harder problem than determining orientation alone.
Outline of the paper
The rest of the paper is structured as follows. Section 2 reviews related work dealing with term orientation and/or subjectivity detection. Section 3 briefly reviews the semi-supervised method for orientation detection presented in (Esuli and Sebastiani, 2005). Section 4 describes in detail three different variants of it we propose for determining, at the same time, subjectivity and orientation, and describes the general setup of our experiments. In Section 5 we discuss the results we have obtained. Section 6 concludes.
Related work
Determining term orientation
Most previous works dealing with the properties of terms within an opinion mining perspective have focused on determining term orientation. Hatzivassiloglou and McKeown (1997) attempt to predict the orientation of subjective adjectives by analysing pairs of adjectives (conjoined by and, or, but, either-or, or neither-nor) extracted from a large unlabelled document set. The underlying intuition is that the act of conjoining adjectives is subject to linguistic constraints on the orientation of the adjectives involved; e.g. and usually conjoins adjectives of equal orientation, while but conjoins adjectives of opposite orientation. The authors generate a graph where terms are nodes connected by "equal-orientation" or "opposite-orientation" edges, depending on the conjunctions extracted from the document set. A clustering algorithm then partitions the graph into a Positive cluster and a Negative cluster, based on a relation of similarity induced by the edges. Turney and Littman (2003) determine term orientation by bootstrapping from two small sets of subjective "seed" terms (with the seed set for Positive containing terms such as good and nice, and the seed set for Negative containing terms such as bad and nasty). Their method is based on computing the pointwise mutual information (PMI) of the target term t with each seed term t i as a measure of their semantic association. Given a target term t, its orientation value O(t) (where positive value means positive orientation, and higher absolute value means stronger orientation) is given by the sum of the weights of its semantic association with the seed positive terms minus the sum of the weights of its semantic association with the seed negative terms. For computing PMI, term frequencies and co-occurrence frequencies are measured by querying a document set by means of the AltaVista search engine 1 with a "t" query, a "t i " query, and a "t NEAR t i " query, and using the number of matching documents returned by the search engine as estimates of the probabilities needed for the computation of PMI. Kamps et al. (2004) consider instead the graph defined on adjectives by the WordNet 2 synonymy relation, and determine the orientation of a target adjective t contained in the graph by comparing the lengths of (i) the shortest path between t and the seed term good, and (ii) the shortest path between t and the seed term bad: if the former is shorter than the latter, than t is deemed to be Positive, otherwise it is deemed to be Negative. Takamura et al. (2005) determine term orientation (for Japanese) according to a "spin model", i.e. a physical model of a set of electrons each endowed with one between two possible spin directions, and where electrons propagate their spin direction to neighbouring electrons until the system reaches a stable configuration. The authors equate terms with electrons and term orientation to spin direction. They build a neighbourhood matrix connecting each pair of terms if one appears in the gloss of the other, and iteratively apply the spin model on the matrix until a "minimum energy" configuration is reached. The orientation assigned to a term then corresponds to the spin direction assigned to electrons.
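As a rough sketch of the PMI-based scheme of Turney and Littman (2003) described above, the following Python fragment computes an orientation score from co-occurrence counts. The hit-count table is a placeholder for what the original work obtained by querying a search engine with NEAR queries, and the seed lists and counts shown are illustrative only.

import math

POSITIVE_SEEDS = ["good", "nice"]
NEGATIVE_SEEDS = ["bad", "nasty"]

def pmi(hits_both, hits_t, hits_seed, total_docs):
    """Pointwise mutual information estimated from document hit counts."""
    p_both = hits_both / total_docs
    p_t = hits_t / total_docs
    p_seed = hits_seed / total_docs
    if p_both == 0 or p_t == 0 or p_seed == 0:
        return 0.0
    return math.log2(p_both / (p_t * p_seed))

def orientation(term, hit_counts, total_docs):
    """O(term) = sum of PMI with the positive seeds minus sum of PMI with the
    negative seeds; hit_counts maps single terms and (term, seed) pairs to
    document counts supplied by the caller."""
    score = 0.0
    for seed in POSITIVE_SEEDS:
        score += pmi(hit_counts[(term, seed)], hit_counts[term],
                     hit_counts[seed], total_docs)
    for seed in NEGATIVE_SEEDS:
        score -= pmi(hit_counts[(term, seed)], hit_counts[term],
                     hit_counts[seed], total_docs)
    return score   # positive value -> Positive orientation

# Toy counts for illustration only.
counts = {"honest": 1000, "good": 50000, "nice": 40000, "bad": 45000,
          "nasty": 8000, ("honest", "good"): 120, ("honest", "nice"): 80,
          ("honest", "bad"): 30, ("honest", "nasty"): 5}
print(orientation("honest", counts, total_docs=10_000_000))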
The system of Kim and Hovy (2004) tackles orientation detection by attributing, to each term, a positivity score and a negativity score; interestingly, terms may thus be deemed to have both a positive and a negative correlation, maybe with different degrees, and some terms may be deemed to carry a stronger positive (or negative) orientation than others. Their system starts from a set of positive and negative seed terms, and expands the positive (resp. negative) seed set by adding to it the synonyms of positive (resp. negative) seed terms and the antonyms of negative (resp. positive) seed terms. The system classifies then a target term t into either Positive or Negative by means of two alternative learning-free methods based on the probabilities that synonyms of t also appear in the respective expanded seed sets. A problem with this method is that it can classify only terms that share some synonyms with the expanded seed sets. Kim and Hovy also report an evaluation of human inter-coder agreement. We compare this evaluation with our results in Section 5.
The approach we have proposed for determining term orientation (Esuli and Sebastiani, 2005) is described in more detail in Section 3, since it will be extensively used in this paper.
All these works evaluate the performance of the proposed algorithms by checking them against precompiled sets of Positive and Negative terms, i.e. checking how good the algorithms are at classifying a term known to be subjective into either Positive or Negative. When tested on the same benchmarks, the methods of (Esuli and Sebastiani, 2005;Turney and Littman, 2003) have performed with comparable accuracies (however, the method of (Esuli and Sebastiani, 2005) is much more efficient than the one of (Turney and Littman, 2003)), and have outperformed the method of (Hatzivassiloglou and McKeown, 1997) by a wide margin and the one by (Kamps et al., 2004) by a very wide margin. The methods described in (Hatzivassiloglou and McKeown, 1997) is also limited by the fact that it can only decide the orientation of adjectives, while the method of (Kamps et al., 2004) is further limited in that it can only work on adjectives that are present in WordNet. The methods of (Kim and Hovy, 2004; Takamura et al., 2005) are instead difficult to compare with the other ones since they were not evaluated on publicly available datasets. Riloff et al. (2003) develop a method to determine whether a term has a Subjective or an Objective connotation, based on bootstrapping algorithms. The method identifies patterns for the extraction of subjective nouns from text, bootstrapping from a seed set of 20 terms that the authors judge to be strongly subjective and have found to have high frequency in the text collection from which the subjective nouns must be extracted. The results of this method are not easy to compare with the ones we present in this paper because of the different evaluation methodologies. While we adopt the evaluation methodology used in all of the papers reviewed so far (i.e. checking how good our system is at replicating an existing, independently motivated lexical resource), the authors do not test their method on an independently identified set of labelled terms, but on the set of terms that the algorithm itself extracts. This evaluation methodology only allows to test precision, and not accuracy tout court, since no quantification can be made of false negatives (i.e. the subjective terms that the algorithm should have spotted but has not spotted). In Section 5 this will prevent us from drawing comparisons between this method and our own. Baroni and Vegnaduzzo (2004) apply the PMI method, first used by Turney and Littman (2003) to determine term orientation, to determine term subjectivity. Their method uses a small set S s of 35 adjectives, marked as subjective by human judges, to assign a subjectivity score to each adjective to be classified. Therefore, their method, unlike our own, does not classify terms (i.e. take firm classification decisions), but ranks them according to a subjectivity score, on which they evaluate precision at various level of recall.
Determining term subjectivity and term orientation by semi-supervised learning
The method we use in this paper for determining term subjectivity and term orientation is a variant of the method proposed in (Esuli and Sebastiani, 2005) for determining term orientation alone. This latter method relies on training, in a semi-supervised way, a binary classifier that labels terms as either Positive or Negative. A semi-supervised method is a learning process whereby only a small subset L ⊂ Tr of the training data Tr is human-labelled. In origin the training data in U = Tr − L are instead unlabelled; it is the process itself that labels them, automatically, by using L (with the possible addition of other publicly available resources) as input. The method of (Esuli and Sebastiani, 2005) starts from two small seed (i.e. training) sets L_p and L_n of known Positive and Negative terms, respectively, and expands them into the two final training sets Tr_p ⊃ L_p and Tr_n ⊃ L_n by adding to them new sets of terms U_p and U_n found by navigating the WordNet graph along the synonymy and antonymy relations. (Several other WordNet lexical relations, and several combinations of them, are tested in (Esuli and Sebastiani, 2005); in the present paper we only use the best-performing such combination, as described in detail in Section 4.2. The version of WordNet used here and in (Esuli and Sebastiani, 2005) is 2.0.) This process is based on the hypothesis that synonymy and antonymy, in addition to defining a relation of meaning, also define a relation of orientation, i.e. that two synonyms typically have the same orientation and two antonyms typically have opposite orientation. The method is iterative, generating two sets Tr_p^k and Tr_n^k at each iteration k, where Tr_p^k ⊃ Tr_p^(k−1) ⊃ ... ⊃ Tr_p^1 = L_p and Tr_n^k ⊃ Tr_n^(k−1) ⊃ ... ⊃ Tr_n^1 = L_n. At iteration k, Tr_p^k is obtained by adding to Tr_p^(k−1) all synonyms of terms in Tr_p^(k−1) and all antonyms of terms in Tr_n^(k−1); similarly, Tr_n^k is obtained by adding to Tr_n^(k−1) all synonyms of terms in Tr_n^(k−1) and all antonyms of terms in Tr_p^(k−1). If a total of K iterations are performed, then Tr = Tr_p^K ∪ Tr_n^K. The second main feature of the method presented in (Esuli and Sebastiani, 2005) is that terms are given vectorial representations based on their WordNet glosses (i.e. textual definitions). For each term t_i in Tr ∪ Te (Te being the test set, i.e. the set of terms to be classified), a textual representation of t_i is generated by collating all the glosses of t_i as found in WordNet. (In general a term t_i may have more than one gloss, since it may have more than one sense; dictionaries normally associate one gloss to each sense.) Each such representation is converted into vectorial form by standard text indexing techniques: in (Esuli and Sebastiani, 2005) and in the present work, stop words are removed and the remaining words are weighted by cosine-normalized tfidf; no stemming is performed. (Several combinations of subparts of a WordNet gloss are tested as textual representations of terms in (Esuli and Sebastiani, 2005); of all those combinations, in the present paper we always use the DGS¬ combination, since this is the one that has been shown to perform best. DGS¬ corresponds to using the entire gloss and performing negation propagation on its text, i.e. replacing all the terms that occur after a negation in a sentence with negated versions of the term; see (Esuli and Sebastiani, 2005) for details.) This representation method is based on the assumption that terms with a similar orientation tend to have "similar" glosses: for instance, that the glosses of honest and intrepid will both contain appreciative expressions, while the glosses of disturbing and superfluous will both contain derogative expressions. Note that this method allows any term to be classified, independently of its POS, provided there is a gloss for it in the lexical resource.
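A compact Python sketch of the iterative seed-set expansion is given below. The synonym and antonym lookup functions are placeholders for WordNet queries (e.g. via NLTK), the toy lexicon is purely illustrative, and only the seed terms good and bad come from the paper itself.

def expand_seed_sets(pos_seeds, neg_seeds, synonyms, antonyms, iterations=4):
    """Grow Positive/Negative training sets by following synonymy and
    antonymy links for a fixed number of iterations.

    synonyms and antonyms are callables mapping a term to a set of terms
    (in practice, WordNet lookups restricted to adjectives)."""
    tr_p, tr_n = set(pos_seeds), set(neg_seeds)
    for _ in range(1, iterations):
        new_p = (set().union(*(synonyms(t) for t in tr_p))
                 | set().union(*(antonyms(t) for t in tr_n)))
        new_n = (set().union(*(synonyms(t) for t in tr_n))
                 | set().union(*(antonyms(t) for t in tr_p)))
        tr_p |= new_p
        tr_n |= new_n
    return tr_p, tr_n

# Toy lexicon standing in for WordNet, for illustration only.
SYN = {"good": {"nice"}, "bad": {"nasty"}, "nice": {"pleasant"},
       "nasty": {"awful"}, "pleasant": set(), "awful": set()}
ANT = {"good": {"bad"}, "bad": {"good"}, "nice": {"nasty"},
       "nasty": {"nice"}, "pleasant": set(), "awful": set()}
tr_p, tr_n = expand_seed_sets({"good"}, {"bad"},
                              lambda t: SYN.get(t, set()),
                              lambda t: ANT.get(t, set()))
print(sorted(tr_p), sorted(tr_n))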
Once the vectorial representations for all terms in Tr ∪ Te have been generated, those for the terms in Tr are fed to a supervised learner, which thus generates a binary classifier. This latter, once fed with the vectorial representations of the terms in Te, classifies each of them as either Positive or Negative.
Experiments
In this paper we extend the method of (Esuli and Sebastiani, 2005) to the determination of term subjectivity and term orientation altogether.
Test sets
The benchmark (i.e. test set) we use for our experiments is the General Inquirer (GI) lexicon (Stone et al., 1966). This is a lexicon of terms labelled according to a large set of categories (the definitions of all such categories are available at http://www.webuse.umd.edu:9090/), each one denoting the presence of a specific trait in the term. The two main categories, and the ones we will be concerned with, are Positive/Negative, which contain 1,915/2,291 terms having a positive/negative orientation (in what follows we will also refer to the category Subjective, which we define as the union of the two categories Positive and Negative). In opinion mining research the GI was first used by Turney and Littman (2003), who reduced the list of terms to 1,614/1,982 entries after removing 17 terms appearing in both categories (e.g. deal) and reducing all the multiple entries of the same term in a category, caused by multiple senses, to a single entry. Likewise, we take all the 7,582 GI terms that are not labelled as either Positive or Negative as being (implicitly) labelled as Objective, and reduce them to 5,009 terms after combining multiple entries of the same term, caused by multiple senses, to a single entry.
The effectiveness of our classifiers will thus be evaluated in terms of their ability to assign the total of 8,605 GI terms to the correct category among Positive, Negative, and Objective.
Seed sets and training sets
Similarly to (Esuli and Sebastiani, 2005), our training set is obtained by expanding initial seed sets by means of WordNet lexical relations. The main difference is that our training set is now the union of three sets of training terms, Tr = Tr_p^K ∪ Tr_n^K ∪ Tr_o^K, obtained by expanding, through K iterations, three seed sets Tr_p^1, Tr_n^1, Tr_o^1, one for each of the categories Positive, Negative, and Objective, respectively.
Concerning the categories Positive and Negative, we have used the seed sets, expansion policy, and number of iterations that performed best in the experiments of (Esuli and Sebastiani, 2005), i.e. the seed sets Tr_p^1 = {good} and Tr_n^1 = {bad}, expanded by using the union of synonymy and indirect antonymy, restricting the relations only to terms with the same POS as the original terms (i.e. adjectives), for a total of K = 4 iterations. The final expanded sets contain 6,053 Positive terms and 6,874 Negative terms.
Concerning the category Objective, the process we have followed is similar, but with a few key differences. These are motivated by the fact that the Objective category coincides with the complement of the union of Positive and Negative; therefore, Objective terms are more varied and diverse in meaning than the terms in the other two categories. To obtain a representative expanded set Tr_o^K, we have chosen the seed set Tr_o^1 = {entity} and we have expanded it by using, along with synonymy and antonymy, the WordNet relation of hyponymy (e.g. vehicle / car), and without imposing the restriction that the two related terms must have the same POS. These choices are strictly related to each other: the term entity is the root term of the largest generalization hierarchy in WordNet, with more than 40,000 terms (Devitt and Vogel, 2004), thus allowing a very large number of terms to be reached by using the hyponymy relation. Moreover, it seems reasonable to assume that terms that refer to entities are likely to have an "objective" nature, and that hyponyms (and also synonyms and antonyms) of an objective term are also objective. Note that, at each iteration k, a given term t is added to Tr_o^k only if it does not already belong to either Tr_p or Tr_n. We experiment with two different choices for the Tr_o set, corresponding to the sets generated in K = 3 and K = 4 iterations, respectively; this yields sets Tr_o^3 and Tr_o^4 consisting of 8,353 and 33,870 training terms, respectively.
Learning approaches and evaluation measures
We experiment with three "philosophically" different learning approaches to the problem of distinguishing between Positive, Negative, and Objective terms. Approach I is a two-stage method which consists in learning two binary classifiers: the first classifier places terms into either Subjective or Objective, while the second classifier places terms that have been classified as Subjective by the first classifier into either Positive or Negative. In the training phase, the terms in Tr_p^K ∪ Tr_n^K are used as training examples of the category Subjective.
Approach II is again based on learning two binary classifiers. Here, one of them must discriminate between terms that belong to the Positive category and ones that belong to its complement (not Positive), while the other must discriminate between terms that belong to the Negative category and ones that belong to its complement (not Negative). Terms that have been classified both into Positive by the former classifier and into (not Negative) by the latter are deemed to be positive, and terms that have been classified both into (not Positive) by the former classifier and into Negative by the latter are deemed to be negative. The terms that have been classified (i) into both (not Positive) and (not Negative), or (ii) into both Positive and Negative, are taken to be Objective. In the training phase of Approach II, the terms in Tr_n^K ∪ Tr_o^K are used as training examples of the category (not Positive), and the terms in Tr_p^K ∪ Tr_o^K are used as training examples of the category (not Negative).
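The decision rule of Approach II can be stated compactly in code. The sketch below assumes two already-trained binary predictors, passed in as callables, and is only meant to make the combination logic explicit; the toy lambdas stand in for real classifiers.

def approach_ii_label(term, is_positive, is_negative):
    """Combine two binary classifiers into a three-way decision.

    is_positive(term) -> True if the 'Positive vs. not Positive' classifier
    says Positive; is_negative(term) -> True if the 'Negative vs. not
    Negative' classifier says Negative."""
    pos, neg = is_positive(term), is_negative(term)
    if pos and not neg:
        return "Positive"
    if neg and not pos:
        return "Negative"
    # Both classifiers say "not" (case i) or both fire at once (case ii).
    return "Objective"

# Toy stand-ins for trained classifiers, for illustration only.
print(approach_ii_label("honest", lambda t: True,  lambda t: False))   # Positive
print(approach_ii_label("white",  lambda t: False, lambda t: False))   # Objective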
Approach III consists instead in viewing Positive, Negative, and Objective as three categories with equal status, and in learning a ternary classifier that classifies each term into exactly one among the three categories.
There are several differences among these three approaches. A first difference, of a conceptual nature, is that only Approaches I and III view Objective as a category, or concept, in its own right, while Approach II views objectivity as a nonexistent entity, i.e. as the "absence of subjectivity" (in fact, in Approach II the training examples of Objective are only used as training examples of the complements of Positive and Negative). A second difference is that Approaches I and II are based on standard binary classification technology, while Approach III requires "multiclass" (i.e. 1-of-m) classification. As a consequence, while for the former we use well-known learners for binary classification (the naive Bayesian learner using the multinomial model (McCallum and Nigam, 1998), support vector machines using linear kernels (Joachims, 1998), the Rocchio learner, and its PrTFIDF probabilistic version (Joachims, 1997)), for Approach III we use their multiclass versions 9 .
Before running our learners we make a pass of feature selection, with the intent of retaining only those features that are good at discriminating our categories, while discarding those which are not. Feature selection is implemented by scoring each feature f_k (i.e. each term that occurs in the glosses of at least one training term) by means of the mutual information (MI) function, defined as

MI(f_k) = Σ_{c ∈ {c_1,...,c_m}} Σ_{f ∈ {f_k, ¬f_k}} P(f, c) · log( P(f, c) / (P(f) · P(c)) )    (1)

and discarding the x% of features f_k that minimize it. We will call x% the reduction factor. Note that the set {c_1, ..., c_m} from Equation 1 is interpreted differently in Approaches I to III, and always consistently with what the categories at stake are.
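A small sketch of this scoring-and-pruning step, using toy probability estimates, is shown below. The smoothing constant, the toy glosses and labels, and the helper names are assumptions for illustration; they are not the paper's implementation.

import math

def mutual_information(feature, docs, labels, classes):
    """MI between the presence/absence of `feature` in a gloss and the class,
    estimated from counts with a tiny additive smoothing term."""
    n, eps = len(docs), 1e-9
    mi = 0.0
    for c in classes:
        for present in (True, False):
            joint = sum(1 for d, y in zip(docs, labels)
                        if (feature in d) == present and y == c) / n
            p_f = sum(1 for d in docs if (feature in d) == present) / n
            p_c = labels.count(c) / n
            if joint > 0:
                mi += joint * math.log2((joint + eps) / (p_f * p_c + eps))
    return mi

def select_features(docs, labels, classes, reduction=0.50):
    vocab = sorted(set().union(*docs))
    scored = sorted(vocab, key=lambda f: mutual_information(f, docs, labels, classes))
    keep_from = int(len(scored) * reduction)     # drop the lowest-scoring x%
    return set(scored[keep_from:])

# Toy glosses (as word sets) and labels, for illustration only.
docs = [{"worthy", "admirable"}, {"worthy", "good"},
        {"unpleasant", "annoying"}, {"three", "sided"}]
labels = ["Positive", "Positive", "Negative", "Objective"]
print(select_features(docs, labels, ["Positive", "Negative", "Objective"]))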
Since the task we aim to solve is manifold, we will evaluate our classifiers according to two evaluation measures:
• SO-accuracy, i.e. the accuracy of a classifier in separating Subjective from Objective, i.e. in deciding term subjectivity alone;
• PNO-accuracy, the accuracy of a classifier in discriminating among Positive, Negative, and Objective, i.e. in deciding both term orientation and subjectivity.
(The naive Bayesian, Rocchio, and PrTFIDF learners we have used are from Andrew McCallum's Bow package (http://www-2.cs.cmu.edu/~mccallum/bow/), while the SVMs learner we have used is Thorsten Joachims' SVMlight (http://svmlight.joachims.org/), version 6.01. Both packages allow the respective learners to be run in "multiclass" fashion.)
Results
We present results obtained from running every combination of (i) the three approaches to classification described in Section 4.3, (ii) the four learners mentioned in the same section, (iii) five different reduction factors for feature selection (0%, 50%, 90%, 95%, 99%), and (iv) the two different training sets for Objective (Tr_o^3 and Tr_o^4) mentioned in Section 4.2. We discuss each of these four dimensions of the problem individually, in each case reporting results averaged across all the experiments we have run (see Table 1).
The first and most important observation is that, with respect to a pure term orientation task, accuracy drops significantly. In fact, the best SO-accuracy and the best PNO-accuracy results obtained across the 120 different experiments are .676 and .660, respectively (these were obtained by using Approach II with the PrTFIDF learner and no feature selection, with Tr_o = Tr_o^3 for the .676 SO-accuracy result and Tr_o = Tr_o^4 for the .660 PNO-accuracy result); this contrasts sharply with the accuracy obtained in (Esuli and Sebastiani, 2005) on discriminating Positive from Negative (where the best run obtained .830 accuracy), on the same benchmarks and with essentially the same algorithms. This suggests that good performance at orientation detection (as e.g. in (Esuli and Sebastiani, 2005; Hatzivassiloglou and McKeown, 1997; Turney and Littman, 2003)) may not be a guarantee of good performance at subjectivity detection, quite evidently a harder (and, as we have suggested, more realistic) task. This hypothesis is confirmed by an experiment performed by Kim and Hovy (2004) on testing the agreement of two human coders at tagging words with the Positive, Negative, and Objective labels. The authors define two measures of such agreement: strict agreement, equivalent to our PNO-accuracy, and lenient agreement, which measures the accuracy at telling Negative against the rest. For any experiment, strict agreement values are then going to be, by definition, lower than or equal to the corresponding lenient ones. The authors use two sets of 462 adjectives and 502 verbs, respectively, randomly extracted from the basic English word list of the TOEFL test. The inter-coder agreement results (see Table 2) show a deterioration in agreement (from lenient to strict) of 16.77% for adjectives and 36.42% for verbs. Following this, we evaluated our best experiment according to these measures, and obtained a "strict" accuracy value of .660 and a "lenient" accuracy value of .821, with a relative deterioration of 24.39%, in line with Kim and Hovy's observation (we observed this trend in all of our experiments). This confirms that determining subjectivity and orientation is a much harder task than determining orientation alone.
The second important observation is that there is very little variance in the results: across all 120 experiments, the average SO-accuracy and PNO-accuracy results were .635 (with standard deviation σ = .030) and .603 (σ = .036), a mere 6.06% and 8.64% deterioration from the best results reported above. This seems to indicate that the levels of performance obtained may be hard to improve upon, especially if working in a similar framework.
Let us analyse the individual dimensions of the problem. Concerning the three approaches to classification described in Section 4.3, Approach II outperforms the other two, but by an extremely narrow margin. As for the choice of learners, on average the best performer is NB, but again by a very small margin with respect to the others. On average, the best reduction factor for feature selection turns out to be 50%, but the performance drop we witness in approaching 99% (a dramatic reduction factor) is extremely graceful. As for the choice of Tr_o^K, we note that Tr_o^3 and Tr_o^4 elicit comparable levels of performance, with the former performing best at SO-accuracy and the latter performing best at PNO-accuracy.
An interesting observation on the learners we have used is that NB, PrTFIDF and SVMs, unlike Rocchio, generate classifiers that depend on P(c_i), the prior probabilities of the classes, which are normally estimated as the proportion of training documents that belong to c_i. In many classification applications this is reasonable, as we may assume that the training data are sampled from the same distribution from which the test data are sampled, and that these proportions are thus indicative of the proportions that we are going to encounter in the test data. However, in our application this is not the case, since we do not have a "natural" sample of training terms. What we have is one human-labelled training term for each category in {Positive, Negative, Objective}, and as many machine-labelled terms as we deem reasonable to include, in possibly different numbers for the different categories; and we have no indication whatsoever as to what the "natural" proportions among the three might be. This means that the proportions of Positive, Negative, and Objective terms we decide to include in the training set will strongly bias the classification results if the learner is one of NB, PrTFIDF and SVMs. We may notice this by looking at Table 3, which shows the average proportion of test terms classified as Objective by each learner, depending on whether we have chosen Tr_o to coincide with Tr_o^3 or Tr_o^4; note that the former (resp. latter) choice means having roughly as many (resp. roughly five times as many) Objective training terms as there are Positive and Negative ones. Table 3 shows that the more Objective training terms there are, the more test terms NB, PrTFIDF and (in particular) SVMs will classify as Objective; this is not true for Rocchio, which is basically unaffected by the variation in size of Tr_o.
Conclusions
We have presented a method for determining both term subjectivity and term orientation for opinion mining applications. This is a valuable advance with respect to the state of the art, since past work in this area had mostly been confined to determining term orientation alone, a task that (as we have argued) has limited practical significance in itself, given the generalized absence of lexical resources that tag terms as being either Subjective or Objective. Our algorithms have tagged by orientation and subjectivity the entire General Inquirer lexicon, a complete general-purpose lexicon that is the de facto standard benchmark for researchers in this field. Our results thus constitute, for this task, the first baseline for other researchers to improve upon. Unfortunately, our results have shown that an algorithm that had shown excellent, state-of-the-art performance in deciding term orientation (Esuli and Sebastiani, 2005), once modified for the purposes of deciding term subjectivity, performs more poorly. This has been shown by testing several variants of the basic algorithm, some of them involving radically different supervised learning policies. The results suggest that deciding term subjectivity is a substantially harder task than deciding term orientation alone.
"year": 2006,
"sha1": "af5c4034493461af13a7f5480e081becf0218511",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "af5c4034493461af13a7f5480e081becf0218511",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
266843154 | pes2o/s2orc | v3-fos-license | Development of innovative multi-epitope mRNA vaccine against Pseudomonas aeruginosa using in silico approaches
Abstract The rising issue of antibiotic resistance has made treating Pseudomonas aeruginosa infections increasingly challenging. Therefore, vaccines have emerged as a viable alternative to antibiotics for preventing P. aeruginosa infections in susceptible individuals. With its superior accuracy, high efficiency in stimulating cellular and humoral immune responses, and low cost, mRNA vaccine technology is quickly replacing traditional methods. This study aimed to design a novel mRNA vaccine against P. aeruginosa using in silico approaches. The research team identified five surface and antigenic proteins and selected their appropriate epitopes with immunoinformatic tools. These epitopes were then examined for toxicity, allergenicity and homology. The researchers also checked their presentation and identification by major histocompatibility complex cells and other immune cells through valuable tools like molecular docking. They subsequently modeled a multi-epitope protein and optimized it. The mRNA was analyzed in terms of structure and stability, after which the immune system's response against the new vaccine was simulated. The results indicated that the designed mRNA construct could be an effective and promising vaccine that requires laboratory and clinical trials.
INTRODUCTION
Pseudomonas aeruginosa is a Gram-negative bacillus that causes severe infections in patients with burns, severe wounds and pneumonia, as well as in critically ill patients who require intubation (ventilator-associated pneumonia) or catheterization (urinary tract infections) [1,2]. The emergence of antibiotic-resistant strains has made treating P. aeruginosa infections increasingly difficult [3]. According to the World Health Organization, there is an urgent need for novel therapeutics to combat P. aeruginosa infections due to their increasing prevalence and resistance rates. Pseudomonas aeruginosa is listed as one of the three bacterial species for which there is the most critical need for developing novel therapeutics [4].
In the USA, multi-drug-resistant (MDR) P. aeruginosa caused 32 600 infections among hospitalized patients and an estimated 2700 deaths in 2017 [5]. Resistance rates of P. aeruginosa are increasing in many parts of the world, with recent studies reporting the widespread presence of extensively drug-resistant (XDR) high-risk clones in healthcare settings [6,7]. Therefore, researchers are exploring several evolving strategies for the control and therapy of P. aeruginosa, including immunotherapy, phage therapy and vaccination [8]. Vaccines are a promising alternative to antibiotics in preventing P. aeruginosa infections in susceptible individuals [9] and may be the best strategy to overcome the treatment-associated complications of MDR P. aeruginosa. Several vaccines have entered clinical trials to prevent P. aeruginosa infections, but none have been approved for human use [10].
Over the years, vaccine development research has made significant progress. Traditional approaches like live attenuated, inactivated bacterial and subunit vaccines have proven to induce immunogenicity and provide long-term protection [11]. However, novel methods such as peptide-based and DNA vaccines show potential for rapid and scalable vaccine development [12-14]. Unfortunately, peptide-based vaccines have a low immunogenicity index, and DNA vaccines carry the risk of insertional mutagenesis into the host's DNA. On the other hand, mRNA vaccines are more efficient, addressing the safety and efficacy concerns that arise with DNA and peptide-based vaccines [15,16]. In addition, cell-free mRNA vaccines can be produced quickly, cost-effectively and at scale. Furthermore, a single mRNA vaccine can encode multiple antigens, enhancing the immune response against resilient pathogens [17].
In designing an effective mRNA vaccine for P. aeruginosa, surface proteins critical for the binding and entry of the bacterium into host cells could serve as suitable targets. The outer membrane of P. aeruginosa contains various proteins, including lipoproteins and channels. Porins, which are β-barrel proteins that create water-filled diffusion channels, control nutrient exchange across the outer membrane. Among porin proteins, OprF is a significant target for diagnosing and treating P. aeruginosa because it is highly expressed, antigenically conserved and immunogenic [4,18]. Another essential surface lipoprotein in P. aeruginosa pathogenesis is outer membrane protein I (OprI), which plays a significant role in making the bacteria resistant to antimicrobial peptides. A phase III clinical trial vaccine (NCT01563263) consisting of OprI and OprF proteins shows promise as a vaccine for P. aeruginosa [19].
T4 pili in P. aeruginosa are critical in many processes, including attachment to biotic and abiotic surfaces, DNA uptake, biofilm formation, phage transduction, and twitching motility. Therefore, they provide an ideal target antigen for vaccine development [20]. The pilus fiber comprises hundreds of copies of PilA or pilin, which act as both a major structural subunit and an adhesion factor [21]. Pseudomonas aeruginosa is equipped with a single polar flagellum, comprising a filament made of helically arranged polymerized flagellin subunits (FliC), a type-specific cap protein (FliD), the hook at the base of the filament (FlgE), two filament-hook junction proteins (FlgKL) and several basal body components across outer and inner membranes [22]. FliC flagellin (paFliC) is crucial for P. aeruginosa colonization and acts as an essential virulence factor. It activates innate immune responses through recognition by Toll-like receptor 5 (TLR5) and also elicits adaptive immunity in the host. paFliC has been considered a vaccine candidate against P. aeruginosa infections [23]. In this study, a novel multi-epitope mRNA vaccine was designed against P. aeruginosa using in silico approaches.
Prediction of immune cell epitopes
To predict B-cell epitopes, the ABCpred webserver available at https://webs.iiitd.edu.in/raghava/abcpred/ABC was used. An artificial neural network-based machine-learning approach was employed for epitope prediction, with each protein sequence submitted using a 0.5 threshold. The selected epitope length was 16 amino acids, and the overlap filter remained active. The top epitope results were further investigated [24].
In addition, cytotoxic T-cell lymphocyte (CTL) epitopes were predicted using the ANN 4.0 method through the Immune Epitope Database MHC, which can be accessed at http://tools.iedb.org/main/tcell/. Predicted epitopes were sorted based on their IC50 value. Helper T-cell lymphocyte epitopes were predicted using the NNalign method through the Immune Epitope Database MHC-II, which can be accessed at http://tools.iedb.org/main/tcell/ [25].
Human homology
To check for homology between the predicted peptides and human peptides, the NCBI BLASTp tool available at https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE=Proteins was used to compare all peptides against the Homo sapiens (TaxID: 9606) protein database. Peptides with an E-value greater than 0.05 were considered possibly non-homologous to human peptides.
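The same screen can be scripted instead of submitting peptides one by one. The snippet below is a minimal sketch of this step using Biopython's NCBI BLAST interface, assuming a small hypothetical peptide list; it mirrors the organism filter (TaxID 9606) and the E-value criterion described above, but it is not the authors' actual workflow.

```python
# Hedged sketch: screen candidate epitopes against human proteins with BLASTp.
# A peptide is kept only if no human hit reaches E-value <= 0.05.
from Bio.Blast import NCBIWWW, NCBIXML

candidate_epitopes = ["LSDGAAAGY", "AAAAAAAAA"]  # hypothetical peptide list

def is_non_homologous(peptide, evalue_cutoff=0.05):
    handle = NCBIWWW.qblast(
        "blastp", "nr", peptide,
        entrez_query="Homo sapiens[Organism]",  # restrict to TaxID 9606
        expect=10,
    )
    record = NCBIXML.read(handle)
    for alignment in record.alignments:
        for hsp in alignment.hsps:
            if hsp.expect <= evalue_cutoff:
                return False      # significant human hit -> exclude this epitope
    return True                   # no significant hit -> keep this epitope

kept = [p for p in candidate_epitopes if is_non_homologous(p)]
print("Possibly non-homologous epitopes:", kept)
```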
Prediction of epitope's antigenicity, allergenicity and toxicity
To evaluate the antigenicity, allergenicity and toxicity of selected epitopes, various web servers were used. The VaxiJen web server available at http://www.ddg-pharmfac.net/Vaxijen/VaxiJen/VaxiJen.html predicted antigenicity based on the physicochemical properties of the epitopes in an alignment-independent manner, with a focus on bacteria and a threshold of 0.4. The AllerTop V.2.0 webserver at http://www.ddg-pharmfac.net/AllerTOP was used to predict allergenicity of epitopes using default settings. Lastly, the ToxinPred server available at https://webs.iiitd.edu.in/raghava/toxinpred/multi_submit.php predicted and measured toxicity of epitopes by generating all potential mutants using default settings. Only epitopes that were antigenic, non-toxic and non-allergenic were retained for further research.
Molecular docking between T-lymphocyte epitopes and MHC alleles
Molecular docking simulations were used to evaluate the binding affinity of selected T-lymphocyte epitopes to their corresponding major histocompatibility complex (MHC) alleles. The 3D structures of MHC alleles were obtained from the RCSB PDB database and processed using PyMOL software to remove unnecessary ligands. Energy minimization of the structures was performed using Swiss-PdbViewer. The selected epitopes were folded into their respective three-dimensional structures using the PEP-FOLD 3.5 server and then energy minimized using Swiss-PdbViewer before docking. Docking was performed using the ClusPro 2.0 server available at https://cluspro.bu.edu/login.php.
Design of the vaccine construct
A proposed mRNA vaccine construct has been designed using a specific sequence order from the N to C terminus. The sequence includes a modified cap structure (m7GCap), followed by a 5′ untranslated region (5′ UTR) and a Kozak sequence to enhance translation. The coding region begins with a signal peptide (tPA) connected through an EAAAK linker to an adjuvant component (RpfE), separated by a GPGPG linker. The vaccine's epitopes have been grouped into three sets and linked together via AAY, KK and GPGPG linkers, which provide cleavability, flexibility, rigidity, and separate domains for proper folding and functioning of the components. The HTL epitopes are connected to the LBL epitopes using the KK linker, while the LBL epitopes are connected to the CTL epitopes via an AAY linker. The mRNA construct terminates with a MITD sequence, followed by a stop codon, a 3′ UTR and a poly(A) tail to ensure proper termination and stability of the mRNA molecule. This sequence offers a promising approach to developing vaccines against certain diseases.
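As an illustration of the assembly order described above, the sketch below concatenates placeholder sequences with the stated linkers. All sequences other than the linker motifs are hypothetical, and the assumption that epitopes within each set are joined by the same linker that introduces the set is ours, not the authors'.

```python
# Hedged sketch of the coding-region assembly; sequences are placeholders.
SIGNAL_TPA   = "MDAMKRGLCCVLLLCGAVFVSPS"    # tPA signal peptide (placeholder)
ADJUVANT_RPFE = "MSKAQLAQAQLAQAQ"           # RpfE adjuvant (placeholder)
MITD          = "GGSGGGGSGG"                # MITD sequence (placeholder)
htl_epitopes = ["AAAAAKKKKKAAAAA", "GGGGGRRRRRGGGGG"]   # placeholders
lbl_epitopes = ["QQQQQLLLLLQQQQQQ"]                      # placeholders
ctl_epitopes = ["LSDGAAAGY", "YYYYYAAAA"]                # placeholders

construct = (
    SIGNAL_TPA
    + "EAAAK" + ADJUVANT_RPFE                 # rigid EAAAK linker after signal peptide
    + "GPGPG" + "GPGPG".join(htl_epitopes)    # HTL block (GPGPG linkers assumed)
    + "KK"    + "KK".join(lbl_epitopes)       # HTL -> LBL joined with KK linkers
    + "AAY"   + "AAY".join(ctl_epitopes)      # LBL -> CTL joined with AAY linkers
    + MITD                                    # MHC class I trafficking domain
)
print(f"Construct length: {len(construct)} aa (placeholder sequences)")
```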
Prediction of antigenicity, allergenicity, toxicity and physicochemical properties of the vaccine construct
To determine the antigenicity of the mRNA vaccine construct, the VaxiJen 2.0 and ANTIGENpro servers were utilized. VaxiJen 2.0 predicts antigenicity based on the physicochemical properties of the vaccine, while ANTIGENpro uses machine-learning algorithms and microarray analysis data. The constructed mRNA vaccine's amino acid sequence was used as the input, excluding the tPA and MITD sequences. To assess allergenicity, the AllerTOP 2.0 server was used, and the ToxinPred server predicted toxicity. Finally, the ProtParam online web server (https://web.expasy.org/protparam/) was employed to predict various physicochemical properties of the vaccine, including amino acid composition, molecular weight, theoretical isoelectric point (pI), instability index (II), aliphatic index (A.I.) and grand average of hydropathicity (GRAVY).
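Most of the listed descriptors can also be reproduced locally. The sketch below, assuming Biopython is installed and using an arbitrary placeholder sequence, computes the same quantities ProtParam reports; the aliphatic index is not part of Biopython's ProtParam module, so it is derived here from the Ikai (1980) formula.

```python
# Hedged sketch: ProtParam-style descriptors for a placeholder construct sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKWVTFISLLFLFSSAYSRGVFRRDAHKSEVAHRFKDLGEENFKALVLIAF"  # placeholder
pa = ProteinAnalysis(seq)

mw    = pa.molecular_weight()      # molecular weight (Da)
pi    = pa.isoelectric_point()     # theoretical pI
ii    = pa.instability_index()     # <40 usually taken as "stable"
gravy = pa.gravy()                 # negative GRAVY -> hydrophilic

# Aliphatic index (Ikai 1980): A.I. = X(Ala) + 2.9*X(Val) + 3.9*(X(Ile)+X(Leu)),
# with X the mole percentage of each residue.
pct = {aa: 100.0 * frac for aa, frac in pa.get_amino_acids_percent().items()}
aliphatic_index = pct.get("A", 0) + 2.9 * pct.get("V", 0) \
                  + 3.9 * (pct.get("I", 0) + pct.get("L", 0))

print(f"MW={mw:.1f} Da, pI={pi:.2f}, II={ii:.2f}, GRAVY={gravy:.3f}, A.I.={aliphatic_index:.1f}")
```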
In silico immune simulation
The C-ImmSim online simulation server (http://150.146.2.1/CIMMSIM/index.php) was utilized to simulate the immune response for the mRNA vaccine construct. An immune response is stimulated by this server using epitopes in conjunction with lymphocyte receptors. To simulate the recommended dosage schedule of current vaccines, three doses of 1000 vaccine units were administered over 4 weeks. For the purposes of this study, all parameters were set to default values, and the injections were administered at time-steps 1, 84 and 168. By conducting a dynamic simulation of the immune response, the performance of the mRNA vaccine construct and its potential efficacy in eliciting an immune response can be assessed by researchers.
Codon optimization of the vaccine construct
The codon sequences in the designed mRNA vaccine construct were optimized to ensure efficient expression within human cells. For this purpose, the GenSmart Codon Optimization Tool (http://www.genscript.com/) provided by GenScript (G.S.) was used. After optimization, a quality assessment of the optimized sequence was performed using the Rare Codon Analysis tools (http://www.genscript.com/), also provided by GenScript. The efficiency of mRNA translation was determined using the Codon Adaptation Index (CAI). In addition, any unusual tandem codons present in the optimized sequence were identified through codon frequency distribution analysis. By optimizing the codon sequences, the expression and efficacy of the mRNA vaccine construct can potentially be improved by researchers.
Secondary structure prediction of the designed mRNA vaccine
The RNAfold tool (http://rna.tbi.univie.ac.at/cgi-bin/RNAWebSuite/RNAfold.cgi), which is part of the Vienna RNA Package 2.0, was used to determine the predicted secondary structure of the mRNA vaccine construct. McCaskill's algorithm was utilized by the RNAfold tool to calculate the minimum free energy (MFE) of the predicted secondary structure. Through this tool, researchers obtained information on both the minimal free energy structure and the centroid secondary structure, as well as their respective minimum free energies. By predicting the secondary structure of the mRNA vaccine construct, researchers can gain a better understanding of its potential stability and functionality within human cells.
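For readers who prefer to script this step, the Vienna RNA package that powers RNAfold also ships Python bindings. The sketch below, assuming those bindings (the RNA module) are installed and using a placeholder sequence, retrieves the MFE and centroid structures discussed above.

```python
# Hedged sketch: MFE and centroid secondary structures of a placeholder mRNA,
# assuming the ViennaRNA Python bindings ("import RNA") are available.
import RNA

mrna = "AUGGCUGCAAGGCU" * 30   # placeholder sequence; the real CDS is ~1.8 kb

fc = RNA.fold_compound(mrna)
mfe_structure, mfe = fc.mfe()            # minimum free energy structure
fc.pf()                                  # partition function (needed for centroid)
centroid_structure, dist = fc.centroid()

print(f"MFE: {mfe:.2f} kcal/mol")
print(f"Centroid structure (base-pair distance {dist:.1f}):")
print(centroid_structure[:60] + "...")
```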
Prediction and validation of the secondary and tertiary structures of the designed vaccine
The PSIPRED server (http://bioinf.cs.ucl.ac.uk/psipred/) was used by researchers to predict the secondary structure of the peptide sequence in the mRNA vaccine construct, excluding the tPA signal and MITD sequences. This tool employs position-specific scoring matrices and has an accuracy rate of 84.2%. To predict the three-dimensional structure of the peptide sequence, the Robetta server (https://robetta.bakerlab.org/) was utilized, which generated five possible structures. The ProSA-web (https://prosa.services.came.sbg.ac.at/prosa.php), PROCHECK and ERRAT (https://saves.mbi.ucla.edu/) servers were then used to verify the best structure. By predicting both the secondary and tertiary structures of the peptide sequence, researchers can gain a better understanding of its potential function within the mRNA vaccine construct.
Prediction of the conformational B-cell epitopes
ElliPro, an online server (http://tools.iedb.org/ellipro/), was employed to identify discontinuous B-cell epitopes within the studied protein structure. ElliPro uses the geometrical characteristics of the 3D model to predict these epitopes. Compared to other available prediction tools for discontinuous B-cell epitopes, ElliPro provides the highest AUC value of 0.732 for any protein model. By identifying these epitopes, researchers can gain a better understanding of the potential immunogenicity of the protein structure and its role in the overall efficacy of the mRNA vaccine construct. The identification of these epitopes can also aid in the development of future vaccines and immunotherapies.
Molecular docking of the designed vaccine
The ClusPro server was utilized to determine the potential interaction between the desired mRNA vaccine and Toll-like receptor 4 (TLR-4) or Toll-like receptor 3 (TLR-3). The 3D structures of both the vaccine and TLR-4 (PDB ID: 3FXI) or TLR-3 (PDB ID: 1ZIW) were docked using the PIPER docking algorithm. By predicting how the vaccine structure interacts with TLR-4 or TLR-3, researchers can gain a better understanding of the potential immunogenicity of the vaccine construct and its overall efficacy in eliciting an immune response.
Molecular dynamics simulation
To confirm the physical motions of atoms and molecules within the TLR4-vaccine and TLR3-vaccine complex structures, dynamics simulation analysis was conducted using the iMODS server (http://imods.chaconlab.org/). The complex structures with the lowest binding energy were utilized for this analysis to ensure accuracy. The movement of atoms and molecules within the complex structures over time was simulated and their stability was assessed through the iMODS server. By understanding the dynamic behavior of the complex structures, insight into their potential efficacy in eliciting an immune response can be gained by the researchers.
Prediction and estimation of B-cell epitopes
In this study, the selection of epitopes was limited to the top five predicted for each included protein by the ABCpred webserver. The selected epitopes were subjected to further filtering to ensure their antigenicity, non-allergenicity and non-toxicity, using the VaxiJen, AllerTop and ToxinPred web servers, respectively. To avoid the induction of autoimmunity by the vaccine construct, all predicted epitopes were screened for homologs among Homo sapiens (TaxID: 9606) with an E-value <0.05, and any homologs found were excluded from the vaccine construct. The selected protein variants were downloaded from the NCBI database and aligned using the Bioedit 7.2 program. In total, five B-cell epitopes extracted from the five studied proteins were chosen for inclusion in the vaccine construct, as listed in Table 1.
Prediction and estimation of the CTL epitopes
In this study, the researchers used the IEDB database to identify potential cytotoxic T lymphocyte (CTL) epitopes from the five proteins studied. Epitopes with an IC50 over 500 were selected for further analysis. Only epitopes that were antigenic, non-allergenic, non-toxic and non-homologous were included in the subsequent analysis. From these selection criteria, 11 epitopes located in the conserved regions of the proteins were chosen for inclusion in the vaccine construct. By selecting conserved epitopes, the researchers aimed to create a vaccine with broad efficacy against multiple strains of the target pathogen. The selected epitopes are listed in Table 1.
Prediction and estimation of the HTL epitopes
Several potential HTL epitopes were identified from studying the five P. aeruginosa proteins mentioned previously. Epitopes that were antigenic, non-allergenic, non-toxic, and non-homologous were investigated for their ability to induce cytokines, specifically IL-4, IL-10 and IFN-γ. From these analyses, 18 epitopes located in the conserved regions of the proteins were chosen for inclusion in the vaccine construct. These epitopes induced the cytokines mentioned earlier and are listed in Table 1. By selecting epitopes that induce cytokine responses, the researchers aimed to enhance the immune response elicited by the vaccine construct.
Molecular docking between MHC alleles and the selected T-lymphocyte epitopes
In this study, the researchers identified 29 lymphocyte epitopes that recognized a total of 38 MHC alleles, with some epitopes binding to one allele and others binding to multiple alleles (Table 2). Four epitopes and their corresponding MHC alleles were selected for further molecular docking analysis, with the crystallographic structures of all MHC alleles selected from the RCSB PDB server (Table 2). Using ClusPro 2.0, molecular docking was performed on these four epitopes and their corresponding MHC alleles, and the results are presented in Table 3. The epitope LSDGAAAGY displayed the strongest binding affinity with its corresponding MHC allele (HLA-A*01:01), with a value of −643.6 kcal/mol. Furthermore, it was found that this epitope fits perfectly inside the binding cleft of its corresponding allele (Figure 1B). Finally, the interactions between the epitope and various residues of the MHC allele were evaluated using the LIGPLOT website (Figure 2).
Evaluation of antigenicity, allergenicity, toxicity and physicochemical properties of the vaccine construct
To assess the antigenicity, allergenicity and toxicity of the vaccine construct, the VaxiJen, ANTIGENpro, AllerTop and ToxinPred servers were utilized by the researchers. In addition, the physicochemical properties of the vaccine were evaluated using the ProtParam server (Table 4). The results indicated that the vaccine construct was antigenic, non-allergenic and non-toxic. Furthermore, the physicochemical properties of the vaccine suggested that it was thermally stable, with a GRAVY score of −0.592, indicating its hydrophilic nature. Based on these findings, the multi-epitope mRNA vaccine construct developed in this study has the potential to be an effective candidate against P. aeruginosa.
Population coverage prediction
The IEDB Population Coverage tool was used to measure the global population coverage of the 29 epitopes for their corresponding 38 alleles. The results indicated that the vaccine construct has the potential to cover ∼90.13% of the world's population, suggesting that it could provide broad protection against P. aeruginosa infections.
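For intuition only, the toy calculation below shows how per-allele frequencies translate into a population-coverage figure. It assumes Hardy-Weinberg proportions and independent loci and uses made-up allele frequencies; it is a simplification for illustration, not the IEDB algorithm itself.

```python
# Simplified, illustrative population-coverage estimate (NOT the IEDB tool).
# P(covered at a locus) = 1 - (1 - sum of covered allele frequencies)^2 under HWE;
# loci are then combined assuming independence. All frequencies are hypothetical.
covered_allele_freqs = {
    "HLA-A":    [0.14, 0.09],
    "HLA-B":    [0.05, 0.03],
    "HLA-DRB1": [0.10, 0.07],
}

p_not_covered = 1.0
for locus, freqs in covered_allele_freqs.items():
    p_locus = 1.0 - (1.0 - sum(freqs)) ** 2   # at least one covered allele at locus
    p_not_covered *= (1.0 - p_locus)

print(f"Estimated population coverage: {1.0 - p_not_covered:.1%}")
```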
In silico immune response simulation against the vaccine
In the study, three vaccine injections were administered to simulate the immune response (Figure 3). The second and third injections elicited higher immune responses compared to the primary injection. Immunoglobulin M (IgM) levels were higher than IgG levels, and the levels of immunoglobulins remained elevated following antigen reduction, suggesting the potential emergence of immune memory. Isotype switching and memory formation of the B-cell population were also observed, with the presence of B-cell isotypes persisting for a prolonged duration. In addition, an increase in CTL and HTL cells with memory generation was observed. Furthermore, macrophage activity was enhanced, while dendritic cell activity remained stable. Levels of IFN-γ and IL-2 cytokines were also increased. Epithelial cells, which are components of innate immunity, were augmented as well. Lastly, the Simpson index (D) was low, indicating a diverse immune response. Overall, these results suggest that the multi-epitope mRNA vaccine construct elicits a robust and diverse immune response against P. aeruginosa.
Codon optimization of the mRNA construct
To enhance the translation of the mRNA vaccine construct within host cells, codon optimization tools were utilized. The GenSmart Codon Optimization tool (G.S.) was used to optimize the vaccine sequence for efficient expression in human cells. The length of the CDS was 1824 nucleotides, and the rare codon analysis tool from G.S. was employed to evaluate the quality of the optimized construct. The CAI value was estimated to be 0.96 (Figure 4A), which is acceptable since it exceeds the cut-off value of 0.8. In addition, the optimized construct's G.C. content was evaluated, and it was found that the optimal percentage of G.C. content should be ∼30-70% to ensure efficient expression in the human host. The average G.C. percentage of the optimized construct was 76.53%. This indicates that no codons should impede translation efficiency or function, as any codon with a frequency value lower than 30 could reduce or stall the translational machinery. Overall, these analyses suggest that the codon-optimized mRNA vaccine construct can be efficiently expressed in human cells.
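The GC checks mentioned above are straightforward to reproduce locally. The sketch below computes the overall and windowed G+C content of a placeholder CDS against the quoted 30-70% guideline; the window length is an arbitrary choice.

```python
# Hedged sketch: overall and windowed GC content of the optimized CDS.
# The sequence and the 60-nt window length are placeholders.
def gc_percent(seq):
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / max(len(seq), 1)

cds = "ATGGCC" * 304   # placeholder CDS, ~1824 nt like the optimized construct

print(f"Overall GC: {gc_percent(cds):.2f}%")

window = 60
flagged = [i for i in range(0, len(cds) - window + 1, window)
           if not 30.0 <= gc_percent(cds[i:i + window]) <= 70.0]
print(f"{len(flagged)} windows fall outside the 30-70% GC range")
```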
Prediction of the secondary structure of the mRNA vaccine
The RNAfold server was used to predict the structure of the mRNA vaccine construct. The optimized codons of the construct were used as input, and the free energies of the structure were assessed using the server. The results indicated that the mRNA vaccine construct would be stable when manufactured, with the MFE of the structure calculated to be −739.10 kcal/mol (Figure 4B). In addition, the secondary centroid structure had a free energy of −680.30 kcal/mol (Figure 4C). These findings suggest that the mRNA vaccine construct can be efficiently manufactured and is structurally stable, potentially enhancing its efficacy as a vaccine.
Prediction and validation of the secondary and tertiary structures of the translated construct
The PSIPRED web service was used by researchers to further analyze the structure of the mRNA vaccine construct and predict its secondary structure. The alpha helices that predominated in the structure are shown in Figure 5A (Supplementary file 1).
In addition, the Robetta server was used to predict the tertiary structure of the peptide (Figure 5B), and the PROCHECK server was employed to verify the stereochemical accuracy of the structure. The Ramachandran plot presented in Figure 5C indicated that ∼83.1% of the residues were in the most favored regions, 13.2% in the additionally allowed zone and 1.8% in the generously allowed regions. The overall quality factor was 93.5275. Moreover, a negative Z-score (−7.09) was predicted by the ProSA-web, indicating that the 3D protein model is highly consistent (Figure 5D). Overall, these analyses suggest that the structure of the mRNA vaccine construct is stable, accurate and consistent, potentially enhancing its efficacy as a vaccine.
Conformational B-cell epitope prediction
The ElliPro server was used by researchers to predict the conformational B-cell epitopes generated from the folding of the model protein. The results revealed the prediction of six discontinuous B-cell epitopes, with a prediction score ranging from 0.515 to 0.791 for 215 residues. The 2D and 3D models of these conformational B-cell epitopes are shown in Figure 6I and II, respectively. These findings suggest that strong B-cell immune responses against P. aeruginosa infections can potentially be elicited by the mRNA vaccine construct.
Molecular dynamics simulation
To further analyze the vaccine-TLR3 and vaccine-TLR4 complexes, molecular dynamics simulation was performed using the iMODS server, while receptor-ligand interactions were assessed. The deformable loci of the construct were represented by peaks in the deformability graph (Figures 7B and 8B), which showed amino acids with coiled shapes. Normal mode analysis (NMA) was also conducted to study and characterize protein flexibility, with the B-factor graph (Figures 7C and 8C) depicting the relationship between NMA and PDB areas in the uploaded complex. Eigenvalues of the docked complexes are displayed in Figures 7D and 8D. Overall, these analyses indicated that the vaccine-receptor complex had a low deformability index, high stiffness and high stability.
In Figures 7E and 8E, the covariance matrix showed the connection between amino acid duplets scattered in dynamical regions, with red indicating correlated residues, white representing uncorrelated amino acid duplets and blue representing anti-correlated residues. In addition, a connecting matrix representing the elastic network model was employed to classify which atom pairs were connected by springs (Figures 7F and 8F). Each chain of the complex was found to have high stiffness, with darker gray colors indicating stiffer regions. These findings suggest that the vaccine-receptor complex is stable, rigid and has strong intermolecular interactions, potentially enhancing its efficacy as a vaccine against P. aeruginosa.
DISCUSSION
Pseudomonas aeruginosa is a challenging pathogen due to its antibiotic resistance, and developing effective vaccines against it has become crucial [26]. Research on P. aeruginosa vaccines has been ongoing for over half a century, but despite extensive efforts, there are still no approved vaccines to date [27]. The complexity of P. aeruginosa's pathogenesis, diverse virulence factors, high plasticity within the lung and high diversity of serotypes are significant obstacles in developing an effective vaccine [28]. Both innate and adaptive immune responses play critical roles in combating P. aeruginosa infection. As P. aeruginosa is an extracellular pathogen, humoral, mucosal or systemic opsonizing immunity is most effective in preventing bacterial colonization and infection. However, T-cell responses can also mediate protective immunity in individuals with P. aeruginosa infections [29]. Despite the challenges posed by the emergence of MDR and XDR strains, complex pathogenesis, high serotype diversity, and more, continued research and collaboration among scientists may lead to the development of an effective vaccine to combat this critical health challenge [30].
Immunoinformatic approaches were used in this study to develop a novel mRNA vaccine that is safe, engineered and efficient. mRNA-based vaccine technology has been shown to be effective against various viral infections such as Zika, influenza, rabies, coronavirus and many others [31]. Recently, multiple human clinical trials have begun, indicating that mRNA vaccines are now considered to be a safe and effective alternative to subunit protein, chimeric virus and even DNA-based therapies in the form of vaccination [32]. One of the benefits of mRNA-based vaccines is their ability to induce the transient expression and accumulation of selected antigens in the cytoplasm, which then triggers an immune response against the target pathogen [33]. This approach may offer several advantages over traditional vaccine strategies, including ease of production, rapid development and improved efficacy.
The development of a safe and efficient mRNA vaccine using immunoinformatic approaches represents a promising advancement in the field of vaccination. With increasing preclinical evidence and ongoing clinical trials, mRNA vaccines may become a preferred option for preventing viral infections and related diseases. In this study, a novel in silico multi-epitope mRNA vaccine has been proposed to combat the infection crises caused by P. aeruginosa. The vaccine is based on the major surface proteins of P. aeruginosa that contribute to cell binding and attachment of the bacterium. This approach may offer a potential solution to the challenges posed by P. aeruginosa infections, and further research is needed to evaluate the safety and efficacy of this vaccine. To identify potential epitopes that could induce humoral or cellular responses, the target proteins were examined using web-based tools such as the IEDB database, which predicts epitopes of HTL and CTL based on immune epitope determination, and ABCpred, an online server that anticipates B-cell epitopes using an artificial machine-learning method. The evaluation of epitopes was performed using web servers to determine antigenicity, allergenicity and toxicity. Specific linkers were used to combine the epitopes.
To further refine the vaccine design, an immune simulation was conducted to validate the humoral and cellular responses of the vaccine. The vaccine construct's targeted epitopes had 38 corresponding MHC alleles. The four chosen epitopes were subjected to molecular docking analysis, as ligand-epitope interaction is essential in vaccine design. Docking analysis was performed among the chosen epitopes and their corresponding MHC alleles using ClusPro, which predicts binding affinity and bond formation between the receptor and ligands. The interactions among the epitopes and MHC pockets were also analyzed. These steps were taken to ensure that the vaccine design was optimized for maximum efficacy.
This study proposes a novel in silico multi-epitope mRNA vaccine against P. aeruginosa based on major surface proteins of the bacterium. Various bioinformatics tools were utilized to predict and evaluate epitopes, and immune simulations were conducted to validate the vaccine's effectiveness. The results of molecular docking analysis suggest strong binding affinity between the chosen epitopes and their corresponding MHC alleles, indicating the potential efficacy of the proposed vaccine. It is important to note that vaccination is only effective in individuals with a particular MHC allele that can bind the epitope. Therefore, the IEDB population coverage tool was used to predict that the proposed vaccine would cover 90.13% of the world's population. This indicates that the vaccine has the potential to be widely effective. Moreover, to evaluate the vaccine's capacity to interact with immune receptors, the TLR-3 and TLR-4 immune receptors were docked with the vaccine construct. The findings showed that the vaccine has a strong affinity for binding to TLR-4 and TLR-3, indicating the potential for triggering both innate and adaptive immunity. This is a promising result as it suggests that the vaccine has the potential to stimulate a robust immune response. Overall, the proposed vaccine shows great potential for providing protection against P. aeruginosa infection.
The stability of the vaccine complex was investigated using molecular dynamics simulation, which showed that the proposed mRNA vaccine's peptide sequence is stable and thermostable. Immunoinformatic approaches were also used to evaluate the vaccine's antigenicity, allergenicity and hydrophilicity, and the results indicate that the vaccine is antigenic, non-allergenic and hydrophilic. In this study, the proposed in silico multi-epitope mRNA vaccine against P. aeruginosa has been evaluated for its potential effectiveness in triggering an immune response. The predicted population coverage is high, and the vaccine construct has a strong affinity for binding to immune receptors TLR-4 and TLR-3. In addition, molecular dynamics simulations indicate that the vaccine complex is stable, antigenic and non-allergenic. These findings suggest that the proposed vaccine may be a promising approach to combat P. aeruginosa infections. Overall, the study provides valuable insights into the development of effective vaccines against P. aeruginosa and highlights the potential of in silico approaches for vaccine design.
CONCLUSION
In conclusion, the proposed design of a novel multi-epitope mRNA vaccine for P. aeruginosa in this study provides a promising framework for future research in the field of vaccination against this bacterium. However, it is important to note that further in vitro and in vivo studies are necessary to confirm the findings of this study and to evaluate the vaccine's safety, efficacy and potential limitations in real-world scenarios. Successful development of an effective vaccine against P. aeruginosa could have significant implications for public health by reducing the morbidity and mortality associated with this pathogen. The proposed vaccine's high predicted population coverage and strong affinity for immune receptors TLR-4 and TLR-3 suggest that it may be a promising approach to combat P. aeruginosa infections. Overall, this study highlights the potential of in silico approaches for vaccine design and provides valuable insights into the development of effective vaccines against P. aeruginosa.
Key Points
• In designing an effective mRNA vaccine for P. aeruginosa, surface proteins critical for the binding and entry of the bacterium into host cells could serve as suitable targets.
• The results indicated that the designed mRNA construct could be an effective and promising vaccine that requires laboratory and clinical trials.
• The proposed design of a novel multi-epitope mRNA vaccine for P. aeruginosa in this study provides a promising framework for future research in the field of vaccination against this bacterium.
Figure 1 .
Figure 1. Docking between the epitopes and their corresponding MHC allele.
Figure 2 .
Figure 2. Epitopes and their corresponding MHC allele interaction using the LIGPLOT webserver.
Figure 3 .
Figure 3. In silico immune simulation against the mRNA vaccine retrieved from the C-ImmSim server (https://kraken.iac.rm.cnr.it/C-IMMSIM/). (A) The immunoglobulin production after antigen injection. (B) The B-cell population after three injections. (C) The B-cell population per state. (D) The helper T-cell population. (E) The helper T-cell population per state. (F) The cytotoxic T-cell population per state. (G) Macrophage population per state. (H) Dendritic cell population per state. (I) Cytokine and interleukin production with Simpson Index of the immune response.
Figure 4 .
Figure 4. Codon optimization and mRNA vaccine structure prediction: (A) CAI value; (B) optimal secondary structure; (C) centroid secondary structure of the vaccine mRNA retrieved using RNAfold webserver.
Figure 5 .
Figure 5. Structure prediction and validation of the peptide vaccine construct: (A) tertiary structure of the peptide using the Robetta server; (B) Ramachandran plot analysis using the PROCHECK server; (C) Z-score analysis using ProSA webserver.
Figure 6 .
Figure 6. The six predicted conformational B-cell epitopes using the ElliPro tool of the IEDB database: (I) 2D diagram of the positions of conformational B-cell epitopes. (II) The 3D models of B-cell epitopes. The spheres represent the conformational B-cell epitopes. (A) 14 residues with a score of 0.613. (B) 37 residues with a score of 0.779. (C) 32 residues with a score of 0.791. (D) 120 residues with a score of 0.681. (E) 7 residues with a score of 0.582. (F) 5 residues with a score of 0.515.
Table 1 :
Cell type and sequence of epitope in this study
Table 2 :
Selected T-lymphocyte epitopes and their corresponding MHC alleles' protein name
Table 3 :
Docking analysis of some CTL epitopes with their corresponding MHC alleles
Table 4 :
The physicochemical properties of the translated form of the proposed mRNA vaccine | 2024-01-09T06:17:28.701Z | 2023-11-22T00:00:00.000 | {
"year": 2024,
"sha1": "05082c9f7a7cf73ceacb5b4e7360c6c590eb4af5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3e004fd8ff426d52adfbb5d399b4170a7d696e52",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239022295 | pes2o/s2orc | v3-fos-license | The Use of Corticosteroids or Tocilizumab in COVID-19 Based on Inflammatory Markers
Background The inflammatory cascade is the main cause of death in COVID-19 patients. Corticosteroids (CS) and tocilizumab (TCZ) are available to treat this escalation but which patients to administer it remains undefined. Objective We aimed to evaluate the efficacy of immunosuppressive/anti-inflammatory therapy in COVID-19, based on the degree of inflammation. Design A retrospective cohort study with data on patients collected and followed up from March 1st, 2020, to May 1st, 2021, from the nationwide Spanish SEMI-COVID-19 Registry. Patients under treatment with CS vs. those under CS plus TCZ were compared. Effectiveness was explored in 3 risk categories (low, intermediate, high) based on lymphocyte count, C-reactive protein (CRP), lactate dehydrogenase (LDH), ferritin, and d-dimer values. Patients A total of 21,962 patients were included in the Registry by May 2021. Of these, 5940 met the inclusion criteria for the present study (5332 were treated with CS and 608 with CS plus TCZ). Main Measures The primary outcome of the study was in-hospital mortality. Secondary outcomes were the composite variable of in-hospital mortality, requirement for high-flow nasal cannula (HFNC), non-invasive mechanical ventilation (NIMV), invasive mechanical ventilation (IMV), or intensive care unit (ICU) admission. Key Results A total of 5940 met the inclusion criteria for the present study (5332 were treated with CS and 608 with CS plus TCZ). No significant differences were observed in either the low/intermediate-risk category (1.5% vs. 7.4%, p=0.175) or the high-risk category (23.1% vs. 20%, p=0.223) after propensity score matching. A statistically significant lower mortality was observed in the very high–risk category (31.9% vs. 23.9%, p=0.049). Conclusions The prescription of CS alone or in combination with TCZ should be based on the degrees of inflammation and reserve the CS plus TCZ combination for patients at high and especially very high risk. Supplementary Information The online version contains supplementary material available at 10.1007/s11606-021-07146-0.
Study Design, Patient Selection, and Data Collection
This is a retrospective cohort study with data on patients collected and followed up from March 1st, 2020, to May 1st, 2021, from the nationwide Spanish SEMI-COVID-19 Registry. 19 This is a multicenter, nationwide registry with over 150 hospitals. All included patients were diagnosed by polymerase chain reaction (PCR) test taken from a nasopharyngeal sample, sputum, or bronchoalveolar lavage. The collection of data from each patient in terms of laboratory data, treatments, and outcomes was verified by the principal investigator of each center through the review of clinical records. All participating centers in the register received approval from the relevant Ethics Committees, including Bellvitge University Hospital (PR 128/20).
Inclusion Criteria
The group that received only CS was considered the standard of care (SOC) for hospitalized patients. We included patients whose CS use started within the first 72 h after hospital admission and before the onset of high-flow nasal cannula (HFNC), non-invasive mechanical ventilation (NIMV), invasive mechanical ventilation (IMV), or the requirement of intensive care unit (ICU) admission. The CS plus TCZ group included patients who received both drugs in the first 72 h after hospital admission and also before the onset of HFNC, NIMV, IMV, or ICU admission.
Exclusion Criteria
We excluded patients who did not receive CS, or received it more than 3 days after hospitalization, those with a nosocomial infection, and those who died within 24 h.
Treatments Prescribed and Definitions of Groups
We divided the cohort into 2 groups: patients who received solely CS, and patients who received both CS and TCZ. The usual dose of TCZ in our country was 4-8 mg/kg iv, generally in a single dose, although some additional doses are allowed at the discretion of the responsible physician.
Regarding antiviral treatment, the use of antivirals (lopinavir/ritonavir, 20 remdesivir 21 ), hydroxychloroquine, 22 and azithromycin 22 was allowed according to the recommendations of the Spanish Ministry of Health.
Degrees of Inflammation
We previously reported 3 categories of risk (low, intermediate, and high risk) based on the total lymphocyte count and the C-reactive protein (CRP), lactate dehydrogenase (LDH), ferritin, and D-dimer values taken at the time of admission (Table S1). 18 The high-risk category was defined by the presence of at least 1 of the 5 criteria above the previously defined cutoff. In addition, for the present study, a very high-risk category was added, defined as the presence of 3 or more high-risk criteria upon admission (Table S1).
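A sketch of this stratification is given below. The cut-off values live in Table S1, which is not reproduced here, so the thresholds in the code are placeholders; the distinction between the low and intermediate categories is also omitted.

```python
# Hedged sketch of the risk stratification described above.
# The cut-offs below are placeholders; the real values are defined in Table S1.
HIGH_RISK_RULES = {
    "lymphocytes": lambda x: x < 800,     # cells/uL, placeholder cut-off
    "crp":         lambda x: x > 100,     # mg/L, placeholder
    "ldh":         lambda x: x > 400,     # U/L, placeholder
    "ferritin":    lambda x: x > 1000,    # ng/mL, placeholder
    "d_dimer":     lambda x: x > 1000,    # ng/mL, placeholder
}

def risk_category(labs):
    """labs: dict of admission values for the five inflammatory markers."""
    n_high = sum(rule(labs[k]) for k, rule in HIGH_RISK_RULES.items() if k in labs)
    if n_high >= 3:
        return "very high"       # 3-5 criteria: category added for this study
    if n_high >= 1:
        return "high"            # at least 1 criterion above its cut-off
    return "low/intermediate"    # low vs. intermediate split omitted here

print(risk_category({"lymphocytes": 600, "crp": 150, "ldh": 500,
                     "ferritin": 800, "d_dimer": 900}))
```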
Outcome Definition
The primary outcome of our study was in-hospital mortality. Secondary outcomes included length of stay (LOS), and the requirement of HFNC, NIMV, IMV, and ICU admission.
Statistical Analysis
Categorical variables were expressed as absolute numbers and percentages. Continuous variables are expressed as mean plus standard deviation (SD) in the case of parametric distribution or median [IQR] in the case of non-parametric distribution.
Differences among groups were assessed using the chi-square test for categorical variables and the t-test or Mann-Whitney test as appropriate for continuous variables. p values< 0.05 indicated statistical significance. For the study of risk factors associated with in-hospital mortality, univariate and multivariate binary logistic regression was performed. For the latter, variables with p<0.10 in the univariate study plus age and gender were included. Differences in mortality were shown graphically using Kaplan-Meier curves with their log-rank test (event: death; censored data: hospital discharge). Missing data were treated with multiple imputations. To improve the comparability of the groups, propensity score matching (PSM) was performed. This included age, sex, body mass index (BMI), race, smoking behavior, days from onset to admission, all comorbidities, Charlson index, heart rate on admission, tachypnea on admission, PaO2/FiO2, lymphocyte count, CRP, LDH, ferritin, D-dimer, remdesivir treatment, and prescription of low-molecular-weight heparins (LMWH) during admission.
Treatments Between Groups
The treatments received in both groups are shown in Table S2. The CS group less frequently received remdesivir (9.8% vs. 14.3%) as well as intermediate (12.2% vs. 22.5%) or full doses of LMWH (14.3% vs. 22.2%). These differences disappeared after PSM.
The CS regimen was not standard in all patients. There were significant differences between both groups in the maximum dose of prednisone or equivalent (75 mg vs. 100 mg), days of treatment (7 days vs. 8 days), and cumulative dose (400 mg vs. 600 mg) (Figure S1). After PSM, we found no differences in the low/intermediate-risk (1.5% vs. 7.4%, p=0.175) or the high-risk category (23.1% vs. 20%, p=0.223).
Risk Factors for In-Hospital Mortality
The independent risk factors for mortality in the high-risk category were age, male sex, moderate/severe dependency, higher Charlson index, tachypnea on admission, and lower PaO2/FiO2 (Table 6). The use of TCZ showed a trend of benefit that did not reach statistical significance as an independent protective factor. The very high-risk category showed similar results (data not shown). The AUC of the final model was 0.792 (Figure S2).
DISCUSSION
We found that higher degrees of inflammation responded to combination therapy, consistent with COVID-19 as an inflammatory disease. Treatment should be risk-stratified based on inflammation. At present, the approach to the disease has been heterogeneous and often based on oxygenation/ventilation status. In order to evaluate the efficacy of immunosuppressive/anti-inflammatory treatments, the patients' degree of inflammation has to be taken into account. The degree of inflammation is difficult to assess in most studies, which appear to include many patients with low degrees of inflammation. It is thus difficult to know the real efficacy of these drugs and to explain differences in efficacy between observational studies and clinical trials [23].
Our results suggest that the greater the inflammation, the more effective these drugs will be. Our group previously described 3 categories of inflammation based on 5 parameters at admission (lymphopenia, CRP, LDH, ferritin, and D-dimer). 18 Since the low-risk category rarely requires hospital admission, it is the least numerous in our national series.
Our study shows that the addition of TCZ does not provide benefit in the low/intermediate-risk category. While the combination reduced mortality in the high-risk group, we did not achieve statistical significance due to inadequate power. Patients classified as very high risk (3-5 high-risk criteria) showed a statistically significant reduction in mortality.
Our secondary outcomes (use of HFNC, NIMV, and IMV, and admission to the ICU) suggest that the CS+TCZ group had more severe disease despite PSM. The sociodemographic, clinical, and analytical data included could not fully capture patient severity. Our patients were on oxygen therapy (not high-flow) at the time of treatment initiation (CS patients or CS+TCZ patients). However, we do not know the exact FiO2 they were receiving; it is possible that the combination of CS+TCZ was used in patients requiring higher amounts of oxygen.
The strengths of our study include its large size and national representativeness. In addition, the therapeutic approach based on degree of inflammation is a good approximation to clinical practice decision-making.
Our study also has some limitations. First, it is a retrospective study. Second, it comes from a multicenter registry, with the heterogeneity that this implies, though we used standardized definitions. Another limitation to be taken into account is the heterogeneity in CS dosage and administration time, as well as the lack of information on important variables that might trigger the addition of TCZ, such as oxygen requirement.
In conclusion, the prescription of CS alone or in combination with TCZ should be based on the degrees of inflammation and reserve the CS plus TCZ combination for patients at high and especially very high risk. | 2021-10-19T13:41:50.401Z | 2021-10-18T00:00:00.000 | {
"year": 2021,
"sha1": "b5e4992888948b6918843b44537b63aaee299d01",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11606-021-07146-0.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b5e4992888948b6918843b44537b63aaee299d01",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119305246 | pes2o/s2orc | v3-fos-license | The XDSPRES CL-based package for reducing OSIRIS cross-dispersed spectra
We present a description of the CL-based package XDSPRES, which aims at being a complete reduction facility for cross-dispersed spectra taken with the Ohio State Infrared Imager/Spectrometer, as installed at the SOAR telescope. This instrument provides spectra in the range between 1.2µm and 2.35µm in a single exposure, with a resolving power of R ~ 1200. XDSPRES consists of two tasks, namely xdflat and doosiris. The former is a completely automated code for preparing normalized flat field images from raw flat field exposures. Doosiris was designed to be a complete reduction pipeline, requiring a minimum of user interaction. General steps towards a fully reduced spectrum are explained, as well as the approach adopted by our code. The software is available to the community through the web site http://www.if.ufrgs.br/~ruschel/software.
Introduction
Cross-dispersed spectroscopy makes possible to acquire information of wide spectral regions in a single exposure, by projecting several dispersion axes on the detector simultaneously. As a consequence, the reduction process required to analyze this kind of data is complicated, since different diffraction orders need to be selected, extracted, calibrated independently and combined in the final step. This difficulty led many authors to develop methods and software packages for the reduction of cross-dispersed and echelle spectra (e.g. Moreno et al. 1982;Rossi et al. 1985;Piskunov & Valenti 2002;Bochanski et al. 2009).
In the past decade the near infrared (NIR) has also been explored by cross-dispersed spectrographs, such as Spex at the NASA Infrared Telescope Facility (IRTF), with a resolving power of ∼ 2000 and reaching from 0.8 to 5.5µm. Other examples are Triple-Spec (Edelstein et al. 2007) and the Folded-port Infrared Echellette (FIRE) (Simcoe et al. 2008), achieving R ∼ 2600 and R ∼ 6000 respectively, and covering roughly the same wavelength domain (0.8 -2.4µm).
Another instrument of similar capabilities is the Ohio State Infrared Imager/Spectrometer (OSIRIS), currently installed at the Southern Astrophysics Research Observatory (SOAR), attached to the 4.1m telescope. OSIRIS provides spectral coverage from 1.0µm to 2.4µm in cross-dispersed mode, with a resolving power of ∼ 1200. High resolution (R ∼ 3000) long-slit modes are also available, but multi-band spectroscopy of this kind suffers from differences in aperture and seeing.
However, reduction of NIR spectra has a complexity of its own, mostly related to telluric spectral features, both in absorption and emission, and black body radiation due to the telescope itself. A rich literature has been developed on the subject (e.g. Maiolino et al. 1996;Vacca et al. 2003;Cushing et al. 2004).
There are currently no specific software packages available for the reduction of cross-dispersed spectra taken with OSIRIS. Aiming at providing a fast and highly automated task, we developed the xdspres (acronym for cross-dispersed spectra reduction script) package. The CL language was chosen due to the availability of almost all of the basic tasks needed to perform the reduction in the Image Reduction and Analysis Facility (IRAF) software (Tody 1986(Tody , 1993.
In §2 we describe main aspects of the instrument, focusing on its effects on the reduction process. In §3 we describe the general steps towards a fully reduced spectrum, as well as the approach adopted by the xdspres package to each of these steps, and finally in §4 we give a brief summary.
OSIRIS
In this section we discuss the main aspects of the cross-dispersed mode of OSIRIS, with special attention to those characteristics that are relevant to the reduction process. A complete description of the instrument can be found in its on line User's Manual 1 .
The detector is a 1024x1024 HAWAII array (Hodapp et al. 1996), sensitive to wavelengths of up to 2.5µm. Equation 1 models the non-linear behavior of the array, which only becomes critical above 28,000 counts. Usually the detector is read only at the end of the integration, but since it can be read non-destructively different sampling methods could be implemented.
A residual image is sometimes seen, especially when bright sources are observed in acquisition mode. This means that occasionally some of the first spectra taken after the target acquisition images have to be discarded. Residuals have approximately 2% of the intensity of the original source, and this should not be a problem for science exposures with typical counts below one thousand.
In cross-dispersed mode OSIRIS projects almost six orders on the detector, from which three are extracted. Wavelength coverage for each of the extracted orders are 1.2 -1.5, 1.5 -1.9 and 1.9 -2.35µm, for the J, H and K bands respectively, all of them with R ∼ 1200. Orders that are not extracted include a small portion of the J band (1.0 -1.2µm), and second order duplicates of the J band, located to the right of the K band. Figure 1 shows an example of sky spectrum where the three main orders are evident. Orders that are not extracted are also visible in figure 2.
From figure 1 it can also be noted that dispersion axes are nearly vertical, meaning that within a given aperture each line corresponds to a particular wavelength. The misalignment between detector lines and wavelength coordinate are less than one pixel from one end of the slit to the other, or less than one third of the full width at half maximum (FWHM) of a emission line in the J band. Therefore corrections to dispersion axis orientation were not attempted, and all extractions assume a vertical dispersion.
Flat field
In cross-dispersed mode, flat field images are taken with the cross-dispersing grism already positioned, which results in a spectrum of the flat field lamp, rather than an evenly illuminated image. Since the main purpose of a flat field is to identify pixel-to-pixel variations which are intrinsic to the detector, the continuum that corresponds to the spectral energy distribution of the lamp has to be removed. Moreover, two sets of flat fields are needed, one with the flat field lamp on and another with the lamp off. The latter is required because thermal radiation from the telescope becomes appreciable in the low energy end of the spectrum, as can be seen in Fig. 2. Typical sets consist of 10 exposures of each kind.
The xdflat task automates the preparation of a normalized flat field image, which will be later used to correct the science images. First it applies a linearity correction to all flat field images, according to equation 1. Then both sets (flat-on and flat-off) are averaged independently, and the resulting flat-off image is subtracted from the flat-on. We have omitted a figure showing the subtracted flat because it is visually identical to the flat-on. The only noticeable difference is the suppression of a few hot pixels at the lower portion of the image.
To remove the spectrum of the flat field lamp, xdflat begins by extracting each order. Apertures are identified by a centering algorithm (apfind) that searches for three local maxima in the central lines of the chip. The peaks are assumed to be separated by more than 30 pixels and have an approximate width of 80 pixels. Aperture sizes are reevaluated by setting the borders at 20% of the peak intensity of each order. A tracing algorithm (aptrace) moves in regular five pixel steps along the dispersion axis, assessing changes in peak location for each order, leading to a two-dimensional description of the aperture position. The aperture tracing function, a second order Legendre polynomial, is fitted to predefined sample regions of the chip that are less affected by scattered light. Errors in the two-dimensional aperture border definitions are usually below three pixels.
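A rough Python translation of this aperture-finding and tracing scheme is sketched below; the actual pipeline relies on IRAF's apfind and aptrace, so the parameter choices here are only indicative.

```python
# Hedged sketch of aperture finding and tracing; 'img' is a 1024x1024 frame with
# a nearly vertical dispersion axis (rows = wavelength, columns = spatial axis).
import numpy as np
from numpy.polynomial import Legendre
from scipy.signal import find_peaks

def find_apertures(img, n_orders=3, min_sep=30, approx_width=80):
    mid = img.shape[0] // 2
    central = img[mid - 5: mid + 5].mean(axis=0)          # average central lines
    peaks, _ = find_peaks(central, distance=min_sep, width=approx_width // 4)
    strongest = peaks[np.argsort(central[peaks])[-n_orders:]]
    return sorted(int(p) for p in strongest)

def trace_aperture(img, centre, half_width=40, step=5, deg=2):
    rows, centres = [], []
    for r in range(0, img.shape[0], step):
        cut = img[r, centre - half_width: centre + half_width]
        x = np.arange(centre - half_width, centre + half_width)
        rows.append(r)
        centres.append(np.sum(x * cut) / (np.sum(cut) + 1e-12))  # weighted centroid
    return Legendre.fit(rows, centres, deg)    # 2nd-order Legendre trace function

# usage: traces = [trace_aperture(img, c) for c in find_apertures(img)]
```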
A 30th order Legendre polynomial is fit to the spectrum, which is then normalized. Such a high order polynomial is justified by the complex pattern produced by the flat field lamp as it passes through the spectrometer, as shown by figure 3. Artificial oscillations at the apertures' limits are ignored after extraction. Typical RMS of the fit is below 5000 ADU, which may seem high but actually amounts to roughly 2% of the average signal. The final flat-field image has all its pixel counts set to 1, except those on the regions occupied by the spectrum, which are replaced by the ratio between the original count and the fitted polynomial.
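The continuum removal itself reduces to a high-order fit along the dispersion axis of each extracted order, as in the sketch below (illustrative only; the pipeline performs this step within IRAF).

```python
# Hedged sketch: remove the lamp continuum from one extracted order by fitting a
# 30th-order Legendre polynomial along the dispersion axis and dividing it out.
import numpy as np
from numpy.polynomial import Legendre

def normalize_flat_order(flat_spectrum, deg=30):
    """flat_spectrum: 1-D extracted lamp spectrum (counts vs. detector row)."""
    rows = np.arange(flat_spectrum.size)
    continuum = Legendre.fit(rows, flat_spectrum, deg)(rows)
    return flat_spectrum / continuum    # pixel-to-pixel response, ~1 on average
```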
Subtraction Object -Sky
In the NIR spectral region the atmosphere plays an important role. Besides a significant telluric absorption, several atmospheric emission lines are entangled with the spectrum of the astronomical source (see figure 1 for a sample spectrum of the sky, where the J, H and K bands are identified).
The process of removing telluric emission lines is commonly known as sky subtraction, or sky chopping, and the angular size of the target dictates whether additional off-source exposures are required. In the case of point sources, which occupy only a small fraction of the slit, one can take exposures with the source in two different positions along the slit, and later subtract subsequent images. This is the technique employed to obtain the spectra of standard stars, and it makes more efficient use of telescope time. When extended sources are concerned, a separate set of exposures taken from a nearby dark region of the sky is needed, a process commonly referred to as nodding.
The doosiris task was developed to reduce spectra from extended sources, therefore it assumes that a set of sky exposures was taken along with the science exposures, in order to remove the telluric emission lines. There are two ways by which users can inform the software about the nature of each image, namely: interactively identifying them via SAO Image DS9, or providing an ASCII file with the type of exposure with respect to its numerical order. For further details refer to the xdspres Manual. No attempt was made to provide a software solution for identifying different types of exposures, as specific criteria regarding the spectrum of the astronomical target would have to be predefined, adding, in our judgement, unnecessary complexity to the code.
Nodding patterns that make best use of telescope time use each sky exposure in more than one subtraction, as in O-S-O or O-S-O-O-S 2 . It is thus impractical to simply subtract a combination of sky images from an equivalent combination of target ones. Instead of assessing the relevant physical quantities, a routine searches for the best telluric calibrator based on the file name index, assuming that these are sequentially numbered after the time of exposure.
Extraction and Wavelength Calibration
Extraction of science spectra follows the same procedures that were described in section 3.1 3 . The sky spectrum is extracted using the same aperture definitions of the target spectrum.
Wavelength calibration is based on strong OH lines present in the sky exposures, a sample of which is shown in figure 5. As of the moment of the publication of this paper, OSIRIS presents what appears to be an illumination problem that produces lines across the detector, in the direction perpendicular to the dispersion axis. Since these lines can lead to confusion in the OH line identification process, a high order polynomial is used to fit and remove the vertical profile identified between columns 980 and 1024 (see figure 4).
Interactive line identification is usually the best option, and since the dispersion function is almost linear there is no need for manually identifying more than four well spaced features. If the dispersion function fitting was successful, the unidentified features will match those in the line list provided with xdspres, which was extracted from Oliva & Origlia (1992). Doosiris also provides an option to automatically identify OH features in the spectrum of the sky that uses the reidentify task, which requires a previously identified image. For ∼ 20 identified features, typical residuals are below 2 pixels, which translates into roughly ± 50 km s −1 .
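Because the dispersion is nearly linear, the wavelength solution amounts to a low-order polynomial fit to the measured pixel centres of identified OH lines, as sketched below; the pixel positions and wavelengths shown are placeholders, not entries from the bundled line list.

```python
# Hedged sketch of the dispersion solution from identified OH sky lines.
# The pixel/wavelength pairs below are illustrative placeholders only.
import numpy as np

pix = np.array([112.3, 348.9, 601.4, 877.0])     # measured line centres (pixels)
lam = np.array([1.50, 1.58, 1.67, 1.77])          # catalogue wavelengths (µm)

coeff = np.polyfit(pix, lam, deg=2)               # nearly linear dispersion
dispersion = np.poly1d(coeff)

residuals = lam - dispersion(pix)
print("RMS residual [µm]:", np.sqrt(np.mean(residuals ** 2)))

wavelength_axis = dispersion(np.arange(1024))     # wavelength for every row
```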
Telluric Removal and Flux calibration
The subtraction of sky exposures from the on source images, obviously can only account for emission features of the atmosphere. To deal with the more subtle problem of removing atmospheric absorption doosiris by default uses the spectrum of an A0V star, that should be obtained just before or after the science images. If the observed standard star has a different spectral type, the model atmosphere spectra, mentioned below, have to be replaced accordingly.
The standard star, being a point source, does not need a separate set of sky exposures, because it occupies only a small fraction of the slit. Doosiris is prepared to manage two or three different star positions on the slit. In either case subsequent exposures are subtracted and the resulting images are summed; a sample of this sum can be seen in figure 6. After division by the normalized flat field image, both spectra are extracted and summed.
The spectrum of an A0V star is almost devoid of metallic absorption lines, but the H lines that are present need to be eliminated before it can be applied to the science spectrum as a telluric calibrator. The method employed here follows the reasoning of Vacca et al. (2003), but with a different implementation. It basically consists of dividing the spectrum of the standard star by a model atmosphere of Vega, obtained from R. Kurucz.
First the model of Vega was smoothed by a Gaussian with σ equal to the FWHM measured in a NeAr calibration lamp, to match the resolving power of the standard-star spectrum. A spline was then adjusted to the continuum, leaving a pure absorption spectrum. The latter is provided with the xdspres package. The actual division of the reference star is performed by the telluric task, which allows for the shifting and scaling of the model. Figure 7 shows a comparison between the observed spectrum of the standard star and a model atmosphere for Vega.
Once the absorption lines due to the stellar atmosphere have been removed, the spectrum becomes essentially a black body with telluric features. Its normalization by a polynomial that acts as a pseudo-continuum returns a purely telluric spectrum. Unabsorbed regions, which translate into sample regions for continuum fitting, were identified with the aid of NSO/Kitt Peak FTS data produced by NSF/NOAO. The division of the science spectrum by this telluric spectrum also allows shifting and scaling. Some of the strongest telluric bands cannot be fully removed; additionally, the high absorption in these regions causes a significant decrease in S/N. The same polynomial employed as a pseudo-continuum for the reference star is later used to produce an independent sensitivity function for each aperture, by comparing it to a black body of 9480 K. This procedure restores the correct slope of the spectrum regardless of the accuracy in absolute flux. The latter is estimated from the exposure time and magnitude of the standard star, which has to be provided by the user. Figure 8 shows the effects of telluric line removal and flux calibration on a sample spectrum.
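The steps above can be summarized in a minimal Python sketch. This is not the IRAF/telluric implementation: the function name, the Gaussian smoothing in pixel units, the polynomial order, and the Planck-function normalization are assumptions; wavelengths are taken to be in microns.

```python
# Sketch of the telluric-calibration logic: remove stellar H lines with a smoothed
# Vega model, fit a pseudo-continuum, and derive telluric and sensitivity spectra.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def telluric_and_sensitivity(std_flux, vega_model, cont_regions, wave, fwhm_pix, T_bb=9480.0):
    sigma = fwhm_pix / 2.355                         # match the lamp-measured resolution
    vega_smooth = gaussian_filter1d(vega_model, sigma)
    star = std_flux / vega_smooth                    # H lines removed -> black body + telluric
    mask = np.zeros_like(wave, dtype=bool)           # unabsorbed continuum sample regions
    for lo, hi in cont_regions:
        mask |= (wave >= lo) & (wave <= hi)
    coeffs = np.polyfit(wave[mask], star[mask], 5)   # pseudo-continuum polynomial (order assumed)
    pseudo = np.polyval(coeffs, wave)
    telluric = star / pseudo                         # purely telluric spectrum
    # 9480 K Planck function (cgs constants; wave in microns -> cm), normalized to its peak
    lam_cm = wave * 1e-4
    h, c, k = 6.626e-27, 2.998e10, 1.381e-16
    bb = (2 * h * c**2 / lam_cm**5) / (np.exp(h * c / (lam_cm * k * T_bb)) - 1)
    sensitivity = pseudo / (bb / bb.max())           # restores the spectral slope per aperture
    return telluric, sensitivity
```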
One would expect a good flux calibration to lead to a perfect alignment of the spectrum between different apertures. Although this is generally true, it has been observed that agreement is harder to achieve where the H and K bands meet. Strong telluric absorption bands near 1.9µm hinder the evaluation of the sensitivity function, causing large deviations in the final spectrum. Figure 9 shows a completely reduced spectrum encompassing the whole spectral range.
Summary
We have presented the xdspres CL-based package, consisting of the xdflat and doosiris tasks, aimed at being a complete reduction facility for cross-dispersed spectra taken with the OSIRIS spectrometer, currently installed at the SOAR telescope. This particular instrument provides a relatively large spectral coverage, being able to project the full range between 1.2µm and 2.35µm onto the detector in a single exposure. The blazing of different orders in the same image adds complexity to the already lengthy reduction of infrared spectroscopy data. xdspres automatically performs the more mechanical and time-consuming steps of the reduction, while at the same time allowing considerable user interaction in the more subjective stages. In addition, the possibility of a fast reduction provides means to make on-site adjustments to the observation strategy. As a sample of actually published data that was fully reduced with the xdspres tasks, see Riffel et al. (2011). The complete software package and its documentation are available to the community at the web site http://www.if.ufrgs.br/∼ruschel/software.
Fig. — Left: A sample flat-field image with the lamp turned off. Thermal radiation from the telescope can be seen, as the flat-field exposure is taken with the grism already positioned. Right: A flat field with the lamp turned on, clearly showing the three orders in the center. Also visible are: further J-band orders at the lower left and beyond the K band to the right; hot pixels in the lower corners; two groups of cold pixels near the center of the chip; and a small portion of an order at the detector's left border, between lines 400 and 800. Both images were taken with 3.2 s of exposure.
Fig. 3. — Mean flat-field spectrum at the J band (grey) and fitted function (black); the RMS of this fit is ∼4600 ADU, which corresponds to roughly 2% of the average signal. Artificial oscillations of the fit have no practical effect on the science spectrum, as these portions are ignored after extraction.
Fig. — b) The same spectrum after the removal of telluric lines. c) Flux-calibrated spectrum. Areas of strong telluric absorption cannot be fully corrected, and even if they could, the signal-to-noise ratio would still be much lower than in the rest of the spectrum.
Fig. 9. — Flux-calibrated spectrum over the whole spectral range. Aperture transitions are at 1.5µm and 1.9µm. Strong telluric absorption bands that dominate the spectrum between 1.8 and 2.0µm hinder the alignment between the H and K bands. | 2011-07-08T19:59:29.000Z | 2011-07-01T00:00:00.000 | {
"year": 2011,
"sha1": "722cb2ba7ff5284c8f82b793259714f40595aff0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1107.1713",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "722cb2ba7ff5284c8f82b793259714f40595aff0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
254399040 | pes2o/s2orc | v3-fos-license | TULIP: An RNA-seq-based Primary Tumor Type Prediction Tool Using Convolutional Neural Networks
Background: With cancer as one of the leading causes of death worldwide, accurate primary tumor type prediction is critical in identifying genetic factors that can inhibit or slow tumor progression. There have been efforts to categorize primary tumor types with gene expression data using machine learning, and more recently with deep learning, in the last several years. Methods: In this paper, we developed four 1-dimensional (1D) Convolutional Neural Network (CNN) models to classify RNA-seq count data as one of 17 highly represented primary tumor types or 32 primary tumor types regardless of imbalanced representation. Additionally, we adapted the models to take as input either all Ensembl genes (60,483) or protein coding genes only (19,758). Unlike previous work, we avoided selection bias by not filtering genes based on expression values. RNA-seq count data expressed as FPKM-UQ of 9,025 and 10,940 samples from The Cancer Genome Atlas (TCGA) were downloaded from the Genomic Data Commons (GDC) corresponding to 17 and 32 primary tumor types respectively for training and validating the models. Results: All 4 1D-CNN models had an overall accuracy of 94.7% to 97.6% on the test dataset. Further evaluation indicates that the models with protein coding genes only as features performed with better accuracy compared to the models with all Ensembl genes for both 17 and 32 primary tumor types. For all models, the accuracy by primary tumor type was above 80% for most primary tumor types. Conclusions: We packaged all 4 models as a Python-based deep learning classification tool called TULIP (TUmor CLassIfication Predictor) for performing quality control on primary tumor samples and characterizing cancer samples of unknown tumor type. Further optimization of the models is needed to improve the accuracy of certain primary tumor types.
Background
Accounting for nearly 10 million deaths in 2020, cancer is a leading cause of death worldwide and the second leading cause of death in the United States. 1,2 Extensive research has been devoted to improving tools for cancer diagnosis and prognosis and developing targeted cancer therapies. The accumulation of publicly available cancer data has enabled the development of machine learning and deep learning models in several areas of clinical oncology including classification of tumor types and molecular subtyping of cancers. 3 One such resource is the Genomic Data Commons (GDC), a unified repository and cancer knowledge base that includes several cancer genome programs such as The Cancer Genome Atlas (TCGA). 4,5 By harmonizing data from different programs and incoming submissions from researchers, the GDC provides a robust and growing dataset to enable precision oncology research. As more data enters GDC, it is important to address any data ambiguity that may arise with the clinical and/or sample metadata associated with genomics data. A machine learning or deep learning model that can predict with high accuracy the primary tumor type from RNA-seq data can help identify any misclassified primary tumor types, provide the precise primary tumor type of more generalized or missing primary tumor types, and differentiate any samples that do not express similar expression profiles to the assigned primary tumor type for further analyses.
Several machine learning and deep learning models have been developed for tumor classification of RNA-seq data. For example, Ahn et al. 5 built a fully connected deep neural network (DNN) to differentiate tumor versus normal samples. Similarly, Park et al. 6 constructed PathDeep, a biological function structure based DNN, to discriminate between cancer and normal tissues. Others such as Li et al. 7 12 We first downloaded TCGA RNA-seq data of 32 primary tumors from GDC. Based on the sample distribution of the primary tumors, we developed 2 types of 1D-CNN models that can either classify 17 primary tumor types that had at least 300 samples or all 32 primary tumor types regardless of sample size. In addition, we also experimented with the number of genes or features of the models to determine the accuracy of the models when all 60K genes or all 19K protein coding genes are used. In total, we had 4 different 1D-CNN models that had an overall accuracy of 94.7% to 97.6% on the test dataset. Given the performance values, we created a Python-based deep learning classification tool called TULIP (TUmor CLassIfication Predictor) incorporating our 1D-CNN models to serve as a QC tool for the cancer research community. Lastly, we tested the use of our tool on kidney cancer RNA-seq data from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), also available on GDC. In addition to being a QC tool, TULIP can potentially be used for predicting primary tumor types of samples with unspecified or unknown primary tumor diagnosis.
Gene expression data collection and preprocessing
We downloaded RNA-seq data expressed as FPKM-UQ, where FPKM-UQ is the upper quartile of the number of fragments per kilobase per million mapped reads for 9,025 and 10,940 samples corresponding to 17 and 32 primary tumor types respectively from GDC (February 2022). Supplemental Table S1 lists the number of samples per primary tumor type. We utilized the gdc-RNA-seq-tool 13 to download and merge individual RNA-seq data files. We then developed an in-house Python script (version 3.7.12) to convert the FPKM-UQ expression values to TPM (transcripts per million) and normalized the TPM values by applying log10 transformation. The scikit-learn package (version 1.0.2) 14 was used to split the data randomly into training (80%), validation (10%) and test (10%) datasets. We encoded the primary tumor types using the OneHotEncoder() function (Supplemental Figure S1).
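A minimal Python sketch of this preprocessing is given below. The FPKM-to-TPM conversion by per-sample renormalization is the standard formula; the file names, the +1 pseudocount before the log10 transform, and the exact split mechanics are assumptions rather than the authors' script.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

# Samples in rows, genes in columns (FPKM-UQ values); file names are placeholders.
expr = pd.read_csv("fpkm_uq_matrix.csv", index_col=0)
labels = pd.read_csv("labels.csv", index_col=0)["primary_tumor_type"]

# FPKM(-UQ) -> TPM: renormalize each sample so its values sum to 1e6.
tpm = expr.div(expr.sum(axis=1), axis=0) * 1e6
X = np.log10(tpm + 1.0).values        # log10 transform (the +1 pseudocount is an assumption)

# One-hot encode the primary tumor types.
enc = OneHotEncoder()
y = enc.fit_transform(labels.values.reshape(-1, 1)).toarray()

# Random 80/10/10 split: hold out 20%, then halve it into validation and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)
```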
Since the RNA-seq files from GDC contain all 60,483 genes, we created 2 additional datasets containing only 19,758 protein coding genes for both 17 and 32 primary tumor types. The links to the lists of genes, which are organized alphanumerically based on their Ensembl IDs, for both all the genes and protein coding genes only are provided in Supplemental File 1 along with queries used to obtain the data from GDC.
To test the performance of our models with unknown data, we obtained 277 RNA-seq samples from CPTAC (February 2022) that are associated with kidney cancer. The samples and metadata are listed in Supplemental File 2.
Dimensionality reduction (t-SNE) analysis
To visualize how the samples from different primary tumor types may cluster, we employed t-distribution stochastic neighbor embedding (t-SNE), a non-linear dimensionality reduction technique used for visualizing high dimensional datasets in a low dimensional space. 15 The t-SNE analysis, using default scikit-learn package parameters, was performed on the top 1,000 highly variable genes from the log10 transformed TPM datasets of both the 17 and 32 primary tumor types with all the genes.
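A sketch of this visualization is shown below, reusing X (the log10 TPM matrix) and labels from the preprocessing sketch above. Selecting the "highly variable" genes by raw variance is our assumption; the paper does not specify the variability metric.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

top = np.argsort(X.var(axis=0))[::-1][:1000]          # 1,000 highest-variance genes (assumed metric)
emb = TSNE(random_state=0).fit_transform(X[:, top])   # default scikit-learn parameters, 2D embedding

for tumor_type in np.unique(labels):
    sel = (labels.values == tumor_type)
    plt.scatter(emb[sel, 0], emb[sel, 1], s=4, label=tumor_type)
plt.legend(fontsize=5, markerscale=2)
plt.show()
```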
CNN model construction and implementation
We created 4 CNN models with the same underlying architecture as that of the TC1 resource 11 mentioned above (Figure 1). The main differences between each model are the number of genes in the input layer and the number of primary tumor types in the output layer. For the sake of simplicity, the models are referred to as CNN-17, CNN-17-PC, CNN-32 and CNN-32-PC, where the number indicates how many primary tumor types are predicted and "PC" indicates that only protein coding genes are used as input features. The CNN models were implemented using Keras (version 2.4.3). 16 All the models included two 1D convolutional layers, 2 maximum pooling layers of size 10, two fully connected (FC) layers with 200 and 20 nodes respectively, and the output layer. Each convolutional layer contained 128 filters of kernel size 20 and stride of 1. The rectified linear unit (ReLU) activation function was used for all hidden layers, while softmax was used for the output layer. We used categorical cross-entropy as the loss function. To address overfitting of the models, 10% dropout was applied to all the FC layers. The model was trained with a starting learning rate of 0.1, a stochastic gradient descent (SGD) optimizer, and a batch size of 20. We also used "ReduceLROnPlateau" to reduce the learning rate when the cross-entropy loss stopped improving for 10 epochs, while the model trained for a maximum of 400 epochs. NVIDIA V100 GPUs were used for training the CNN models. A sketch of this architecture is given below.
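The sketch below assembles the layers listed above in Keras. The exact layer ordering (conv–pool–conv–pool–flatten–dense) and the reshaping of each sample to (n_genes, 1) are our assumptions based on the description and the TC1 design; this is not the released training script.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(n_genes=19758, n_classes=17):
    model = keras.Sequential([
        keras.Input(shape=(n_genes, 1)),
        layers.Conv1D(128, kernel_size=20, strides=1, activation="relu"),
        layers.MaxPooling1D(pool_size=10),
        layers.Conv1D(128, kernel_size=20, strides=1, activation="relu"),
        layers.MaxPooling1D(pool_size=10),
        layers.Flatten(),
        layers.Dense(200, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(20, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_cnn()
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="val_loss", patience=10)
# model.fit(X_train[..., None], y_train, validation_data=(X_val[..., None], y_val),
#           batch_size=20, epochs=400, callbacks=[reduce_lr])
```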
Model performance evaluation metrics
To evaluate the 1D-CNN models during training, we tracked model accuracy and loss value at every epoch to optimize and identify the best performing model. To compare the performance of all 4 models, we computed accuracy with Keras' evaluate() function on the training, validation, and test datasets. To find the largest predicted probability, we applied NumPy's argmax() function to each sample in the test dataset to identify the predicted primary tumor type. We assessed the performance of the models on the test dataset with the weighted average of precision, recall and F1 score to account for class imbalance, using the number of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). The formulas for calculating precision, recall, and F1 score are below.
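In their standard form, these are

$\mathrm{Precision} = \dfrac{TP}{TP + FP}, \qquad \mathrm{Recall} = \dfrac{TP}{TP + FN}, \qquad F_1 = 2\cdot\dfrac{\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},$

computed per class and then averaged with weights proportional to class size to obtain the weighted averages reported above.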
Distribution of primary tumor types in GDC
The histogram (Figure 2) shows the sample distribution of 32 primary tumor types. The number of samples ranges from 45 (cholangiocarcinoma (CHOL)) to 1,220 (breast invasive carcinoma (BRCA)). Due to the class imbalance of the primary tumor types, we created 2 types of models. In one model, we considered all the primary tumor types regardless of the number of samples. For the second model, we selected primary tumor types with high representation in the dataset, with a cutoff of greater than 300 samples, resulting in 17 primary tumor types being represented.
Visualization of RNA-seq data using t-SNE
Next, we used t-SNE to visualize the RNA-seq data of 17 and 32 primary tumor types (Figures 3 and 4, respectively). In both figures, distinct clusters corresponding to many of the primary tumor types can be observed. This indicates that unique gene expression profiles can be used to differentiate primary tumor types; however, there is some overlap of samples associated with certain primary tumor types based on tissue type or cell type. For example, some of the lung squamous cell carcinoma (LUSC) samples can be found within the lung adenocarcinoma (LUAD) cluster (Figures 3 and 4). In Figure 4, there is complete overlap of rectum adenocarcinoma (READ) and colon adenocarcinoma (COAD) samples. Samples for bladder urothelial carcinoma (BLCA), head and neck squamous cell carcinoma (HNSC), cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC), and LUSC tend to group together based on cell type in both figures. It seems that samples originating from similar tissue types such as the lung for LUSC and LUAD or similar cell types such as carcinoma for BLCA, CESC, HNSC, and LUSC might play a significant factor in the similarity of these samples' gene expression profiles. Even though there are several primary tumor types that may be hard to differentiate with any classifier, the high number of clusters provide a strong level of confidence that most primary tumor types can be classified correctly. Additionally, primary tumor types with small sample sizes, such as adrenocortical cancer (ACC), form their own clusters, indicating that sample size may not be a limiting factor for some primary tumor types to be classified.
Performance evaluation of 1D-CNN models
The overall training accuracy of all 4 models ranged from 98.7% to 100%, and the validation accuracy ranged from 95.2% to 97.6% (Table 1). The validation accuracies for the 32 primary tumor type models were slightly lower than the 17 primary tumor type models. We then examined the performance of the 1D-CNN models on 2 test datasets. The first dataset contained 903 samples corresponding to 17 primary tumor types, and the second dataset contained 1,094 samples corresponding to 32 primary tumor types. The overall test accuracies of the models ranged from 94.7% to 97.6% ( Table 2). The weighted averages of precision, recall, and F1 score values were all above 90% for all 4 models. Like the validation accuracies, the test accuracies for the 32 primary tumor type models performed slightly lower than the 17 primary tumor type models. The difference in performance is most likely attributed to the additional primary tumor types with low sample sizes, as well as the greater number of primary tumor types with highly similar expression profiles with other primary tumor types described above. When comparing between models for 17 and 32 primary tumor types,
Accuracy by primary tumor type
Supplemental Table S2 shows the accuracy for each model by primary tumor type while Supplemental Tables S3 and S4 show the precision, recall, and F1 scores. To understand which primary tumor types have a tendency to be misclassified, we generated confusion matrices for each model as well. From Supplemental Table S2 as well as Figure 5 and Supplemental Figure S2, the test accuracy was above 90% for 16 of the 17 primary tumor types for both the CNN-17 and CNN-17-PC models. Only LUSC had an accuracy below 90% with 67% for CNN-17 and 73% for CNN-17-PC. In Figure 5 and Supplemental Figure S2, the most common misclassified Table S2, Supplemental Figure S3, and Figure 6). The primary tumor types with an accuracy of below 80% shared between both models include CHOL at 40% and READ at 0%. The primary tumor type that CHOL was misclassified as was liver hepatocellular carcinoma (LIHC) in the CNN-32-PC model while both LIHC and pancreatic adenocarcinoma (PAAD) were the main primary tumor types in the CNN-32 model. As expected from the t-SNE plot above, all of the test samples for READ in both models were misclassified as COAD. In the CNN-32 model, the test accuracy for kidney chromophobe (KICH) was much lower than the CNN-32-PC model (56% vs 89% respectively). Interestingly, the accuracy for LUSC was slightly better in the 32 primary tumor type models with an accuracy of 89% (CNN-32-PC) and 93% (CNN-32), perhaps due to random sample selection.
TULIP
For public utility of any of the 4 models, we created a Python-based tool called TULIP (TUmor CLassIfication Predictor). This tool takes as input an RNA-seq count matrix, ideally from GDC, expressed as FPKM-UQ and outputs a file containing the predicted primary tumor types with their probability scores (Figure 7). We provide options for the user to select which model to use and to set the minimum probability score threshold for a sample to be classified as a primary tumor type. Based on our results of higher accuracy with the protein coding models, we have set these as the default parameters. The code for TULIP can be found here: https://github.com/CBIIT/TULIP.
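The thresholding logic can be illustrated with a short sketch. This is an illustrative example of how a trained model plus a minimum-probability cutoff yields per-sample calls; it is not TULIP's actual command-line interface, and the function name and "unclassified" label are assumptions.

```python
import numpy as np

def predict_with_threshold(model, X, class_names, min_prob=0.5):
    probs = model.predict(X[..., None])              # (n_samples, n_classes) probabilities
    idx = probs.argmax(axis=1)                       # most probable tumor type per sample
    calls = []
    for i, j in enumerate(idx):
        label = class_names[j] if probs[i, j] >= min_prob else "unclassified"
        calls.append((label, float(probs[i, j])))
    return calls
```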
CPTAC kidney cancer prediction
To observe how our tool would perform on non-TCGA data, we downloaded CPTAC RNA-seq data of kidney cancer. We then applied all 4 models of TULIP and achieved the best results with the CNN-17-PC model. This model correctly identified kidney cancer for all 277 samples, with 274 samples classified as kidney renal clear cell carcinoma (KIRC) and 3 samples classified as kidney renal papillary cell carcinoma (KIRP) (Table 3). Both the CNN-32-PC and CNN-17 models performed similarly with nearly 100% accuracy. Interestingly, the CNN-32 model, which had 97.8% accuracy, had a different breakdown for KIRC and KIRP. This model classified 220 samples as KIRC and 51 samples as KIRP. Of the samples that were not classified as either KIRC or KIRP by any of the models, the predicted primary tumor types all have carcinoma in common along with KIRC and KIRP. This suggests that cell type may present a challenge for any model to distinguish primary tumor types of the same cell type. The predicted primary tumor types and the probability scores are provided in Supplemental File 2. We highlighted in orange the non-kidney primary tumor types. Overall, TULIP was able to classify the kidney cancer samples with high accuracy as well as provide the specific type of kidney cancer.
Discussion
The TC1 framework 11 was used to develop the 1D-CNN models with data from TCGA. Our models performed just as well as, or better than, similar methods in predicting the primary tumor type from RNA-seq count data. As other published studies have observed using unsupervised clustering methods such as hierarchical clustering or dimension reduction techniques like t-SNE, certain primary tumor types may be difficult to differentiate regardless of the number of samples due to similar tissue or cell types. 17 For example, the expression profiles of READ and COAD completely overlapped in our t-SNE visualization (Figure 4) even though these samples come from different anatomical locations. As expected, our models performed poorly in predicting the correct primary tumor type for READ samples; instead, these samples were classified as COAD. Due to their homogeneity, COAD and READ have long been considered as a single cancer type, colorectal cancer.
Efforts are ongoing to identify other molecular characteristics using omics data such as proteomic data to distinguish them. 18 With only RNA-seq data, we may have to adopt a similar approach to combine these primary tumor types as one.
By not pre-selecting genes to go into the model, contrary to previous studies, we ensure that we include genes that may be important for differentiating primary tumor types as more data is integrated into GDC. This is especially pertinent for primary tumor types with less representation in the database. Even though the high number of features may have added noise to the models, we still obtained an overall test accuracy of 94.7% to 97.6% for all 4 models. In addition, the models had at least 80% test accuracy for most of the primary tumor types in both the 17 and 32 primary tumor type models on our test dataset. However, it is important to note that the ability of the models to predict the primary tumor type is limited by the low number of samples for several primary tumor types in the test dataset. For some of the primary tumor types, we only had 5 samples. As more data becomes available in the future, updating these models with additional data will lend more confidence to the models' prediction accuracy.
To make the models more accessible to the cancer research community, we developed TULIP to take RNA-seq data as input and to generate the predicted primary tumor types with their probability scores. TULIP can be used as a QC tool for identifying any samples that may not have a gene expression profile that aligns with the primary tumor type attached to the sample. Any sample with an incorrect primary tumor type or unknown primary tumor type based on the probability score threshold set by the user can then be further explored to understand how this sample may be different from its assigned primary tumor type. For example, race and sex may lead to differences in RNA expression within individual primary tumor types. Additionally, TULIP can also provide more specific information of the primary tumor type for any sample, such as the CPTAC kidney data, with broad or unknown primary tumor types. Even though TULIP was able to predict the kidney cancer as the primary tumor type with 100% accuracy using the CNN-17-PC model, we do not have information that the specific kidney cancer types predicted, KIRC and KIRP, are correct. Having this information would have provided more support in the prediction accuracy of our models.
At present, the scope of TULIP is to provide quality control of the tumor tissue type of samples obtained from patients. We plan to update the models on a regular basis to improve their accuracy as more samples become available. We are also interested in incorporating normal versus tumor prediction into TULIP. Previously, our collaborators developed a normal versus tumor classifier using a 1D-CNN framework. 19 Adding this classifier as a preliminary step before using TULIP, or including normal tissue as another class to predict within the 4 models, may help to better distinguish genes unrelated to tissue type in classifying primary tumor types. As more data becomes available, we plan to enhance our classification framework to address categorization of various tumor subtypes. We are also interested in identifying gene signatures that are responsible for tumor classification to provide additional information and insights for users of TULIP. Lastly, we hope that common data sharing platforms and other data processing pipelines will adopt TULIP to assist with validating tumor tissue types as part of their genomic data submission workflows.
Conclusions
We have developed 4 1D-CNN models that can perform primary tumor type prediction on high dimensional RNA-seq count data. All 4 models had at least 94.7% prediction accuracy, with the best performing model reaching 97.6%. Unlike previous studies that filtered the genes based on expression levels, our models still achieved high prediction accuracy when we kept all the genes for our all-gene-based and protein-coding-based models. To make these models available to the cancer research community, we created TULIP. Our tool can be utilized for performing quality control on primary tumor samples as well as classifying cancer samples of unknown origin. The tool and the source code are publicly available at https://github.com/CBIIT/TULIP | 2022-12-08T16:03:43.715Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "8383a41598c48af5beb738f51168430fa36a4274",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9d36e3938ca292ef81b06fc82239b1e5ecbf0c17",
"s2fieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
114050256 | pes2o/s2orc | v3-fos-license | DESIGN AND ANALYSIS OF WORK HOLDING DEVICE FOR MILLING OPERATION ON FRONT AXLE BEAM (HCV)
Sanket B. Mahind. Over the past century, automotive mass production has increased the demand for forged components. The front axle is a major component of an automotive chassis. Work holding was the first issue of the machining operation to be confronted. In machining, a work holding device minimizes workpiece deformation due to clamping and cutting forces, which is essential to maintain machining accuracy. This paper gives a detailed description of the design of a work holding device so that the milling operation can be performed properly to obtain the required dimensions for a front axle beam used in automotive applications. It also includes the detailed results of the analysis done for the assembled device, carried out to know the behaviour of the work holding device under the operating load. The proposed work holding device is designed to increase the accuracy of the machining process so that the part can be machined easily in less time, which in turn saves machining time, reduces manufacturing cost and increases productivity.
Introduction:-
Increasing productivity and accuracy are the two basic aims of mass production. The device that caters to these needs is the work holding device. As we all know, a work holding device is a special tool for holding a workpiece in the proper position during a manufacturing operation. The device supports and clamps the workpiece. Frequent checking, positioning, individual marking and non-uniform quality in the manufacturing process are eliminated by the fixture. This increases machining accuracy and productivity and reduces operation time, but the main concern is the fastening of the fixture. The fixture should be chosen so that the fastening of the job to the table is done quickly and accurately, and it is mainly used in milling operations. Work holding devices are widely used in industrial production. They are used to locate and immobilize workpieces for machining, inspection, assembly and other operations, and to determine the position and orientation of a workpiece. Clamping has to be appropriately planned at the stage of machining fixture design. The design of a work holding device is a highly complex and intuitive process which requires knowledge, and it plays an important role at the setup planning phase. Proper work holding device design is crucial for improving product quality in terms of accuracy, surface finish and precision of the machined parts. In the existing design the fixture setup is done manually, so the aim of this project is to increase the machining accuracy of the front axle beam and to save time for loading and unloading of the component. These work holding devices also help in simplifying the machining operations which are performed on special equipment.
Design considerations:-
The points taken into consideration for designing the device are as follows: 1. The work holding device should be strong enough that its deflection is as small as possible; the deflection arises from the cutting forces and from clamping the workpiece to the machine table. The frame of the fixture should have sufficient mass to prevent vibrations during machining of the job. 2. Another important design consideration is that clamping should be fast. 3. Clamping should require little effort and should allow easy removal of the workpiece. 4. A swinging clamp system is provided so that, for removal of the workpiece, the clamp can swing clear for unclamping. 5. Clamps and support points that have to be adjusted over time should preferably be of the same size. 6. If the clamping surface area is too large, the workpiece will not seat properly; this is avoided by keeping the clamping surface as small and as properly sized as possible. 7. The device is designed in such a way that parts can be easily replaced on failure. 8. The design should be studied thoroughly before analysis and detailing, and the work should be done in the proper sequence and order; this ensures minimal error during part design in the NX software and during stress analysis in ANSYS. 9. It is preferred that the maximum number of operations are performed in a single setting of the holding device. 10. The movement of the workpiece is fully restricted, i.e. the workpiece has zero degrees of freedom after clamping. 11. The design must possess enough rigidity and robustness to prevent vibration, which may otherwise lead to undesired movement of the workpiece and tools. 12. Fabrication cost should be kept to a minimum and the design should be as simple as possible.
Component Details:- Design of spherical rolling joint:- The spherical rolling joint is selected from the standard design based on the total load acting on the joint, as shown below.
Result and Discussion:-
The model is designed in NX and analysed in the ANSYS software. This software is capable of giving the user the deformation and stress on the model after applying a force. The results include the maximum principal stress, the von Mises stress and the total deformation along the x, y and z axes. By considering the milling operation on the SPM, we found that, due to proper clamping of the front axle beam with the help of the work holding device, the accuracy of the KP top face milling operation of the front axle beam increases. Hence the following results have been achieved using the work holding device.
Conclusion:-
Observing the milling operation on the special purpose machine, we have concluded that the squareness of the KP top face with the bore of the front axle beam was out of tolerance because the locators were placed on a rough surface. The outcome of this work is therefore to place the work holding device on a prefinished surface, due to which the errors are minimized, the loading and unloading time decreases and productivity increases simultaneously. For this operation, a new work holding device was designed using a screw jack and a spherical rolling joint with a helical compression spring mechanism for best accuracy. | 2019-04-15T13:04:24.941Z | 2016-05-31T00:00:00.000 | {
"year": 2016,
"sha1": "132671d84da562ec8363291ab04d7794a921ca34",
"oa_license": "CCBY",
"oa_url": "http://www.journalijar.com/uploads/779_IJAR-10260.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "215fc68d3a1eff8eece0269cd4898a7064abdbd3",
"s2fieldsofstudy": [
"Materials Science",
"Business"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
3209311 | pes2o/s2orc | v3-fos-license | A dynamic pathway analysis approach reveals a limiting futile cycle in N-acetylglucosamine overproducing Bacillus subtilis
Recent advances in genome engineering have further widened the gap between our ability to implement essentially any genetic change and understanding the impact of these changes on cellular function. We lack efficient methods to diagnose limiting steps in engineered pathways. Here, we develop a generally applicable approach to reveal limiting steps within a synthetic pathway. It is based on monitoring metabolite dynamics and simplified kinetic modelling to differentiate between putative causes of limiting product synthesis during the start-up phase of the pathway with near-maximal rates. We examine the synthetic N-acetylglucosamine (GlcNAc) pathway in Bacillus subtilis and find none of the acetyl-, amine- or glucose-moiety precursors to limit synthesis. Our dynamic metabolomics approach predicts an energy-dissipating futile cycle between GlcNAc6P and GlcNAc as the primary problem in the pathway. Deletion of the responsible glucokinase more than doubles GlcNAc productivity by restoring healthy growth of the overproducing strain.
Metabolic engineering is a key technology for cellular property improvement in the green and sustainable production of biofuels, fine chemicals and pharmaceuticals 1,2 . The past 5 years have revolutionized our ability to engineer genomes, enabling high-precision strain construction at large scale, for instance through multiplex automated genome engineering 3 , Cas9-mediated genome editing 4 or modular pathway engineering 5 . Although these methods effectively removed all limitations for manipulating existing pathways and constructing synthetic ones to desired products, they further widened the already large gap between our ability to identify the engineering targets in the first place and precise diagnosis of arising bottlenecks and other complications. In other words: what limits production, and how can we abolish the limitation?
The current repertoire to answer these questions ranges from computational methods that are often based on genome-scale models [6][7][8] to omics methods that monitor cellular responses at the level of molecular component concentrations or metabolic fluxes [9][10][11][12][13][14] . A particular problem is diagnosis of rate-limiting steps in supply or within synthetic pathways from omics-type data. As most diagnosis tools are rather time-consuming, typical applications focus on steady-state analyses, such as transcriptome-based identification of an intermediate for pool size increase to improve riboflavin production 10 , proteome-based identification of two rate-limiting steps in isopentenol synthesis 11 or metabolomics-based identification of an optimal intermediate feeding approach to improve tacrolimus production 12 . In particular, dynamic metabolomics has great diagnostic potential as was elegantly demonstrated by bottleneck identification within in vitro reconstituted glycolysis 15 and a new method for near realtime metabolomics 16 . Surprisingly, applications of such dynamic investigations to biotechnological production systems are rare, presumably because design of meaningful experiments and data interpretation are not trivial. Consequently, diagnosis has so far largely been restricted to ad hoc identification of limitations in precursor supply that can easily be validated by precursors feeding without fundamentally optimizing structure and kinetic parameters of synthetic networks. Examples include identification of bottlenecks in biosynthesis pathways for lipids, recombinant antibodies and heterologous proteins [17][18][19] , and substrate degradation in xylose-fermenting Saccharomyces cerevisiae 20 .
Here we show that metabolite dynamics during the 'start-up phase' of a synthetic pathway with near maximal initial rates provide valuable information about limitations. We identify potential pathway bottlenecks from the comparison of metabolomics data with simulations of a simple kinetic model that can be generically applied to linear pathways. Specifically, we examined Bacillus subtilis metabolism during production of the amino sugar N-acetylglucosamine (GlcNAc) used for pharmaceutical and nutraceutical treatment of osteoarthritis and maintaining joint health 21 . The previously engineered strain with optimized GlcNAc synthesis and blocked GlcNAc catabolism 21,22 (Fig. 1a) is productive in complex media but has poor performance and growth in industrial media with single carbon sources. The combination of dynamic metabolomics and coarse-grained modelling identified a major bottleneck and a potential metabolic engineering target, namely, the reactions between GlcNAc6P and GlcNAc, for improved cell growth and GlcNAc yield on glucose. Dynamic isotope tracing then diagnosed an energy-dissipating phosphorylation/dephosphorylation cycle as the molecular cause. Disrupting this futile cycle through deleting the encoding gene of the here newly identified responsible enzyme, glucokinase GlcK, restored intracellular GlcNAc6P concentrations and more than doubled GlcNAc productivity and GlcNAc yield on glucose.
Results
GlcNAc overproducing B. subtilis on minimal glucose medium. An engineered GlcNAc production strain, B. subtilis BSGN6-PxylA-glmS-P43-GNA1, abbreviated BSGN hereafter, was recently obtained based on modular pathway optimization to fine-tune a synthetic GlcNAc pathway in our previous research 21 . In brief, BSGN was constructed by overexpression of glucosamine (GlcN)-6-phosphate synthase (GlmS) under the control of the inducible promoter PxylA and GlcN-6-phosphate N-acetyltransferase (Gna1) under the control of the constitutive promoter P43, in the background of absent GlcNAc catabolism via knockout of nagP, gamP, nagA, nagB and gamA. In glucose minimal medium, the growth rate of BSGN was only a fifth of the parent strain (Fig. 1 and Supplementary Table 1). Because GlcNAc production is coupled to cell growth, GlcNAc productivity dropped dramatically, by 82.7%, to 32.6 mg g−1 dry cell weight per hour in glucose minimal medium, with a yield of 65.0 mg GlcNAc per gram glucose. Here we address the problem of cell growth and product formation in the industrially relevant condition of glucose minimal medium. Given that GlcNAc overproduction requires three central metabolic precursors (fructose-6-P, acetyl-CoA (AcCoA) and the nitrogen donor glutamine), competition between growth-related metabolism and the synthetic pathway is expected. Hence, we first examined the effect of precursor supply on native metabolism and the synthetic pathway.
To probe whether precursor supply constrained either GlcNAc formation or native metabolism, we genetically modulated fluxes through the supply pathways. First, we diverted fructose-6-P availability from native metabolism to GlcNAc by replacing the native phosphofructokinase (Pfk) with the reduced-activity Arg252Ala-mutated Pfk 23 . Surprisingly, the mutation had no effect: growth and specific production rate of BSGN-Pfk* were similar to those of BSGN (Fig. 1b). Therefore, fructose-6-P supply limits neither GlcNAc production nor cell growth. Likewise, increased supply of glutamine for GlcNAc synthesis and central nitrogen metabolism in strain BSGN-GS by overexpression of glutamine synthase (GS) did not improve growth or GlcNAc production rate (Fig. 1b and Supplementary Table 2), but increased the specific glucose uptake rate about 1.5-fold relative to BSGN and BSGN-Pfk*. Also, addition of glutamine to the cultivation medium had no effect on the specific growth rate and the GlcNAc-specific production rate (Supplementary Fig. 1). Thus, we conclude that precursor supply would be sufficient for a much higher GlcNAc production and cell growth than observed in BSGN.
Metabolomics reveals abundant precursor and bottleneck.
To identify the primary limitation in GlcNAc biosynthesis, we determined intracellular metabolite concentrations in wild-type B. subtilis, the GlcNAc production strain BSGN and the two producing strains BSGN-Pfk* and BSGN-GS with modulated precursor supply during mid-exponential growth (at an optical density at 600 nm of 0.5) in minimal glucose medium. Almost all metabolite concentrations were lower in BSGN compared with wild-type, and, in particular, the GlcNAc pathway precursors fructose-6-P, citrate (directly connected to AcCoA) and the nitrogen donor glutamine were markedly decreased (Fig. 2, Supplementary Fig. 2 and Supplementary Tables 3 and 4). How did modifications of precursor supply change this profile? Although the Pfk mutation in BSGN-Pfk* had no further effect on metabolite concentrations, GS overexpression in BSGN-GS restored nearly wild-type levels for most metabolites, suggesting that GlcNAc production impaired cellular nitrogen metabolism, presumably through low levels of glutamine. Nevertheless, BSGN-GS grew as slowly as the ancestral BSGN despite restored metabolite levels (Fig. 1b). Thus, the decreased metabolite concentrations were neither consequence nor major cause of slow growth, strongly suggesting that other mechanisms impair growth and GlcNAc synthesis.
The essentially only and very striking difference between all three production strains and wild type was a more than 300-fold increased GlcNAc-6-P concentration, from 0.06 mM in the wild type to 20-34 mM in the GlcNAc-producing strains (Fig. 2). These abnormal concentrations suggest a metabolic bottleneck in the vicinity of GlcNAc6P. As the concentration of phosphosugars is normally tightly controlled, such extraordinarily high GlcNAc6P concentrations may cause phosphosugar stress, affecting for instance nucleic acid metabolism and triggering degradation of ptsG messenger RNA regulated by the small RNA SgrS at the posttranscriptional level, which, in turn, inhibits glucose uptake in E. coli [24][25][26] . Based on steady-state data alone, however, one cannot differentiate between different molecular causes such as, for example, low activity of the dephosphorylating enzyme or transport limitations that jam the pathway.
(Figure legend: AcCoA, acetyl-coenzyme A; DHAP, dihydroxyacetone phosphate; F6P, fructose-6-phosphate; FBP, fructose-1,6-bisphosphate; G6P, glucose-6-phosphate; GlcN6P, glucosamine-6-phosphate; GlcNAc6P, N-acetylglucosamine-6-phosphate; GlmS, glucosamine synthase; Gln, glutamine; Glu, glutamate; GNA1, GlcN6P N-acetyltransferase; PEP, phosphoenolpyruvic acid; PGI, glucose-6-phosphate isomerase; PTS, phosphotransferase system; PYR, pyruvate; OD600, optical density at 600 nm. Triplicate experiments were done for physiological measurements, and error bars represent standard deviation.)
Different bottlenecks lead to distinct metabolite dynamics.
Given that metabolite levels respond very fast, we reasoned that they would be a sensitive indicator of arising bottlenecks during a transition from low to high activity of the synthetic pathway, and we sought to test putative dynamics using a metabolic model 16,27 (Fig. 3a). For this purpose, we developed a linear pathway model, which consists of three intermediate metabolites, the intracellular product and the extracellular product. To simulate a transition from low to high pathway activity, initial concentrations of the first two intermediates are randomly sampled from a sub-saturating concentration range (between 0 and 10% of the average Km value). Initial concentrations of the other three metabolites are randomly sampled from a near-saturating concentration range (between 50 and 150% of the average Km value). All reactions are described by Michaelis-Menten kinetics and Km values are randomly sampled between 0 and 2, resulting in an average Km of one. Concentrations of metabolites and Km parameters are in arbitrary units, but their ranges account for low- and high-saturating conditions of enzymes. In the base model, all maximal reaction rates were fixed to 1, except the influx, which was 0.5 in order to ensure that no reaction in the pathway is limiting. Simulations of the base model (i) show a continuous increase of product and asymptotic equilibration of intermediates at the average Km of one (Fig. 3b). Next, we simulated possible scenarios of frequently encountered pathway limitations, including end-product inhibition, as well as futile cycles and limiting enzyme capacities at several nodes of the pathway (Fig. 3b). In case of feedback inhibition (ii), concentrations of intermediates equilibrate at the average inhibition constant and overproduction is reduced. (iii) A typical bottleneck reaction with low turnover has a continuously accumulating intermediate upstream of the bottleneck, whereas the downstream intermediate decreases strongly below the average Km value. In case of the futile cycles shown in scenarios (iv) and (v), the dynamic response is similar to a limiting reaction in scenario (iii). However, intermediates up- and downstream of the futile cycle equilibrate faster than in case of a limiting reaction. Quantitatively, the response of intermediates depends on the degree of the futile cycle or the bottleneck, with much stronger responses in the latter case (Supplementary Fig. 3).
(Fig. 3b legend, continued: (v) a futile cycle at the end of the pathway or (vi) insufficient intracellular product export in the pathway. One hundred simulation results using 100 random parameter sets for initial concentrations and Km values; lines indicate the mean and shaded areas the standard deviation. Colours are as shown above in a.)
The model is described in detail in the Methods section.
Finally, if product export does not match the synthesis rate the intracellular product accumulates (vi). Thus, although metabolite dynamics are difficult to predict quantitatively, their qualitative features bear sufficient information to distinguish between different putative bottlenecks.
To experimentally realize a transition from low to high activity of the GlcNAc pathway, exponentially growing cells were harvested and resuspended in medium without carbon source. Low precursor availability was ensured by 30 min incubation without carbon source, which effectively stopped GlcNAc production and led to constant intra- and extracellular metabolite concentrations without significant changes in pathway enzyme expression (Fig. 4a and Supplementary Figs 4 and 5). Addition of glucose induced GlcNAc synthesis at a rate of 34.8 mmol g−1 dry cell weight per hour that remained constant for 1 h (Fig. 4a). Monitoring metabolite dynamics during the first 2 min after glucose addition revealed the known response of glycolytic intermediates to glucose; that is, rapid increase of phosphorylated sugars and decrease of the phosphotransferase system substrate phosphoenolpyruvic acid (Fig. 4b). Concentrations of the GlcNAc precursors fructose-6-P and glutamine increased rapidly above the Km values of their cognate enzymes 25,28 (Fig. 4b and Supplementary Table 5), confirming our hypothesis of absent limitations in the precursor supply into the pathway. Within the simulated responses of possible scenarios, the scenario with a futile cycle or a limiting reaction at the end of the pathway would be consistent with the experimentally detected dynamics (Figs 3b and 4b). As GlcNAc6P concentrations equilibrated already after 2 min and intracellular GlcNAc remained high (5 mM), we hypothesized that a futile cycle is more likely than a limiting dephosphorylation (Fig. 3 and Supplementary Fig. 3). Next, we sought to investigate if indeed the product GlcNAc is re-phosphorylated by an unknown enzyme, resulting in an ATP-dissipating futile cycle.
Validation and preventing the ATP-dissipating futile cycle. To experimentally validate the above-hypothesized futile cycle, we used isotopic tracer experiments. As intracellular GlcNAc decreased upon pathway start-up (Fig. 4a), futile re-phosphorylation to GlcNAc6P should occur to a large extent from unlabelled GlcNAc molecules during the initial phase. To verify that the initial source of accumulating GlcNAc6P was indeed GlcNAc rather than glucose via the normal synthesis route, we started the experiment with uniformly labelled [U-13C]glucose (Fig. 5a). Within the first 60 s of [U-13C]glucose addition, the relative concentration of unlabelled [U-12C]GlcNAc6P (M+0) increased 36% (Fig. 5b), whereas no fully labelled [U-13C]GlcNAc6P was yet formed (Fig. 5c), demonstrating that re-phosphorylation dominates during this period. Thus, an energy-dissipating phosphorylation/dephosphorylation futile cycle operates between GlcNAc6P and GlcNAc, effectively blocking GlcNAc synthesis. The resulting accumulation of GlcNAc6P may further trigger phosphosugar stress and impair cell growth and GlcNAc production. Alleviating GlcNAc6P accumulation and improving the GlcNAc production would require disruption of the futile cycle between GlcNAc and GlcNAc6P. However, a GlcNAc kinase has so far not been annotated in B. subtilis. To identify candidate genes encoding such a GlcNAc kinase, we performed homology analysis using the amino-acid sequence of E. coli GlcNAc kinase NagK 29,30 . Among the 106 kinases in B. subtilis, the highest homology was found for the annotated glucokinase GlcK (26% sequence identity), and we deleted glcK in the BSGN strain (yielding strain BSGNK). Repeating the dynamic labelling experiment with BSGNK abolished formation of GlcNAc6P M+0, and labelled GlcNAc6P M+8 increased strongly because of de novo synthesis from glucose (Fig. 5c). Thus, we conclude that deletion of glcK resulted in drastic reduction of the futile cycle.
Beyond the short-term dynamics, we next investigated GlcNAc synthesis in BSGNK under more production-relevant conditions in steady-state cultures grown in minimal medium with glucose. Indeed, breaking the futile cycle through the glcK deletion circumvented the detrimental increase of intracellular GlcNAc6P (0.06 mM in BSGNK versus 33.71 mM in BSGN, Supplementary Fig. 6). The decrease of intracellular GlcNAc6P relieved potential phosphosugar stress for the cell and restored a healthier growth physiology, with a more than doubled specific cell growth rate and more than doubled GlcNAc productivity (9.20 mg l−1 h−1; Fig. 5d-f). Therefore, the glcK deletion experiments confirmed the deleterious role of the futile cycle between GlcNAc6P and GlcNAc for cell growth and GlcNAc production in engineered B. subtilis. Moreover, the energy charge increased from 0.68±0.03 in BSGN to 0.81±0.04 in BSGNK (± values represent the standard deviation from triplicate experiments), further confirming that eliminating the ATP-dissipating futile cycle improved energy metabolism (Supplementary Fig. 7). Furthermore, the GlcNAc yield on glucose (147.5 mg g−1 glucose) and the dry cell weight yield on glucose (138.3 mg g−1 glucose) in BSGNK were 2.3-fold higher than in BSGN. As the specific GlcNAc production rate was similar in BSGN (32.6 mg g−1 DCW h−1) and BSGNK (33.2 mg g−1 DCW h−1), the improved performance of BSGNK resulted from restored healthy growth with a high growth rate, which is apparently coupled to GlcNAc production (Supplementary Table 1).
Discussion
Identification of bottlenecks within engineered pathway still hampers rational metabolic engineering. To differentiate between limitations within a synthetic pathway or in precursors supply to it, we first used comparative steady-state metabolomics profiling of genetically perturbed supply pathway strains. To hypothesize bottlenecks within a synthetic pathway, we outlined an approach based on dynamic metabolomics experiments and simplified kinetic modelling to differentiate between different putative causes in limiting GlcNAc synthesis during the start-up phase of the pathway with near maximal rates. The potential of this generically applicable approach was demonstrated by identifying an unexpected futile phosphorylation-dephosphorylation cycle in the GlcNAc production with B. subtilis, genetic intervention of which greatly improved GlcNAc productivity and yield on glucose. Relieving the energy drain induced by this synthetic pathway restored the cellular energy homeostasis and healthy growth and thus in consequence volumetric GlcNAc productivity.
Methods
Strains, plasmids and cultivation. Strains and plasmids used in this study are listed in Supplementary Table 6. The previously constructed GlcNAc production strain BSGN 21 is characterized by (i) a block of GlcNAc catabolism through marker-free deletion of all relevant encoding genes and (ii) overexpression of the GlcNAc synthesis enzymes GlmS and GNA1. BSGN-Pfk* was constructed by introducing the site-directed Arg252Ala mutation into the native pfk via a markerless genome editing system 31 for B. subtilis with the primers listed in Supplementary Table 7. In brief, front and back homology fragments with the mutation (Arg252Ala, CGC to GCC) in direct repeated sequences were amplified with primers AL-F/AL-R and AR-F/AR-R, respectively. The mazF cassette was amplified with primers AZ-F and AZ-R from the B. subtilis 168 genome. Next, fusion PCR was performed to fuse the front homology fragment, the mazF cassette and the back homology fragment. The resulting DNA fragment was transformed into BSGN0, and transformants were selected on a Luria-Bertani (LB) plate with spectinomycin. Transformants with mutation-site integration were then plated on LB plates with 2% xylose to screen strains with the desired mutation and mazF cassette eviction via single-crossover between the direct repeated sequences. To construct BSGN-GS, we constructed the DNA multimer plasmid pP43-GNA1-GS via DNA multimer-based vector construction 32 ; the multimer was transformed into BSGN0, where cleavage of the multimer plasmid in the host strain forms a circular plasmid, yielding BSGN-GS. BSGNK was constructed by knockout of the glucose kinase-encoding gene glcK. In brief, primers GlcK-F/GlcK-R were used to amplify the glcK disruption cassette from B. subtilis 168 ΔglcK (Professor Jörg Stülke, Georg-August-Universität Göttingen). The amplified glcK disruption cassette was transformed into BSGN, yielding BSGNK. Cultivation for genetic experiments was done in LB broth or on agar plates (10 g l−1 tryptone, 5 g l−1 yeast extract and 10 g l−1 NaCl) at 37 °C. Cultivation for physiological experiments was done in M9 mineral salts medium with 2 g l−1 glucose (1 g l−1 NH4Cl, 0.5 g l−1 NaCl, 8.5 g l−1 Na2HPO4·H2O, KH2PO4, 1 ml of 1 M MgSO4 per litre, 1 ml of 0.1 M CaCl2 per litre, 1 ml of 0.05 M FeCl3 and 10 ml trace element solution containing 60 mg l−1 CoCl2·6H2O, 43 mg l−1 CuCl2·2H2O, 100 mg l−1 MnCl2·4H2O, 60 mg l−1 Na2MoO4·2H2O and 170 mg l−1 ZnCl2). Appropriate antibiotics were added to the medium at the following final concentrations: kanamycin (20 mg ml−1) and spectinomycin (100 mg ml−1). To evaluate the effects of Gln feeding on cell growth and GlcNAc production, Gln was added to a final concentration of 0.5 mmol l−1 in M9 mineral salts medium.
Steady-state targeted metabolomics. B. subtilis strains were cultured in M9 medium in a shake flask at 37°C and 300 r.p.m. Cells were harvested in the mid-exponential phase when the optical density at 600 nm reached 0.5. Fast filtration was used to collect cells on a filter, and the cells were further quenched and extracted in acetonitrile/methanol/H2O (40:40:20) solution with addition of a 13C internal standard. Next, the extract solution was dried and resuspended in H2O for ultrahigh-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) detection using the following conditions 33: a Waters Acquity UHPLC (Waters Corporation) equipped with a Waters Acquity T3 C18 column (150 × 2.1 × 1.8 mm³, Waters Corporation) was coupled to a Thermo TSQ Quantum Ultra triple quadrupole instrument (Thermo Fisher Scientific) with a heated electrospray ionization source (Thermo Fisher Scientific). The temperature of the UHPLC column was controlled at 40°C. Mobile phases A (10 mM tributylamine, 15 mM acetic acid, 5% (v/v) methanol) and B (2-propanol) were used for gradient elution for metabolite separation. The MS was operated in negative mode with the following parameters: ion spray voltage 2,500 V, ion sweep gas pressure 5 arbitrary units, auxiliary gas pressure 50 arbitrary units, curtain gas pressure 80 arbitrary units. Tube lens voltage, collision energy and fragment ions were optimized individually for all compounds 33. Metabolite fold changes between recombinant strains and wild-type B. subtilis 168 were calculated based on metabolite concentrations.
Metabolite dynamics analysis and dynamic labelling. Cells were cultivated in LB medium and harvested via centrifugation (4,500g, 5 min) when the optical density at 600 nm reached 1.0. Next, the cells were resuspended in M9 medium without glucose and incubated at 37°C with magnetic stirring at 400 r.p.m. At t = 0, glucose was added to a final concentration of 2 g l−1.
Kinetic modelling. The model structure is shown in Fig. 3a in the main text. It consists of four intracellular metabolites x(1)-x(4) and the extracellular product x(5). The reaction kinetics of the five reactions are as follows: the influx of precursor is constant, v(1) = v(1)max, and Michaelis-Menten kinetics are used to describe the kinetics of reactions v(2), v(3), v(4) and v(5). Mass balances for all metabolites result in the corresponding differential equations (a simulation sketch of this base model is given after the model descriptions below). Parameters: the maximum reaction rates v(2)max, v(3)max, v(4)max and v(5)max are 1 and v(1)max is 0.5. The binding constants Km2, Km3, Km4 and Km5 were randomly sampled between 0 and 2. The initial concentrations of x(1) and x(2) were sampled between 0 and 0.1, and the initial concentrations of x(3), x(4) and x(5) were sampled between 0.5 and 1.5. The models for the different scenarios (ii)-(vi) are: Model 2: to describe the scenario with feedback inhibition (Fig. 3b), the intracellular product x(4) inhibits the first reaction v(1) according to Hill kinetics; the Hill coefficient n was sampled between 0 and 4, and Ki between 0 and 1. Model 3: to describe the scenario with a limiting enzyme abundance, v(4)max was 0.1, and in Supplementary Fig. 3 as indicated. Model 4 and model 5: to describe the scenario with a futile cycle, an additional reaction v(6) was added, again described by Michaelis-Menten kinetics; the binding constant Km6 was randomly sampled between 0 and 2.
Model 6: to describe the scenario with insufficient export of the intracellular product, v(5)max was 0.1.
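For readers who want to reproduce the simulations, the following Python sketch implements the base model (model 1) described above. It assumes, as an interpretation of the text rather than a stated fact, that the five reactions form a linear chain from the precursor x(1) to the exported product x(5); parameter values and sampling ranges follow the description above. The original analysis was done in Matlab (see Data availability), so this is only an illustrative re-implementation.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

v_max = np.array([0.5, 1.0, 1.0, 1.0, 1.0])   # v(1)max ... v(5)max as given above
K_m = rng.uniform(0.0, 2.0, size=4)           # Km2 ... Km5, sampled between 0 and 2

def rates(x):
    # v(1): constant precursor influx; v(2)..v(5): Michaelis-Menten in the upstream metabolite
    v = np.empty(5)
    v[0] = v_max[0]
    for i in range(1, 5):
        v[i] = v_max[i] * x[i - 1] / (K_m[i - 1] + x[i - 1])
    return v

def dxdt(t, x):
    v = rates(x)
    return [v[0] - v[1],   # x(1)
            v[1] - v[2],   # x(2)
            v[2] - v[3],   # x(3)
            v[3] - v[4],   # x(4), intracellular product
            v[4]]          # x(5), extracellular product

x0 = np.concatenate([rng.uniform(0.0, 0.1, 2),    # initial x(1), x(2)
                     rng.uniform(0.5, 1.5, 3)])   # initial x(3), x(4), x(5)
sol = solve_ivp(dxdt, (0.0, 50.0), x0)
print(sol.y[:, -1])                               # concentrations at the final time point

The scenario variants (feedback inhibition, limiting enzyme, futile cycle, limited export) can be obtained from this base sketch by modifying v(1), v(4)max, adding a v(6) term, or lowering v(5)max, as described above.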
Transcriptional level comparison of GlcNAc pathway enzymes. The following cells were harvested, frozen in liquid nitrogen and stored at −80°C for RNA extraction and quantitative real-time PCR analysis: (i) BSGN cells growing exponentially in M9 minimal medium; (ii) BSGN cells growing exponentially in LB medium; (iii) exponentially growing BSGN cells from LB medium resuspended in M9 minimal medium without glucose for 15 min of substrate depletion; and (iv) exponentially growing BSGN cells from LB medium resuspended in M9 minimal medium without glucose for 30 min of substrate depletion. Total RNA of the above cells was extracted with the Qiagen RNeasy Mini Kit (QIAGEN). cDNA was obtained from the RNA using the PrimeScript RT reagent Kit (Takara). Primers used for qRT-PCR are listed in Supplementary Table 6. A LightCycler 480 II real-time PCR instrument (Roche Applied Science) was used to quantify cDNA with SYBR Premix Ex Taq (Takara). Gene expression levels of exponentially growing cells without incubation were defined as 100%. Relative gene expression changes were calculated from data normalized to 16S rRNA. Experiments for the relative gene expression assay were done in triplicate.
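The relative expression calculation described above can be made concrete with a short sketch. The text specifies normalization to 16S rRNA and a reference condition set to 100% but does not name the quantification model; the widely used 2^-ΔΔCt approach is assumed here, so the function below is illustrative rather than the authors' exact procedure.

# Illustrative only: normalization to 16S rRNA with the 2^-ddCt method (assumed, not stated in the text).
def relative_expression(ct_gene, ct_16s, ct_gene_ref, ct_16s_ref):
    """Return expression as a percentage of the reference condition (set to 100%)."""
    d_ct = ct_gene - ct_16s            # normalize target gene to 16S rRNA
    d_ct_ref = ct_gene_ref - ct_16s_ref
    return 100.0 * 2.0 ** (-(d_ct - d_ct_ref))

# Example: a gene whose normalized Ct rises by one cycle is at ~50% of the reference level.
print(relative_expression(ct_gene=22.0, ct_16s=10.0, ct_gene_ref=21.0, ct_16s_ref=10.0))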
Data availability. The Matlab code used in this study and data that support the findings of this study are available from the corresponding author upon request. | 2018-04-03T03:24:14.697Z | 2016-06-21T00:00:00.000 | {
"year": 2016,
"sha1": "38ac5f61a7918608af83f5be6cee2fa2ca1f94c1",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ncomms11933.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38ac5f61a7918608af83f5be6cee2fa2ca1f94c1",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
4638926 | pes2o/s2orc | v3-fos-license | The Fatty Acid Profile Analysis of Cyperus laxus Used for Phytoremediation of Soils from Aged Oil Spill-Impacted Sites Revealed That This Is a C18:3 Plant Species
The effect of recalcitrant hydrocarbons on the fatty acid profile of leaf, basal corm, and roots of Cyperus laxus plants cultivated in greenhouse phytoremediation systems of soils from aged oil spill-impacted sites containing from 16 to 340 g/Kg total hydrocarbons (THC) was assessed to investigate whether this is a C18:3 species and whether the hydrocarbon removal during the phytoremediation process has a relationship with the fatty acid profile of this plant. The fatty acid profile was specific to each vegetative organ and was strongly affected by the hydrocarbon level in the impacted sites. Leaf extracts of plants from uncontaminated soil produced palmitic acid (C16:0), octadecanoic acid (C18:0), unsaturated C18 acids (C18:1-C18:3), and unsaturated eicosanoic acids (C20:2-C20:3), with a noticeable absence of the unsaturated hexadecatrienoic acid (C16:3); this finding demonstrates, for the first time, that C. laxus is a C18:3 plant. In plants from the phytoremediation systems, the total fatty acid contents in the leaf and the corm were negatively affected by the presence of hydrocarbons; however, the effect was positive in the root. Interestingly, under contaminated conditions, unusual fatty acids such as odd-numbered carbon chains (C15, C17, C21, and C23) and uncommon unsaturated chains (C20:3n6 and C20:4) were produced, together with a remarkable quantity of C22:2 and C24:0 chains in the corm and the leaf. These results demonstrate that weathered hydrocarbons may drastically affect the lipidic composition of C. laxus at the fatty acid level, suggesting that this species adjusts the cover lipid composition of its vegetative organs, mainly the roots, in response to the presence and uptake of weathered hydrocarbons during the phytoremediation process.
Introduction
Inundation of land with hydrocarbons from oil spills causes long-term contamination of the soil and severe effects on the biodiversity of the ecosystems in the impacted areas [1][2][3]. The plant community in an oil spill impacted site usually disappears after several months, leaving a large amount of weathered hydrocarbons, such as polycyclic aromatic hydrocarbons (PAH) and asphaltenes [4][5][6][7][8]. The harmful effect of oil spills on the plant community is slowly attenuated by the aging phenomenon, which causes a gradual sequestration of hydrocarbons by soil particles, lowering their accessibility, bioavailability, and biodegradability [9]. After some time, especially in bare areas arising within aged-affected sites, revegetation by the emergence of putative oil-tolerant plant species occurs [5], and it has been hypothesized that revegetation in these oil contaminated sites is a result of phenophases, ecological, and biochemical adjustments of these pioneer plant species to the hydrocarbons' presence [3,8].
Cyperus laxus L. is a member of the Cyperaceae family and is considered a cosmopolitan weed of tropical and subtropical regions [10,11]. It was also recently reported that this and other Cyperaceae species have been identified as pioneer plants in tropical, aged, long-term oil spill-impacted sites [12,13]. The ability of this plant species to grow under such stressed conditions may be due to the fact that many Cyperaceae species have biochemical traits for using the C4 photosynthetic pathway [14] and the C18:3 fatty acid eukaryotic biosynthetic pathway [15], and they also produce underground storage organs such as corms [16]. These characteristics should give this plant species greater photosynthetic, biological, and reproductive advantages over other plants for surviving in disturbed areas [10]; perhaps for that reason, they are commonly found in both naturally disturbed areas [17][18][19] and anthropogenically disturbed sites [20][21][22]. Indeed, in similar oil spill-impacted areas previously explored for this work [13], the natural plant community vanished 6-12 months after the oil spill event. Subsequently, in aged or long-term contaminated areas (more than three years), the presence of light oil hydrocarbons was almost negligible, and high amounts of weathered hydrocarbons such as PAH had accumulated. In nearby sites, Gallegos et al. [12] reported that the average composition of the total hydrocarbon mixture in such areas was: asphaltenes (32.4%), aliphatics (39.8%), PAH (18.9%), and polar hydrocarbons (9.1%). Later, we reported that the pioneer plant species found in those aged contaminated areas were C. laxus, C. esculentus, Ludwigia peploides, Echinochloa polystachya, and Carex crus-corvis; however, in bare areas emerging from such sites, C. laxus, C. esculentus, and Carex crus-corvis were the pioneer and dominant species [13]. Interestingly, it has been reported that these Cyperaceae species infest large areas of agricultural land by producing underground storage organs, such as basal bulbs, corms, and tubers, which enable the regeneration of plants after adverse conditions by acting as perennating organs [23,24]. We have also observed that the Cyperaceae species used for phytotreatment studies of soils from the same areas disturbed by oil spills do produce underground organs [25], and it has also been demonstrated that C. laxus significantly reduces the hydrocarbon levels of soils containing up to 325,000 mg THC kg−1 soil [13,25,26]. Regarding plants cultivated in contaminated soils, during the phytoremediation process the hydrocarbon removal was associated with changes in the fatty acid composition and the production of unknown conjugates of PAH with some plant metabolites; this suggested that important biochemical adjustments at the fatty acid level had been accomplished by these plants to adapt to the hydrocarbons' presence. The reason these Cyperaceae species grow in areas disturbed by oil spills, and whether there is a relationship between their growth areas and the above-cited biochemical characteristics or their production of underground storage organs, are unknown. It was hypothesized that the combination of the C18:3 fatty acid biosynthetic pathway with the production of corms might provide
C. laxus with a superior ability to tolerate the weathered hydrocarbons' toxicity by adjusting its antioxidant capability, thereby producing a higher amount of unsaturated fatty acids, in a similar way as reported for the fatty acid content of chicory root cultures grown in the presence of benzo(a)pyrene [27]. However, because no information was found about the effect of hydrocarbons on the fatty acid profile of C. laxus, or on the fatty acid profile of other plant species, it is uncertain whether C. laxus uses the potential of the eukaryotic fatty acid pathway to adjust its antioxidant capability against the hydrocarbons in the aged oil spill-impacted sites. Thus, the main objective of this work was to study the changes in the fatty acid profile of C. laxus plants from the phytoremediation systems to investigate whether this is a C18:3 species and whether the hydrocarbon removal during the phytoremediation process has a relationship with the fatty acid profile of this plant.
Phytoremediation systems and plant sampling
Seeds of Cyperus laxus were harvested from a three-year greenhouse phytoremediation system established with native plants from long-term oil spill-impacted sites located in the tropical region of Tabasco, México (Fig 1), as previously reported [13,25]. In addition, seeds harvested from plants growing in soil collected from a nearby unimpacted site (SL) were used for the uncontaminated control system. The seeds were sown in pots (60x20x20 cm) with soil from the control unimpacted site (SL), or with soil from the long-term oil spill-impacted sites containing 16 g/Kg (S163), 140 g/Kg (SSR), and 340 g/Kg (S205) total hydrocarbons (THC), according to a three-stage nested experimental design with five levels of hydrocarbon content and three to five replicates (Table 1). The pot systems were cultivated under greenhouse conditions at 32°C/12°C day/night and flooded daily with distilled water. Individual plants of 14-15 weeks of age from each experimental treatment were harvested and washed with cold distilled water. The whole root, basal corm, and leaf tissue were separated (Fig 1D and 1F). Whole organs from individual plants were then ground under liquid nitrogen for lipid extraction and fatty acid analyses. Additionally, freshly collected plants were used to estimate the humidity content of the organs using a thermobalance (Kern MLB 50-3, Kern & Sohn GmbH, Germany).
Lipid extraction. A modified version of the method reported by Bligh and Dyer [28] for the extraction and purification of lipids from biological materials was used. The total root (600-800 mg), basal corm (100-300 mg), and leaf (1000-3000 mg) tissue from individual plants was ground in the presence of liquid nitrogen and used to extract the lipidic fraction by vortex mixing (30 seconds) with the appropriate amount of chloroform-methanol-water (1:2:0.8). This ratio was adjusted according to the humidity content of the tissue (typically 77% for leaf and 51% for corm and root) to give a proportion of 200 mg tissue per mL of solvent mixture. Afterward, chloroform (1:1) was added and the mixture was homogenized by vortexing for 20 seconds. Finally, 1 mL of deionized water was added to reach a final proportion of 2:2:1.8 chloroform-methanol-water. The mixture was vortexed for 20 seconds, and the homogenate was filtered to remove the debris. The organic fraction containing the lipids was collected, dried, and resuspended to 1 mL with chloroform.
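The solvent-ratio adjustment described above is essentially an accounting exercise in which the tissue's own water is counted toward the aqueous part of the 1:2:0.8 mixture; the sketch below illustrates one way to do that arithmetic. The exact correction used by the authors is not given, so the function, its name, and the assumption that 1 g of tissue water occupies about 1 mL are illustrative only.

# Hedged sketch of the solvent-volume arithmetic for the modified Bligh & Dyer extraction.
# Assumption: tissue water counts toward the 0.8 water parts of the 1:2:0.8 mixture,
# and the total mixture volume is set so that 200 mg of tissue are extracted per mL.
def bligh_dyer_volumes(tissue_mg, humidity):
    total_ml = tissue_mg / 200.0                        # 200 mg tissue per mL of mixture
    tissue_water_ml = tissue_mg * humidity / 1000.0     # approximate 1 g of tissue water ~ 1 mL
    chloroform = total_ml * 1.0 / 3.8                   # 1 part of 1:2:0.8
    methanol = total_ml * 2.0 / 3.8                     # 2 parts
    water = max(total_ml * 0.8 / 3.8 - tissue_water_ml, 0.0)  # water still to be added
    return {"chloroform_ml": round(chloroform, 2),
            "methanol_ml": round(methanol, 2),
            "water_ml": round(water, 2)}

# Example: 2,000 mg of leaf tissue at 77% humidity.
print(bligh_dyer_volumes(2000, 0.77))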
Fatty acid methyl esters preparation
The fatty acid profile and composition were determined by preparing the fatty acid methyl esters (FAME) using a combination of the standard procedures reported by Paquot and Hautfenne [29] and Burja et al. [30] for the analysis of oils, fats and derivatives. Samples of 200 μL from the lipid extracts were transferred to 100 mL reaction flasks and evaporated to dryness for saponification by sequential addition of solid sodium hydroxide (2 flakes or 250 mg), water (2 mL), and methanol (20 mL). The reaction mixture was heated (70°C) until it was almost dry, and the unsaponifiable lipids were extracted twice by gentle shaking and decanting with hexane (5 mL). The saponification product was resuspended in hexane (1 mL) and acidified by the addition of 3 mL of concentrated hydrochloric acid (37% p/v) in 20 mL of methanol, and the mixture was heated for 2 hours at 70°C for the methylation of the free fatty acids. The mixture was heated until it was almost dry, the volume was adjusted to 20 mL with water, and the FAME were recovered by extraction 3 times with 10 mL of hexane. The organic fraction was concentrated and adjusted to 20 μL with hexane for the gas chromatography (GC) analysis.
Table 1. The saturated-unsaturated ratio and the total C12-C24 fatty acid content in the vegetative organs of Cyperus laxus cultivated in the phytoremediation systems of soils from oil spill-impacted sites containing several amounts of total hydrocarbons.
FAME Analysis
The FAME evaluation was performed using GC and GC-MS with a mixture of authentic fatty acid derivatives as reference compounds (SUPELCO 37 FAME Mix, Sigma-Aldrich, México).
Statistical analysis. A three-stage nested experimental design was used according to a single fixed factor with four levels of hydrocarbon content and three to five replicates (Table 1). The effect of the fatty acid type (FAME) is nested within the levels of the organ factor, and the effect of the organ factor is nested within the levels of the soil factor [Fatty acid content = Soil + Organ(Soil) + FAME(Organ) + FAME(Organ(Soil))]. For the statistical analysis we treated soil as a fixed effect, and organ and fatty acid type as random effects. The SPSS V 15 statistical package (SPSS Inc., 2006) and Microsoft Excel 2002 were used for the statistical analysis and the estimation of the marginal means. The hydrocarbon effect was evaluated using the general linear models (GLM) utility. The post hoc analysis to evaluate statistical differences between means was performed using the Scheffé test, and the Dunnett test was used for comparison of each of the phytoremediation treatments against the control SL, both at the 0.05 probability level.
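The nested model quoted above can be written down concretely; a minimal sketch in Python follows. The study itself used SPSS with organ and fatty acid type as random effects, so fitting the same nesting structure as a fixed-effects ANOVA with statsmodels, as done here, is a simplification, and the data file name and column names are hypothetical.

# Hedged sketch of the nested analysis of the fatty acid data (fixed-effects simplification).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed layout: one row per measurement with columns 'content' (mg FA per g FW),
# 'soil' (SL, S163, SSR, S205), 'organ' (leaf, corm, root) and 'fame' (fatty acid type).
df = pd.read_csv("fatty_acid_content.csv")        # hypothetical file name

model = smf.ols(
    "content ~ C(soil) + C(organ):C(soil) + C(fame):C(organ):C(soil)",
    data=df,
).fit()
print(anova_lm(model, typ=2))                     # nested ANOVA table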
Results and Discussion
Fatty acid profile in the organs of Cyperus laxus cultivated in uncontaminated soil. The high prevalence of C18:3n3 and the absence of the unsaturated hexadecatrienoic acid (C16:3) in this leaf extract should be noted. These results are consistent with reports on the fatty acid composition of leaves from other Cyperaceae species, such as Carex sp. [31] and Cyperus alternifolius [15], and demonstrate for the first time that C. laxus is a C18:3 plant. In C18:3 plants, part of the C16:0, C18:0 and C18:1 fatty acids produced by the plastidic prokaryotic fatty acid synthesis is exported to the cytoplasm and incorporated into the endoplasmic reticulum lipids and the polyunsaturated fatty acids through the eukaryotic pathway of glycerolipid synthesis [15,32]. In this work, the presence of hydrocarbons correlated positively with the accumulation of unsaturated fatty acids in the leaf (Table 1): the saturated/unsaturated ratio in leaf changed from 1.04 in SL to 0.87 in S205. In contrast, it has been reported that fatty acid desaturation in plants and cyanobacteria is inversely correlated with temperature, and it was suggested that such an increment in the unsaturated fatty acid fraction might improve the fluidity of the membrane [32].
The total C12-C24 fatty acid content in the leaf, corm, and root from these SL plants (Table 1) was significantly different (F = 17.3): 1.09, 4.46, and 0.31 mg FA/g FW, respectively, which, on the basis of a humidity content of 77% for the leaf and 51% for the corm and root, corresponds to 4.8, 9.1, and 0.6 mg FA/g DW, respectively. Because there are no reports of the fatty acid content either in the leaf of C. laxus or in the leaf and root of other Cyperus species, the results from the present study cannot be compared with data from the literature. However, for corm, the value from this work is at the lower limit of the range reported for tubers or corms from other Cyperus species, such as C. rotundus (6 mg/g DW) or C. esculentus (51-74 mg/g DW), which usually ranges from 1 to 300 mg FA/g DW [33][34][35][36].
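The fresh-weight to dry-weight conversion above follows directly from the quoted humidity values, content per g DW = content per g FW / (1 - humidity); a minimal check:

# Check of the fresh-weight to dry-weight conversion: DW content = FW content / (1 - humidity).
for organ, fw_content, humidity in [("leaf", 1.09, 0.77), ("corm", 4.46, 0.51), ("root", 0.31, 0.51)]:
    print(organ, round(fw_content / (1.0 - humidity), 1), "mg FA/g DW")
# prints approximately 4.7, 9.1 and 0.6, consistent with the 4.8, 9.1 and 0.6 mg FA/g DW quoted above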
Fatty acid profile of Cyperus laxus from the phytoremediation systems. The effect of the THC level on the fatty acid profile (Fig 2), the fatty acid content (Fig 3 and Table 1), and the cumulative chain length distribution (insets in Fig 3) was different for each organ (Fig 4). As shown in the fatty acid profile of the lipid extracts from the leaf of plants from the phytoremediation system of the soil from the impacted site containing 340 g/Kg THC (Fig 2), unusual fatty acids, such as odd-numbered carbon chains (C15, C17, C21, and C23) and uncommon unsaturated chains (C20:3n6 and C20:4), were observed together with a remarkable enhancement of the C22:2 and C24:0 chains in the corm and the leaf of the plants from the contaminated soils. The low level or absence of hexadecatrienoic acid (C16:3) in the leaf of plants from the phytoremediation systems (Fig 3) is consistent with the estimated content of C16:3 fatty acid for the leaf of plants from the overall phytoremediation treatments of soils from the oil spill-impacted sites, based on predictions using the fit model Content = Soil + FAME (Fig 4). This observation agrees with the above results and confirms that C. laxus is a C18:3 plant.
As shown in Table 1, the total C12-C24 fatty acid content was significantly different between sites (F = 39.6) and between organs for each site (F = 27.1). In plants from the phytoremediation systems, the fatty acid contents in the leaf and the corm were negatively affected by the THC level; in contrast, for the root tissue, the effect was noticeably positive for a THC content below 340 g/Kg soil. The effect observed for the root tissue is consistent with reports for monoaxenic cultures of Cichorium intybus grown in the presence of benzo[a]pyrene [27] and Zea mays seedlings cultured in the presence of monoterpenes [37], where increases in the total fatty acid content in the roots after the xenobiotic application were observed. However, the results for the corm in the present study contrast with the study of Stoller and Weber [33], where significant increases in the fatty acid content in tubers from a cold-tolerant variety of C. esculentus were reported after a 6-week exposure to a temperature of 2°C.
As expected from the above results, the saturated/unsaturated ratio between organs was positively affected by the hydrocarbon presence (Table 1). For plants from uncontaminated soil (SL, F = 17.), the prevalence of the saturated and unsaturated fractions was equivalent (50%) in the leaf; however, in the root, the unsaturated fraction was predominant (56%), with an absence of C16:2 but with similar amounts of C16:3 and C18:3. In contrast, in the corm, the unsaturated fraction had a noticeably lower prevalence (45%), with a similar content of C16:3 and C18:0-C18:3. These results for the corm and leaf of C. laxus plants grown in uncontaminated soil contrast with reports on the fatty acid composition of tubers [35,38,39] and leaves [33] of C. esculentus, which contain high amounts of monounsaturated fatty acids with a considerably lower prevalence of saturated ones: the saturated/unsaturated ratios in the tubers ranged from 0.12 to 0.71, with palmitic acid as the main saturated acid (15%) and oleic acid as the predominant unsaturated acid (72%), and from 0.38 to 0.52 in the leaf, again with palmitic acid as the main saturated acid (30%) but with linolenic acid as the predominant unsaturated acid (50%). For C. laxus grown in contaminated soils, as discussed previously, the presence of hydrocarbons correlated positively with the accumulation of unsaturated fatty acids in the leaf: the prevalence of the unsaturated fraction changed from 49.1% in SL to 53.6% in S205 (Table 1). However, in the root the effect was not clear, and the unsaturated fraction ranged from 52% to 67%, compared to 55.6% in SL. In contrast, in the corm the saturated fraction was prevalent and noticeably enhanced by the hydrocarbon presence: from 54% in the SL control plants to 70% in the plants grown in soil with a THC of 140 g/Kg soil (SSR). As expected, the fatty acid profile was also organ-specific and dramatically affected by the hydrocarbon presence (Fig 3). It is evident that the chain length distribution of the C18 fatty acid group was predominant for each organ (see the insets in Fig 3 and the marginal means in Fig 4). However, it should be noted that the major cumulative content of the C18 chain fatty acids was the result of the prevalence of higher and similar amounts of C18:0, C18:1, C18:2, and C18:3n3, in conjunction with low or absent levels of C16:1, C16:2, and C16:3 (Fig 4). For instance, in the leaf from the control plants (SL in Fig 3), the amount and profile of short chain fatty acids (C12-C14) were typically lower than those of medium chain length acids (C16-C18). However, in the corm and roots, the amount of C12-C14 was similar to the amount of C16. Interestingly, the presence of hydrocarbons resulted in an increase in the proportion of short chain fatty acids in the leaf and the roots; however, in the corm a general decrease in these fatty acids was observed. Thus, although the fatty acid profile in each organ was similar between plants cultivated in the same treatment, the presence of hydrocarbons affected the content of some specific fatty acids in the organ. This effect was particularly evident for leaf fatty acids, reducing the content of the C14-C20 group by more than 50%. In contrast, the content of the fatty acids in the roots increased noticeably with the presence of hydrocarbons (Table 1), including the long chain fatty acids C20:2, C22:0 and C22:1, frequently even in soils containing high levels of THC (Figs 2 and 3).
In addition to the typical fatty acids, small amounts of uncommon odd-numbered and branched carbon chain fatty acids were detected in the chromatograms of the roots and the leaf from the phytoremediation systems (Fig 2), but they were not identified. In summary, the content of fatty acids in the vegetative organs of plants from uncontaminated sites decreased in the order corm > leaf > root; however, for plants grown in soils from contaminated sites, the order was corm > root > leaf (Fig 4).
As outlined in Fig 5, the above results suggest that in the leaf and the root the hydrocarbons' presence did not significantly affect the partition of the carbon flux toward any of the fatty acid biosynthetic pathways, keeping both the prokaryotic and eukaryotic fatty acid pathways working in synchrony to maintain balanced growth. Therefore, for plants grown in contaminated soil, the increment in the fatty acid content observed in roots, in association with a concomitant decrement of total fatty acids in the corm and leaf, suggests that the hydrocarbons may have negatively affected the translocation of micronutrients from the root to the corm and the leaf, and that the metabolism might have been redirected to the biosynthesis of wax and suberin monomers, giving the root cells a higher resistance to the xenobiotic presence. However, the way the translocation of carbohydrates from the photosynthetic tissue to the root tissue was affected is uncertain, and therefore the increase of the fatty acid level in the root in the presence of hydrocarbons deserves further study. Nevertheless, these results imply that the observed changes in the fatty acid profile were dependent on both the hydrocarbon amount and the ability of C. laxus to adapt the intrinsic cellular lipid metabolism of each organ in response to the environmental challenge, as has been reported for other plant species grown under stress conditions [40,41]. Such metabolic adaptation may also differentially enhance the production of uncommon compounds, as observed here for long chain fatty acids and for branched and odd-numbered carbon chain fatty acids. Long chain fatty acids (C20-C34) are common components or precursors of cellular structures, such as membranes, cuticle, suberin, and waxes [40,42,43]. Variations in their profile and content in plants grown in contaminated soils are congruent with reports suggesting that important biochemical and physiological changes at the cellular, organ and whole-plant levels are involved in the response to the presence of weathered hydrocarbons such as PAH in phytoremediation systems [44][45][46]. However, because phytoremediation is a very complex system involving interactions between xenobiotics, microbes, plants, and soil, alternative experimental procedures are needed to evaluate the involvement of specific PAH uptake by plants, with respect to the fatty acid content and profile, without microbial interactions.
Why C. laxus grows in areas disturbed by oil spills, and whether there is a relationship between its growth pattern and the C18:3 biochemical characteristic or its production of underground storage organs, is unknown. It is also uncertain whether the increment in the unsaturated fatty acid fraction in the leaf and the root of plants cultivated in the presence of hydrocarbons produces changes in the physical properties of the membrane; however, it can be hypothesized that the incorporation of long-chain unsaturated fatty acids into the membrane might improve the absorption of such compounds, and therefore their degradation.
Finally, most Cyperaceae species show the Kranz anatomy related to C4 photosynthesis and also produce underground storage organs [10,16]; however, it has been reported that C. laxus lacks the Kranz leaf anatomy characteristic of C4 plants [11,47], although it does produce corms (Fig 1), and according to its C16:3/C18:3 fatty acid balance in leaf tissue this species is a C18:3 plant. Because some species with C4 photosynthesis do not show a clear Kranz anatomy [18,19], it is uncertain whether C. laxus uses the C4 photosynthetic pathway to grow in the oil-impacted sites, and this subject deserves further research.
Conclusions
Cyperus laxus is a C18:3 plant species that produces corms and is able to survive in soils with high levels of hydrocarbons. The fatty acid profiles in the vegetative organs of plants from the phytoremediation systems were noticeably affected by the hydrocarbon levels, showing an increase in the unsaturated fatty acids and the long chain fatty acids in the leaf and root tissue, which suggests that the hydrocarbon uptake during the phytoremediation process depends on the cover lipid composition of the roots. The incorporation of such unsaturated fatty acids into the cell membranes of root tissue might improve the absorption and degradation of the hydrocarbon compounds. The main type of hydrocarbons found in the aged oil spill-impacted sites was PAH [25]. Some of these PAH could move through the plant organs by the apoplastic route, promoting a general increase in the content of long chain fatty acids (C18 and C20-C24), mainly in the root, where they may negatively affect the translocation of micronutrients from the root to the corm and the leaf. In leaf and root tissues the hydrocarbons' presence did not significantly change the partition of the carbon flux toward any of the fatty acid biosynthetic pathways, keeping both the prokaryotic (plastidic) and eukaryotic (ER) fatty acid pathways working in synchrony to maintain balanced growth. Instead, the major cumulative content of long chain and C18 chain fatty acids was the result of the prevalence of higher and similar amounts of C18:0, C18:1, C18:2, and C18:3n3, in conjunction with low or absent levels of C16:1, C16:2, and C16:3. This suggests that this Cyperus species might direct the intracellular fatty acid metabolic flux to reinforce cellular structures such as the plasma membrane (MP), cutin, suberin, and epicuticular wax (Ew), to protect the integrity of the whole plant. Such changes in the fatty acid metabolic flux should involve an important biochemical and physiological adjustment of the plant in response to the hydrocarbons' presence. These adjustments may be autoregulated by metabolically flexible nodes embedded in the metabolic map of the plant. | 2018-04-03T03:56:53.593Z | 2015-10-16T00:00:00.000 | {
"year": 2015,
"sha1": "931298949308ecab0b8cf9e0881e7d583b813b0c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0140103&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "931298949308ecab0b8cf9e0881e7d583b813b0c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
210958448 | pes2o/s2orc | v3-fos-license | Effect of Insulin on Proximal Tubules Handling of Glucose: A Systematic Review
Renal proximal tubules reabsorb glucose from the glomerular filtrate and release it back into the circulation. Modulation of glomerular filtration and renal glucose disposal are some of the insulin actions, but little is known about a possible insulin effect on tubular glucose reabsorption. This review is aimed at synthesizing the current knowledge about insulin action on glucose handling by proximal tubules. Method. A systematic article selection from Medline (PubMed) and Embase between 2008 and 2019. 180 selected articles were clustered into topics (renal insulin handling, proximal tubule glucose transport, renal gluconeogenesis, and renal insulin resistance). Summary of Results. Insulin upregulates its renal uptake and degradation, and there is probably a renal site-specific insulin action and resistance; studies in diabetic animal models suggest that insulin increases renal SGLT2 protein content; in vivo human studies on glucose transport are few, and results of glucose transporter protein and mRNA contents are conflicting in human kidney biopsies; maximum renal glucose reabsorptive capacity is higher in diabetic patients than in healthy subjects; glucose stimulates SGLT1, SGLT2, and GLUT2 in renal cell cultures while insulin raises SGLT2 protein availability and activity and seems to directly inhibit the SGLT1 activity despite it activating this transporter indirectly. Besides, insulin regulates SGLT2 inhibitor bioavailability, inhibits renal gluconeogenesis, and interferes with Na+K+ATPase activity impacting on glucose transport. Conclusion. Available data points to an important insulin participation in renal glucose handling, including tubular glucose transport, but human studies with reproducible and comparable method are still needed.
Introduction
The global prevalence of diabetes has almost doubled in the last three decades. This disorder is a major cause of kidney failure, accounting for up to 44% of end-stage renal disease worldwide, besides a more than ten-fold increase in the need for dialysis and renal transplantation [1]. Kidneys are the leading organs involved in insulin clearance from the systemic circulation [2]. They contribute to endogenous glucose production through gluconeogenesis, primarily in proximal tubule (PT) cells [3], under glucose and insulin regulation [4]. Furthermore, PTs reabsorb glucose following its glomerular filtration through the sodium-glucose linked transporters (SGLTs), mainly SGLT2, located on the luminal surface of PT cells [5]. Consequently, renal glucose handling also depends on glucose glomerular filtration [6,7] and on the degree of kidney damage [8].
The insulin effect on renal sodium handling has been extensively studied [9]. There is also evidence of direct [10] and indirect [11] insulin effects on glomerular filtration and of modulation of renal glucose expenditure [12][13][14]. However, its action on renal glucose transport is still poorly understood.
High glucose absorption and flux, as in diabetes, may induce tubular damage via an SGLT2-dependent pathway [27,28]. The enhanced SGLT2 activity causes mitochondrial dysfunction through a more extensive glucose flux inside PT cells [29], resulting in high oxidative stress and cellular apoptosis [30][31][32]. Since insulin signalling directly preserves mitochondrial metabolism and function, insulin resistance can trigger mitochondrial dysfunction and damage [33], contributing to renal injury. Reciprocally, impaired mitochondrial function reduces insulin sensitivity [33]. These findings may explain the protective effect of SGLT2 inhibition on kidneys and suggest an intrinsic relationship between renal glucose transport and insulin signalling.
Insulin has been used as a diabetes therapy since 1921 [34]. It is the principal resource to treat type 1 diabetes (T1D) as well as type 2 diabetes (T2D) patients in whom oral treatment has failed. New therapy options include the SGLT2 inhibitors (SGLT2i), which block renal glucose reabsorption and can be used as monotherapy or as an add-on to oral antihyperglycaemic drugs or insulin, at least in T2D patients [35]. In this way, knowing the interactions between insulin and glucose transport by PTs is important to understand not only the renal impairment of diabetes but also interactions among therapy drugs, mainly of insulin with SGLT2i.
This review is aimed at describing and summarizing the current understanding of the insulin effect on PTs and at discussing the main points involved in this process.
Methods
Original studies, written in English, assessing primary or secondary insulin action on glucose handling by PTs in humans, animal models, tissues, or cell cultures were eligible for inclusion. Medline (PubMed) and EMBASE were searched for the period from 2008 to June 2019. Articles published before 2008 that were important for the understanding of the review and were cited in the references of at least one selected article were also included.
Search terms included the following: insulin, diabetes, T1D, T2D, renal, kidney, proximal tubule or tubules, GLUT, GLUT1, GLUT2, SGLT, SGLT1, SGLT2, and Na+K+ATPase and its derivative terms (for example, NKA, NaKAtpase, or NKpump). We performed a triple-term search in both databases with insulin, diabetes, T1D, and T2D as the first term; renal, kidney, and proximal tubule or tubules as the second; and the transport proteins as the third. After that, an inclusive double-term search without the second term was performed only in PubMed.
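To make the triple- and double-term strategy concrete, the short sketch below enumerates the term combinations; the Boolean operator (AND), the query formatting, and the exact list of transporter-derived terms are assumptions, since the syntax actually submitted to PubMed and EMBASE is not reported.

# Illustrative reconstruction of the search-term combinations described above.
from itertools import product

first = ["insulin", "diabetes", "T1D", "T2D"]
second = ["renal", "kidney", "proximal tubule", "proximal tubules"]
third = ["GLUT", "GLUT1", "GLUT2", "SGLT", "SGLT1", "SGLT2", "Na+K+ATPase", "NKA"]  # illustrative subset

triple_queries = [" AND ".join(terms) for terms in product(first, second, third)]
double_queries = [" AND ".join(terms) for terms in product(first, third)]  # PubMed-only search
print(len(triple_queries), "triple-term queries;", len(double_queries), "double-term queries")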
Two reviewers (R.P.M. and E.M.) independently evaluated the titles and abstracts and then the full text for inclusion eligibility.
Intervention studies with SGLT2i that did not evaluate insulin effect on PTs as well as those regarding glomerular function or diabetic nephropathy not related to glucose transport were excluded. Studies about renal gluconeogenesis and renal insulin resistance were included because of the possible influence of those processes on PT glucose handling.
We developed a data extraction table covering the methods and outcomes of the selected studies. One investigator extracted the data (R.P.M.) and the other reviewed it (E.M.). The extracted data included general information (title, authors, and year of publication), type of study, objectives, methodological characteristics (humans, animal models, cell cultures, renal site of evaluation, insulin intervention, isolated insulin effect, type and duration of diabetes, and insulin therapy length), and the main outcomes related to the review aims.
Results
The articles were selected as described in Figure 1.
A total of 2385 articles were retrieved. After title evaluation, 1983 articles were excluded (reviews, or articles not related to kidneys, insulin action, or glucose handling), leaving 402 articles for abstract selection. After abstract analysis, 228 articles were excluded using the same criteria, and 174 articles were selected for full-text reading. Full reading resulted in 126 selected articles from the initial search, and a further 54 articles were obtained from their references. Thus, a total of 180 articles were included in this review. Another 32 papers, including some reviews, were used to introduce and explain our aim and the result topics. The selected articles were clustered into topics and used to construct the summary of evidence described below.
3.1. Renal Insulin Handling. Insulin handling by the kidneys and the hormone concentration differences along the renal capillaries and tubules will be described before its action on PTs to facilitate the understanding of insulin effect at PT level and emphasize its importance.
While the liver removes around 50% of portal insulin during its first pass [36,37], kidneys are the major organs responsible for the insulin clearance from the systemic circulation removing about 35% of total secreted insulin [2]. Most of this clearance occurs in the glomerulus impacting the hormone bioavailability in the tubular lumen and peritubular capillaries at PT level and other downstream nephron segments [2]. The majority of insulin is freely filtered in glomerular capillaries being virtually totally recovered by PT cells, predominantly across the brush border membrane (BBM), where insulin translocates through endocytic vesicles to vacuoles and then is degraded [2,38]. Endocytosis occurs after insulin binding to the megalin-cubilin complex and, to a lesser extent, to the specific insulin receptors (IRecs) present on PT BBM. Megalin is a protein of the transmembrane complex that recovers the majority of serum proteins, including insulin. It is expressed in the PT proximal segment S1 and slightly in the intermediary S2 and distal S3 PT segments [39,40]. Insulin increases its own uptake and degradation by inducing a rise in megalin content [41]. Less than 1% of filtered insulin undergoes transcytosis to the basolateral membrane [42], and only 1% is excreted in the urine [43][44][45]. The remaining nonfiltered insulin reaches the postglomerular peritubular circulation where insulin clearance takes place through specific IRecs binding mainly at PT level. In PTs, insulin reaches its highest concentration and acts on gluconeogenesis suppression [12,46,47] and, possibly, on glucose transport [21,48,49]. Furthermore, insulin is disposed in other tubular sites where IRecs are found in high density, like the medullary thick ascending limb of Henle's loop and the distal convoluted tubules where it stimulates sodium reabsorption [50,51].
Insulin is degraded mainly by the enzymes protein disulfide isomerase, cathepsin D, and especially the insulin-degrading enzyme (IDE). IDE is upregulated by insulin in the central nervous system, but little is known about its renal regulation [52,53]. SNX5, a member of the sorting nexin protein family, regulates the intracellular trafficking and expression of IRecs in PTs and upregulates IDE expression and function. The colocalization of IDE and SNX5 next to the BBM reduces insulin levels, while deficiency of one or both regulators leads to increased circulating insulin levels, decreasing IRec expression and inducing insulin resistance [54,55].
3.2. Proximal Tubule Glucose Transport. In this section, results of experimental and clinical studies are described, aiming at exploring the relationship between insulin signalling and its effect on PTs, on glucose excretion, and on renal glucose transporters, particularly in diabetes. The first topic is a brief description of the renal glucose transporters, their localization, and their function.
3.2.1. Renal Glucose Transport Proteins. Two protein families, GLUTs and SGLTs, are in charge of the glucose transport in PT S1, S2, and S3 segments [5,56]. GLUTs, highly expressed in kidneys, are facilitative glucose transporters present ubiquitously on cellular surfaces, composing a saturable, stereoselective, and bidirectional transport system. While GLUT1 has a high affinity for glucose, GLUT2 is a low-affinity and high-capacity transporter also mediating galactose, mannose, and fructose transport [56]. SGLTs utilize the electrochemical sodium gradient to move glucose against its concentration gradient [5]. The two types of renal SGLTs, SGLT1 and SGLT2, differ in sodium to glucose stoichiometry, sugar selectivity, sites of expression, and regulation [5,57], even if one electrophysiological study has demonstrated similar affinities [58]. SGLT2 has higher transport capacity and is more able to adjust its glucose transport proportionally to glucose concentrations than SGLT1 [5].
In rats, GLUT1 is located in the S3 segment. It is also found in the thick limb of Henle's loop and collecting ducts [59,60], metabolically active sites that consume large amounts of glucose as substrate [61]. GLUT2 expression has been demonstrated in the S1 segment [60,62]. SGLT1 is found along all PT segments [59,63], and its density in the BBM and intracellular organelles increases from S1 to S3 being higher in the outer medulla than in the cortex [63,64]. SGLT2 is situated in the renal cortex [65], especially in the S1 and S2 segments [66,67], and its expression is higher in the former [66]. In humans, expression of SGLT2 protein occurs in S1 and S2 whereas SGLT1 is expressed in the S3 segment. The two proteins are present only on the BBM side [57]. To our knowledge, studies regarding GLUT2 tubular localization were not performed in human but its mRNA has been demonstrated in PT cells [68][69][70].
3.2.2. Tubular Glucose Transporters in Animal Models of Diabetes. In this topic, studies in diabetes models involving quantitative modification of a specific glucose transporter mRNA or protein were clustered (Table 1). This kind of study does not quantify the real dynamic function of the glucose transporters or their activity variation. However, taken together, such studies can suggest transporter impairment in diabetes.
Most of the studies evaluating GLUT1 in these models were carried out in streptozotocin (STZ) rats. They predominantly reported higher GLUT1 protein [79][80][81][82] and corresponding mRNA [81][82][83][84][85] contents in the whole kidney and increased GLUT1 protein [86,87] and mRNA [86,88,89] in the cortex. Nonetheless, these results are still controversial [22,67,[90][91][92][93]. In STZ rats, S3 GLUT1 mRNA availability rose and returned to its normal values after one month of diabetes induction, while cortical (mainly S1 and S2 segments) GLUT1 remained at low levels until six months. Subsequent insulin treatment increased the cortical GLUT1 content but did not change that of S3 [24]. On the contrary, in insulin-resistant animals, GLUT1 in the S3 segment decreased in the first 3 months of diabetes and increased in the next 3 months, when cortical GLUT2 activity was enhanced [25]. So, GLUT1 seems to have a differentiated regulation depending on which tubular segment is evaluated, on insulin deficiency or resistance, and on the diabetes duration.
Regarding GLUT2, the results of diabetes murine models are debatable [22, 24-26, 79, 83-85, 87, 90-100]. In addition, many studies have been carried out in STZ rats [24, 26, 79, 83, 85, 87, 90-93, 96, 98], and STZ induces diabetes through beta-cell apoptosis after being transported by GLUT2 [101]. Theoretically, the same can occur in the proximal portions of PTs where GLUT2 is coupled to SGLT2. This toxicity could change the proportions of active cells impairing the evaluation of these transporters [101,102]. In a STZ model, the increased cortical GLUT2 mRNA availability was normalized after seven days of insulin replacement [24], but glycaemic changes could have modified the results, making their interpretation problematic.
In summary, T1D models showed increased GLUT1 in both the whole kidney and cortex. These changes can be transitory and site-specific. GLUT2 results are still controversial. SGLT1 results were concordant only regarding the upregula-tion of mRNA expression in T2D models. Studies frequently reported SGLT2 contents as increased in both models, a plausible reason for the higher renal glucose uptake of diabetic patients. However, whether transporter changes are due to high glycaemic levels or reduced insulin signalling or both is still an open question.
Notes to Table 1: Results were compared to the corresponding controls; numbers are references, and the study model is given in parentheses. *Results for GLUT1 are specified for whole kidney (WK) or cortex (C) because of the different availability of GLUT1 in distinct nephron sites, whereas GLUT2, SGLT1, and SGLT2 are present only at the proximal tubule level. (a) Short-duration diabetes. (b) Initially reduced, followed by a partial recovery but remaining at lower levels. (c) Protein activity was also reduced. STZ: streptozotocin model; db/db: leptin receptor mutation model; GK: Goto-Kakizaki diabetic rats; HFD: high-fat diet; OLETF: Otsuka Long-Evans Tokushima Fatty rats; MG: monosodium glutamate treatment. §Mixed model with insulinopenic and insulin-resistant rats. #Insulin resistance without changes in glycaemic levels compared to controls.
3.2.3. Tubular Glucose Transporters and Renal Glucose Handling in Diabetes: Human Studies. Renal glucose reabsorption is proportional to glycaemic increments until blood glucose levels exceed the renal threshold for glucose (RTG), when glucose starts to appear in the urine [114,115]. As glucose concentration rises above the RTG limit (around 10 mmol/L [5,15,16]), the increment in the rate of tubular glucose reabsorption slows down in an initial nonlinear curve termed splay [116,117]. It is followed by a constant glucose reabsorption rate that has been studied since 1940 and
defines the maximum renal glucose reabsorptive capacity (Tmax). After Tmax is reached, increments in blood glucose result in equal linear increments in glycosuria [48,116,117]. Tmax for glucose is 15 to 20% higher in diabetic patients (356 to 463 mg/min) compared to healthy subjects (303 to 404 mg/min) [15,16,48,118], despite the RTG variability in the former overlapping the expected RTG of the latter [17][18][19][119]. The RTG seems to be increased in patients with T2D, especially in the elderly and those with long diabetes duration and higher body mass index [19]. In these patients, who are supposed to be the best candidates for SGLT2 inhibition because of their high RTG, the damaged kidney structure and its reduced function may impair the expected glycosuric response. In fact, a better SGLT2i effect is observed in younger diabetic patients [28].
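The threshold-and-Tmax relationship described in this paragraph can be summarized in a few lines; the sketch below ignores the splay region and uses illustrative values (a GFR of 125 mL/min and a Tmax of 375 mg/min, within the ranges quoted above), so it is a didactic simplification rather than a model taken from any of the cited studies.

# Didactic sketch: reabsorption equals the filtered load up to Tmax; any excess is excreted.
def renal_glucose_handling(plasma_glucose_mg_dl, gfr_dl_min=1.25, tmax_mg_min=375.0):
    filtered = gfr_dl_min * plasma_glucose_mg_dl      # filtered glucose load (mg/min)
    reabsorbed = min(filtered, tmax_mg_min)           # saturable tubular reabsorption
    excreted = filtered - reabsorbed                  # urinary glucose excretion (mg/min)
    return filtered, reabsorbed, excreted

for glucose in (90, 180, 300, 500):                   # plasma glucose in mg/dL
    print(glucose, renal_glucose_handling(glucose))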
The few studies carried out in kidneys from T2D patients reported decreased [70] or unchanged [68] GLUT1 mRNA levels while GLUT2 mRNA was described as reduced [68] or raised [100]. In exfoliated PT cells, isolated from the urine of T2D patients and cultured in a hyperglycaemic environment, GLUT2 and SGLT2 protein and mRNA were increased compared to healthy controls [69].
In diabetic patients, SGLT1 mRNA levels in tissues from biopsies [68,100] or nephrectomies [70] were unchanged [70,100] or raised [68] without any data about protein levels. Regarding SGLT2, its mRNA levels were described as increased [100] or reduced [68,70] while increased protein content was reported [100]. These very conflicting results can be explained by methodological differences in tissue collection and storage, diabetic status, and possible kidney abnormalities of the control group.
3.2.4. Glucose Effects on Renal Glucose Transporters. In animal models, plasma and luminal glucose concentrations have been shown to stimulate GLUT2 expression [26,120] and even to translocate the transporter from the basolateral to the BBM side [26]. In canine PT polarized cultures with apical and basolateral cell layers, GLUT2 migrated to the apical side when exposed to an isolated glucose stimulus [20].
Both SGLTs also seem to be under glucose influence. In cultures of human embryonic kidney (HEK) cells, glucose promoted trafficking of SGLT1 proteins to plasma membrane without changes in the total pool [23] but did not change SGLT1 mRNA levels in PT cultured human kidney-2 (HK2) cells [22]. In addition, glucose stimulated SGLT2 mRNA transcription and amplified SGLT2 protein pool in cultures of human PT cells [22] and promoted its translocation from the intracellular compartment to the membrane in HEK cell cultures [21]. One study, on the other hand, reported a neutral glucose effect on SGLT2 content and/or activity in cultures of human PT cells [121].
3.2.5. Insulin Effect on Renal Glucose Transport. Insulin effects on cells and tissue metabolism result from a highly integrated network of different pathways [122]. IRecs on the cell surface, after insulin binding, phosphorylate the insulin receptor substrate proteins (IRS) that, in turn, activate two main signalling pathways: the phosphatidylinositol 3-kinase (PI3K)/protein kinase B (AKT) pathway, which regulates the majority of insulin metabolic effects, and the Ras-mitogen-activated protein kinase (MAPK) pathway. MAPK participates in the control of cell growth and differentiation through gene expression regulation [122][123][124]. Insulin itself is the foremost inhibitor of its own signalling [123].
In 1951, Farber et al. demonstrated that insulin decreased Tmax in diabetic patients, but under very high insulin plasma levels [48]. However, in a recent trial, physiological insulin levels increased urinary glucose excretion under hyperglycaemic conditions in healthy but not in diabetic volunteers [125]. Both studies separated the insulin effect from glycaemic variation. Thus, the emerging questions are as follows: in which way is the higher Tmax of diabetic patients related to insulin resistance, hyperinsulinaemia, or insulin deficiency; and what are the relationships between insulin signalling and the activity of PT glucose transport proteins. That provides a rationale to investigate an insulin effect, isolated or combined with glucose and insulin resistance, on glucose transport proteins, mainly on their function.
Assessing insulin action by itself, a dual temporal insulin effect on glucose uptake was reported in murine PT cultures: uptake increased in the first twenty minutes and returned to the initial rate after thirty minutes [49]. In these cultures, insulin increased GLUT1 mRNA and membrane protein contents, but other glucose transporters were not evaluated. Accordingly, regulatory proteins involved in pathways triggered by insulin upregulate the cell surface GLUT1 expression in HEK cell cultures [126]. GLUT1 traffic to the apical membrane in HEK cells has been demonstrated under PI3K/AKT signalling, with elevated glucose uptake [127]. The AKT signalling interacts with megalin and with the AKT substrate of 160 kDa (AS160), the most downstream insulin signalling step related to insulin-stimulated glucose transport [126,128]. This signalling reproduces the insulin-dependent GLUT4 traffic demonstrated in adipocytes [129] and myocytes [130] and could explain the GLUT1 rise in HEK and PT cultures exposed to insulin. Concerning renal GLUT2 expression, it was elevated in the presence of insulin resistance, visceral obesity, high triglycerides, and low high-density lipoprotein cholesterol concentrations, even under normal glucose levels, in Otsuka Long-Evans Tokushima Fatty (OLETF) rats [25], a T2D model. Regarding the SGLT system, insulin seems to regulate SGLT1 directly [21] and indirectly [131,132]. In HEK cell cultures, two hours of insulin exposure inhibited SGLT1 activity [21]. In contrast, the serum- and glucocorticoid-inducible kinase (SGK1), which is activated by both glucose [133] and insulin [131], stimulated SGLT1 function [132]. The reported findings are very important since the SGLT1 system is virtually fully activated after SGLT2i use or at high glycaemic levels, as in diabetes. Besides, SGLT1 is the predominant BBM glucose transporter in the PT S3 portion [63,64], the nephron site with the highest IRec level [46].
Experimental studies indicate SGLT2 activation by insulin. In fact, IRecs seem to be required for maximal SGLT2 expression and SGLT2-mediated glucose reabsorption, as evidenced by studies in mice with renal tubule-specific knockout of IRecs [134]. Insulin also raised SGLT2 activity [21,121] and protein levels [121] independently of glucose concentrations in cultures of human kidney cells. In HEK cells, insulin increased SGLT2 glucose transport by 200 to 300%, probably by stimulating SGLT2 translocation from an intracellular pool to the S1 and S2 BBM segments [21]. A similar finding was reported using cultured human PT cells, where insulin increased SGLT2 content and/or activity in a dose-dependent manner [121]. However, in HK2 cells, activation of the liver X receptor decreases SGLT2 protein and its function. The liver X receptors are a nuclear receptor family that plays a major role in energy metabolism and regulates several membrane transporters. As insulin activates the liver X receptor, it could indirectly decrease the SGLT2 content [135]. Furthermore, in an Alloxan T1D rat model, insulin reduced SGLT2 mRNA independently of glucose levels [112]. Despite the conflicting data, all these findings open the possibility that the higher SGLT2 levels in diabetic states can be attributed not only to elevated glycaemic concentrations but also to a direct or indirect insulin action. Moreover, insulin resistance perhaps modulates SGLT availability and activity, but this issue has not been well evaluated so far.
As insulin resistance is associated with an imbalance of the autonomic system, insulin could indirectly modulate the RTG and Tmax through sympathetic system stimulation. In fact, the reduction of renal sympathetic activity limits SGLT2 excessive transcription in rat models enhancing urinary glucose excretion [22] as well as reducing renal gluconeogenesis in pigs [136].
Organic anion transporters (OAT), proteins situated in the basal membrane of PT cells, contribute to the cellular uptake and secretion of multiple molecules to the luminal side, including the SGLT2 inhibitors [137,138]. The SGLT2i action is related to the SGLT2i luminal concentration reached in the S1 and S2 portions and thus depends primarily on glomerular filtration [139]. However, tubular secretion of SGLT2i [140] mediated by OAT proteins increases its tubular concentration and action [137]. OAT type 3 (OAT3), through its colocalization with SGLT2 but not with SGLT1, enhanced the empagliflozin glycosuric effect [140]. The findings that insulin raises [141] and insulin resistance decreases [142] OAT3 activity in the renal cortex suggest a link between insulin action and pharmacological inhibition of SGLT2. Indeed, a better understanding of insulin effects on tubular glucose transport and their interaction with SGLT2i is imperative.
3.2.6. Na + K + ATPase (NKA). The ubiquitous NKA protein and its activity had been intensively studied for some decades before our review interval. This transporter is under the influence of many factors, including glucose, catecholamines, C-peptide, insulin, and other hormones [143,144]. The insulin effect on NKA activity is cell type-specific and depends on the time and intensity of exposure, displaying acute and chronic responses [144,145].
NKA maintains a sodium gradient across the basolateral membrane of PT cells that provides the driving force for SGLT activity [146]. In this way, changes in NKA activity presumably have an impact on SGLT function and glucose recovery. As insulin influences NKA function and that function directly modulates SGLT glucose uptake, evaluating NKA activity in diabetes can give important information concerning the mechanisms by which insulin regulates renal glucose handling.
Old studies in diabetes models evaluated the NKA activity in the whole kidney and nephron segments, but not in isolated PTs [147], and were inconclusive. Results of recent studies in the whole kidney are still contradictory [148][149][150][151][152][153][154] probably because of mixed tissue responses and discrepancies in disease duration and glucose levels.
In the renal cortex of murine STZ models, NKA activity was reported as increased [155][156][157] or as reduced due to impaired insulin binding to its receptor [158]. In two of those reports with increased NKA activity, insulin treatment reduced it [155,157]. The duration of disease, i.e., sustained hyperglycaemia or chronic adaptation to it, could have contributed to the differences, as in one study diabetes lasted twice as long as in the other. A specific study on PTs of T2D rats showed a raised NKA activity [159]. In any case, none of these studies investigated the insulin and glucose effects separately.
Although they do not fully represent the real in vivo process, cell culture studies evaluating isolated insulin and glucose effects can give a better understanding of the interaction between NKA activity and insulin signalling. Glucose reduced NKA membrane protein and its activity in cultured tubular cells from human nephrectomies [143], and an indirect effect of glucose was demonstrated in HK2 cell cultures, where advanced glycation end products reduced NKA activity [160,161]. An inhibitory glucose effect was also demonstrated in cell cultures of a proximal tubule line from porcine kidneys (LLC-PK1), associated with downregulation of the surface expression of the α1 subunit, the NKA active site [162,163]. Thus, glucose seems to be a negative regulator of its own uptake.
Regarding insulin, a short exposure (up to 30 minutes) raised NKA activity [160,161,164], whereas exposure for more than 24 hours reduced NKA activity in rat PT cultures [165]. In the same way, in a complex culture model, insulin exposure raised renal NKA activity in the first 30 minutes, but activity returned to baseline levels after 2 hours and was even lower at 48-hour measurements [166]. This reduction was likewise observed after one hour of insulin exposure in another study [167]. Taken together, these results suggest a dual temporal insulin action on NKA activity. In the second, low-activity NKA phase, insulin could limit SGLT function by reducing the sodium gradient across the BBM. However, since glucose also impacts NKA activity, the described limiting insulin effect should be evaluated in the presence of variable and elevated glucose levels, as in diabetic states. Besides, it should be assessed considering a possible renal insulin resistance.
C-peptide is another reported NKA modulator of interest. It increased NKA activity in cultures of human tubular cells from nondiabetic patients [143] and increased NKA alpha subunit mRNA in the renal cortex from STZ rats [168].
Insulin Regulation of Renal Gluconeogenesis.
Another important insulin action on PTs is gluconeogenesis inhibition. Liver and renal cortical cells, primarily the PTs [3], are classical tissues that have the enzymatic apparatus necessary to significantly release glucose into the circulation. Hence, PTs contribute to the total endogenous glucose production in fasting and even in postprandial states [47]. Renal glucose release under normal conditions is about 20 to 25% of total systemic glucose production in fasting and 60% in the postprandial state [169].
As kidneys are not able to store significant amounts of glycogen and glycogenolysis enzymes are lacking, renal glucose production is provided basically by gluconeogenesis, which generates 15-55 g of glucose per day, while the kidneys metabolize 25-35 g of glucose per day [47,170]. Insulin suppresses renal gluconeogenesis to a lesser extent than it does in the liver, probably because of the lower kidney sensitivity to this insulin effect. However, such a difference could be the result of lower insulin delivery to the renal tissue. Furthermore, glucagon has little to no effect on renal gluconeogenesis [170][171][172]; hence, catecholamines are the major counter-regulators of insulin-induced inhibition of gluconeogenesis in the kidneys [170,172].
Reabsorbed glucose from the tubular filtrate [4,178] and insulin [4] seem to have a complementary inhibitory effect on renal gluconeogenesis. In fact, the higher postprandial insulin levels reduce PT gluconeogenic enzyme transcription in wild-type mice [4] and rabbits [179], and gluconeogenic gene expression was reduced by the glucose counterregulatory effect in insulin-resistant and insulinopenic models [4]. In addition, SGLT1 plus SGLT2 inhibition by phlorizin restored gluconeogenic activity in these models [4], and isolated SGLT2 inhibition in normal mice activated renal gluconeogenic gene expression [178]. Therefore, the reduction of glucose flux across PT cells stimulates gluconeogenesis. Moreover, in HK2 cell cultures, insulin and glucose inhibit gluconeogenic enzymes through distinct pathways [4].
Accordingly, PT cells from human nephrectomies [176] and HK2 cell cultures [4] exposed to insulin undergo a reduction in gluconeogenesis. However, a high gluconeogenic enzyme content was reported in human renal biopsies from T2D patients [46], which could be interpreted as an impairment of insulin action on the kidneys, perhaps a kidney-specific insulin resistance. The intracellular glucose generated from high-intensity gluconeogenesis might impact glucose transport through modifications of SGLT2 transcription or its pool mobilization, as described for the extracellular glucose stimulus in PT cells of diabetes models. That could mean an additional indirect insulin regulation of glucose transport, in this case through gluconeogenesis.
Renal Insulin Resistance.
Despite the higher Tmax for glucose in diabetic patients compared to healthy subjects, it is not clear whether renal insulin resistance could impact glucose transport. Even the concept of renal insulin resistance is still debatable. Insulin resistance, in general, is characterized by an attenuation of insulin-triggered biologic processes, inducing metabolic impairment [123,180,181], and the insulin resistance phenotype is variable among organs and even among tissues from the same organ. For example, the liver has selective insulin resistance, and metabolic pathways diverge according to specific spatial zonation near or distal to the portal space [123]. The same may be possible in different renal segments according to the presence and density of IRecs and insulin availability, considering the hormone filtration, extraction, and degradation.
The variability of protein isoforms of the insulin signalling cascade (IRecs, IRS, PI3K, and AKT) [122] and of diabetes phenotypes, mainly in T2D [182,183], is partially due to genetic variations [184][185][186] and may be related to specific tissue resistance differences. In addition, insulin signalling determines several phenotypic characteristics regarding cell size and proliferation in PTs [187]. Therefore, another question is whether the insulin action on PT glucose transport is impaired in insulin resistance.
The two IRec isoforms differ in affinity to insulin binding and metabolic effects [188,189]. In humans, IRec type B, available mainly in insulin-sensitive organs (skeletal muscle, liver, and adipose tissue) [188,190], is abundant in kidneys too [190]. In rat models, insulin binding [50] and IRecs are present along the whole nephron with the highest levels at PTs, especially in the outer medullary S3 portion [46]. The distal convoluted tubule is another nephron segment where insulin binding is high [50] and where insulin stimulates sodium reabsorption [180,191,192]. At PTs, insulin stimulates sodium uptake also through Na + H + exchanger type 3 (NHE3) [180].
The differences in IRec density and insulin concentration along the nephron indicate site-specific and variable hormone action. Some findings in animal cell cultures demonstrated variations of nephron or PT IRec densities. In PTs of normal rats, IRecs are localized in the basolateral membrane, where they may sense insulin from capillaries, while IRecs on the apical membrane are involved in insulin reabsorption [44,46]. IRecs accumulate in the cytoplasm during fasting and in both membranes after refeeding, consequent to both insulin and glucose oscillations [46]. Insulin decreases its own receptors in murine PT cultured cells [165]. Reduced IRec protein expression in all nephron segments in either insulin-resistant [193] or insulinopenic rats [46] has been described; the latter had a stronger reduction in the renal cortex and distal tubules [46]. The increase of membrane IRecs after feeding was also lost in diabetes models [46]. In humans, IRec protein expression was also significantly reduced in renal biopsies from T2D patients, with a pronounced downregulation observed in PTs and a slight one in distal tubule cells [46], again suggesting reduced insulin action on PTs.
Impairment of another step of the insulin signalling cascade in PTs has been described. After the IRS phosphorylation triggered by insulin binding, the IRS tyrosine residues serve as anchoring sites for regulatory subunits of PI3K at the cytoplasmic side of the cell membrane [194]. The IRS1 and IRS2 isoforms, widely expressed in human tissues, have distinct physiological roles in vivo [33] and are frequently decreased in insulin-resistant states [124]. Hyperinsulinaemia induces IRS1 and IRS2 protein degradation [195] across different pathways [124], according to the target organ where the insulin resistance takes place. In PTs of insulin-resistant murine models, the stimulatory effect of insulin via IRS1 is impaired, in contrast to a preserved IRS2 insulin signalling [180]. IRS2 has a role in PT sodium transport not related to the SGLT system [121,196]. On the other hand, impaired IRS1 signalling may be associated with a lesser inhibition of renal gluconeogenesis [46,47,197]. While IRS1 expression and phosphorylation are normal [198] or reduced [199], IRS2 has normal levels in diabetes models [27,191]. IRS2 expression is preserved in the renal cortex of insulin-resistant patients [191] or even enhanced in tubules of patients with diabetic nephropathy [200]. These findings corroborate the renal insulin resistance hypothesis as well as a site-specific and selective resistance. It is reasonable that a PT insulin resistance, beyond being related to an impaired gluconeogenesis regulation, could impact renal glucose transport and thus hypothetically contribute to the higher Tmax found in diabetes.
Summary of Evidence and Discussion
The review objective was to describe and summarize the literature data about the insulin effect on renal glucose transport. We aimed to construct a sequence of evidence to facilitate the reader's access to the current understanding of insulin action on renal proximal tubules, the nephron site responsible for glucose uptake from the glomerular filtrate and where renal gluconeogenesis takes place. In the following paragraphs, the main findings are summarized.
Kidneys, mainly PTs, play a significant role in insulin metabolism. Insulin upregulates its own PT uptake and degradation [41], thus changing insulin availability in the whole body and specific renal sites [54,55].
Regarding glucose transporters in diabetes, T1D models showed increased GLUT1 protein availability and mRNA expression in the whole kidney and higher cortical GLUT1 mRNA expression. These changes can be transitory and site-specific. Results concerning GLUT2 are controversial. SGLT1 studies agreed only in the upregulation of its mRNA expression in T2D models while protein and mRNA SGLT2 contents in both T1D and T2D models are frequently reported as increased (Table 1). Elevated SGLT2 levels could explain the higher glucose uptake capacity of diabetic patients. Human studies, however, are scarce and contradictory with few studies demonstrating raised SGLT2 protein availability in diabetic patients.
Insulin alone [21,121] or with glucose [24,25] can modulate the availability and/or function of PT glucose transporters, beyond changing renal gluconeogenesis [4,178]. The insulin effect in murine PT cell cultures seems to increase GLUT1 content and trafficking [49,126]. Insulin resistance, on the other hand, is associated with increased GLUT2 in animal models [25], while insulin replacement reduces this transporter's availability [24]. However, glucose level variations may have confounded the results in these models. While glucose has promoted SGLT1 trafficking [23], insulin seems to directly inhibit SGLT1 activity in human renal cell cultures [21] but could activate it indirectly [131]. Furthermore, glucose seems to amplify membrane SGLT2 protein availability in these cultures [22]. It was reported that insulin raises SGLT2 protein availability and activity independently of glucose and additionally regulates SGLT2i bioavailability [140][141][142]. Differences in IRec density along the nephron [46] and in the type of IRS expressed in diverse tubule segments, or in the same segment but under distinct insulin sensitivity [27,191,199-201,203], point to a renal site-specific selective insulin action and, possibly, to a spatially selective insulin resistance.
NKA activity might impact SGLTs by providing the driving force for their activity [146]. In murine models of diabetes, changes in NKA function are probably due to high glycaemic levels [155-157,159] and impaired insulin signalling [158]. Nevertheless, the heterogeneity of the results does not allow a clear definition of the insulin effect on NKA. In complex models of animal PT cultures, NKA activity increased after short exposure to insulin but decreased under sustained stimulus [160,161,164-167]. In human tubular cell cultures, glucose inhibited while C-peptide stimulated NKA activity [143,168].
All of the above findings are summarized in Figures 2(a) and 2(b).
Therefore, the elevated Tmax of diabetic patients [16,18,19,48,118], although not yet completely understood, is possibly associated with an upregulation of glucose transporter proteins and may be related to insulin in many ways. Human studies with reproducible and comparable methodology are needed to understand the real impact of insulin on glucose transport in healthy and diabetic subjects, independently of the glucose influence.
Our review has limitations. It is circumscribed to publications from the last 10 years. The literature search using specific terms and the limitation to publications in English may have missed some papers related to our aim. Other difficulties are related to the issue itself. In fact, most studies did not have the insulin action on glucose transport as their first objective. Results are not always comparable, taking into account differences among species [102,189,205] and study models. In human studies, one limitation is the inclusion of subjects with other kidney diseases as the control group rather than just healthy ones. Moreover, insulin and glucose effects were frequently not evaluated separately. Cell culture models are able to isolate these effects, although they do not consider the microenvironment of the whole organ, possibly influencing transcriptional regulators of genes involved in glucose utilization [49,206], and do not consider the hormonal [49,[207][208][209] and neural [22,209] crosstalk among organs. It is still important to take into account that mRNA or protein measurements do not necessarily reflect dynamic function. At the same protein content, function can be enhanced or diminished by modification of serum lipids and fluidity in the cytoplasmic membrane [144], by transporter conformational changes [5], or by subcellular spatial arrangement [67,210]. Furthermore, protein interactions on the cytoplasmic side of the membrane, as described for SGLT2 and its anchoring protein [211], can be related to variation in glucose transporter function without any change in protein content [210,212].
In conclusion, the upregulation of renal glucose transporters, mainly SGLT2, associated with sustained hyperglycaemia or with disrupted renal insulin signalling, can be related to the increased maximum renal glucose reabsorptive capacity observed in diabetes. The several effects of insulin on distinct kidney sites can modify glucose transport directly, through changes in glucose transporter availability and function, or indirectly, through modulation of Na + K + ATPase activity. Thus, there is evidence of an insulin effect not only on renal gluconeogenesis but also on renal glucose transport. However, the scarcity and heterogeneity of the studies to date limit an accurate proposal of the mechanisms involved.
Conflicts of Interest
The authors declare no conflict of interest regarding the publication of this paper.
Authors' Contributions
RP-M and EM designed the research. RP-M performed the article search. Both authors selected the articles. RP-M performed the data extraction and EM reviewed it. Both authors wrote, revised, and discussed the manuscript. | 2020-01-16T09:03:46.883Z | 2020-01-10T00:00:00.000 | {
"year": 2020,
"sha1": "01da21af5ebbd8cb7eed46b6373973b90a52a7f0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/8492467",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "139afee98a1254c7e8d78bb0a79e92cee40014e1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55530432 | pes2o/s2orc | v3-fos-license | Translational pharmacology: role and its impact
Translational Pharmacology is a newly evolved branch, developed as an extension of clinical pharmacology. Translational Pharmacology aims to move the results of molecular pharmacological research to the patient level, focusing on developing new drugs that correspond to patient needs. The basic objective is to study the changing trends from experimental to clinical pharmacology. It also helps to gather data from preclinical studies, so as to achieve accurate and effective dosing in the critical clinical trials. Thus, we can conclude that Translational Pharmacology tries to bridge the gap between basic molecular research studies in pharmacology and clinical trials. This also reduces the time and economic burden of research. Thus, it helps in translating knowledge from basic animal studies to bedside patient studies.
INTRODUCTION
In the present day, with advancements in science and technology, research has greatly progressed. Despite this progress, there have been delays in introducing new drugs to the market and difficulties in implementing them successfully in clinical use. Hence, an integration of basic science, its use in clinical practice, and its application in the community at large scale is needed. This is possible with the evolution of translational pharmacology as an extension of clinical pharmacology.
Clinical pharmacology: translational pharmacology
Clinical pharmacology was founded by Harry Gold in the 1950s and involved the scientific study of drugs in man: their rational use, safety, efficacy, cost, benefit, availability, and personalized medicines. It contributes to the development of new drugs, clinical research, clinical trials for new drug regimens, the rational use of medicines, pharmacogenetics, pharmacoeconomics, and pharmacovigilance.
Academic institutions, pharmaceutical companies, and institutions such as the Council of Scientific and Industrial Research (CSIR), the World Health Organization (WHO), and the Indian Council of Medical Research (ICMR) all contribute to the various branches of clinical pharmacology. 1 The basic biological research work undertaken at various institutions and government organizations, such as the National Institute of Virology, the National Institute of Immunology, etc., may be found to be inadequate for product development. With all the efforts taken by the government, there exists a gap between the need, development, and availability of medicines. As suggested by the pioneer of clinical pharmacology from the UK, Dr CT Dollery, the development of experimental and translational medicine contributing to personalized medicine serves as a support to the pharmacotherapy education of medical students and practitioners. 2 It would further contribute to the growth of clinical pharmacology, and such growth would enable the development of expertise and training in producing safe, effective, and economical products for healthcare management in the country, particularly in rural and developing areas.
Advanced information technology and nanotechnology are currently considered newer technologies for focused research and targeted therapy, offering solutions in many areas of healthcare management. Thus, clinical research and translational medicine units should be set up to carry out proof-of-concept studies, which can then be confirmed through translational studies in the form of medical college networks, hospital studies, specialized database studies, and disease-based studies.
Translational pharmacology: pharmacoeconomics
Globally, the estimates on drug development have shown an unsustainable economic status of pandemic proportions. The exponential inflation of the expenses incurred is exemplified in pharmacotherapy, which is expressed as the cost-effective component in treating a disease. A treatment that cost a few hundred dollars two decades ago has been replaced with new drugs that cost in excess of hundreds of thousands of dollars, adding to the economic burden of treating the same disease conditions. 3,4 Far beyond the established pharmacotherapy of various medical conditions, many diagnostic and therapeutic products of uncertain impact are offered to patients, products that are known to coincide with several factors such as the ageing process, unestablished clinico-pathological correlation, etc. In such instances, translational medicine would be dynamic in meeting optimum healthcare requirements, ranging from diagnosis and therapeutic strategies to drug discovery and development.
However, there is a potential for translational medicine to add economic stress to the clinical management of conditions within an already overburdened healthcare system, which makes it difficult to determine whether there is adequate evidence to adopt all such changes into clinical practice.
It integrates disease pathophysiology with disease management based on targeted diagnostics and therapeutics. Thus, it is pivotal in linking the laboratory to the patient and the population. 5 Through this integration, it has been possible to apply translational molecular insights and biological innovations to clinical practice and case management. Further, it has provided tools to define underlying disease initiation, progression, diagnosis, and therapeutic management. Thus, translational medicine is pivotal in quantifying molecular diagnostic aspects, their clinical outcomes, therapeutic response, and idiosyncratic behaviour.
Translational pharmacology: transformation of health care
Integrated personalized therapy is tailored based on genetic profiles, diagnosis, and therapeutic specificities, thus benefitting the community to the maximum with minimum adverse effects. Personalization has thus become a part of daily practice. [6][7][8][9] With the application of translational medicine, it has been possible to move disease management from palliation to cure. Such transformation, integrating basic science with research and pharmacotherapy of the clinical condition, has reached beyond the individual patient to the global population. 6,7 It anchors its principles in evidence-based case management and adds value to the treatment of the individual patient, the population, society, and the healthcare system. Genotyping and phenotyping would enable identification of individuals at greater risk for adverse drug reactions, allowing a focus on safety measures in the therapy of a given clinical condition. For academia, this transformation encourages researchers to test novel ideas generated from basic investigation, with the hope of clinical application. For physicians and clinical practitioners, it facilitates capturing the benefits of research and understanding what is known and what is practiced. For the commercial pharmaceutical industry, it enables assessment of new entities at earlier phases, identifying problems early so as to solve them at an early stage and reduce the cost of product development.
With the evolution of science and technology, translational pharmacology has evolved as a new branch to meet today's healthcare needs and is considered an extension of clinical pharmacology. 10 Translational pharmacology, translational research, and translational medicine are interchangeable terminologies, where the word translational focuses on the development of a new drug that deals with patient needs and targets specific issues. 11 It has varied roles and applications in the present era with respect to drug development, as it tries to bridge the gap between basic science, i.e., molecular pharmacology, and the patient requirement to heal the relevant clinical problems. It covers the areas of molecular research, animal experimentation, and their reasoned application in patients to treat the clinical condition. It is a rapidly growing discipline that aims to expedite newer diagnostic and therapeutic measures, being highly collaborative, with research work ranging from the laboratory to patients.
Such translation would focus on ensuring evidence-based, proven strategies for the management and prevention of clinical conditions that can actually be implemented within the community. Hence, translational pharmacology is a bridge between the bench and the bedside. As defined by the European Society for Translational Medicine, the term translational research refers to an interdisciplinary branch of the biomedical field supported by three main pillars: benchside (laboratory research), bedside (clinical practice), and the community (population needs). 14 Such integration and interdisciplinary research are useful to qualify the efficacy of a newly introduced drug molecule and, along with this, to understand the limitations of its use.
Translational Pharmacology encompasses three principal components: laboratory research, clinical practice, and the population needs of the community. These are often described as a two-stage process, from laboratory to clinic and from clinic to community, with the main objectives of identifying relevant problems and designing drugs that address specific therapeutic issues directly. Hence, translational pharmacology can appropriately be used to describe the successful integration of scientific discoveries with their clinical application in treating various diseases of human beings. 15 The objectives of translational pharmacology are to discover the origin, pathway, and mechanism of diseases, including the responsible biomarkers, to discover and develop new diagnostic and therapeutic measures, and to discover newer drugs in a shorter duration of time. Translational pharmacology includes pharmacokinetic and pharmacodynamic modelling in translational research, which provides a better understanding of drug efficacy and safety. Mechanism-based pharmacokinetic and pharmacodynamic models describe drug-specific and system-specific properties, which include the route of drug administration, drug exposure, plasma protein binding, unbound concentration, species-dependent variation in drug metabolism, active metabolites of the drug, dosing schedule, and the relationship between the drug concentration used and its response. The drawback of basic science research in experimental pharmacology lies in the reproducibility of findings, the success rate of which is reported as low. 16 This problem could be attributed to poorly specified methods, variation in behavioural methods, poor post hoc analysis, or reliance on marginally significant statistical findings. Thus, quantitative pharmacology studies will help to analyze the concentration-response and response-time relationships, with special emphasis on the impact of drugs on the target disease. Translational pharmacology thus involves 5 major steps: 17
• Basic research, with drug discovery, design of the molecule, and study of the physiochemical properties of the designed molecule in relation to its biology.
• Preclinical development, which includes the study of pharmacodynamics, pharmacokinetics, toxicology, and safety concerns, all performed on experimental animals.
• Establishing the relation between the observations made in the preclinical studies and the early clinical trials, which involve healthy volunteers.
• Evaluation of the data gathered from the early clinical trials in terms of dose, safety, and efficacy, so as to formulate guidelines for their use in clinical practice.
• Lastly, bringing the results of basic research to the clinic, minimizing adverse effects and benefitting the treatment of the targeted cause.
In translational research, the benchside study includes in-silico, in-vitro, and in-vivo studies. In-silico studies have the potential to speed the rate of discovery of a new molecule, which reduces the need for expensive laboratory work and clinical trials. For example, in 2010, using the protein docking algorithm EADock, potential inhibitors of an enzyme associated with cancer activity were found by in-silico studies, 50% of which were shown to be active inhibitors through in-vitro studies. 18 Similarly, in 2007, an in-silico model of tuberculosis aided faster drug discovery than in real time (in minutes rather than months), achieved by a computer model of cellular behaviour. 19 Thus, virtual screening, cell models, and digital genetic sequences may all be used for in-silico studies. The in-silico approach includes developing and validating complex mathematical models that are capable of reflecting human diseases and the response to therapeutic intervention. The results obtained can then be refactored into integrated mathematical models, which may enhance their translational potential.
Hence, translational research is helpful for preparing a model based on pharmacokinetic and pharmacodynamic studies, so that the safety and efficacy of a newly introduced drug can be assessed prior to its use in clinical trials. 20,21 This can be achieved with the help of the mathematical tool called pharmacometrics, a branch of science that involves mathematical models of biology, pharmacology, and disease. It also helps to describe the interaction between xenobiotics and patients, which may include beneficial and harmful effects. 22,23 Considering the safety and efficacy of any newly introduced molecule, translational pharmacology further considers the relationship between the dose, systemic exposure, and its effects, seeking maximum benefit with minimum adverse effects. 21 Therefore, a careful study and a proper bridging from preclinical to clinical work help a proper translation of the research from the bench to the bedside, so as to obtain focussed benefit for the targeted clinical condition. Thus, eventually, translational pharmacology will be helpful in maximizing the chances of success of new drug development, supporting evaluation of the newly developed drug by balancing the expected benefit and the risk involved. It also helps in achieving effective post-marketing surveillance.
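As an illustration of the pharmacokinetic-pharmacodynamic reasoning described above, the following is a minimal, hypothetical sketch in Python (not taken from any of the cited studies): a one-compartment oral-absorption pharmacokinetic model linked to an Emax pharmacodynamic model. All names and parameter values (dose, bioavailability, rate constants, volume of distribution, EC50, Emax) are illustrative assumptions; a real pharmacometric analysis would estimate such parameters from preclinical and clinical data.

```python
# Minimal, illustrative pharmacometric sketch with hypothetical parameters:
# one-compartment oral-absorption PK model linked to an Emax PD model,
# showing how concentration-response and response-time relationships
# can be explored before clinical trials.
import math

# --- hypothetical drug- and system-specific parameters (assumptions) ---
DOSE_MG = 100.0      # oral dose (mg)
F = 0.8              # bioavailability (fraction absorbed)
KA_PER_H = 1.2       # first-order absorption rate constant (1/h)
KE_PER_H = 0.15      # first-order elimination rate constant (1/h)
V_LITRE = 40.0       # apparent volume of distribution (L)
EMAX = 100.0         # maximal pharmacodynamic effect (% of maximum)
EC50_MG_L = 1.0      # concentration giving half-maximal effect (mg/L)

def concentration(t_h: float) -> float:
    """Plasma concentration (mg/L) at time t for one-compartment oral dosing."""
    coef = (F * DOSE_MG * KA_PER_H) / (V_LITRE * (KA_PER_H - KE_PER_H))
    return coef * (math.exp(-KE_PER_H * t_h) - math.exp(-KA_PER_H * t_h))

def effect(conc_mg_l: float) -> float:
    """Emax model: effect as a saturating function of concentration."""
    return EMAX * conc_mg_l / (EC50_MG_L + conc_mg_l)

if __name__ == "__main__":
    # Tabulate the concentration-time and effect-time relationships.
    print(f"{'t (h)':>6} {'C (mg/L)':>10} {'effect (%)':>11}")
    for t in range(0, 25, 2):
        c = concentration(float(t))
        print(f"{t:>6} {c:>10.3f} {effect(c):>11.1f}")
```

Running the script prints a simple concentration-time and effect-time table; in practice such a model would be extended with inter-individual variability and fitted to observed data before informing dose selection for clinical trials.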
However, scientific problems, ethical issues, regulatory concerns, inadequate financial support, a shortage of investigators, inadequate samples, conflicts of interest, the right to privacy, fragmented and poor infrastructure, incompatible databases, and limited public support are the major limitations of translational pharmacology. The lack of incentives for translational research in academia contrasts with industry, where groups of researchers have large financial incentives for translational research, which results in far greater expenditure on translational research by industry than by academia.
CONCLUSION
Development of a new drug is an expensive and long process, and the fate of the developed molecule often remains unclear. Translational pharmacology, together with translational medicine, benefits drug development through focussed research and planning of new drug projects. The use of databases translates into reduced cost and time for drug development.
Thus, it helps to firmly combine the principles of basic and clinical pharmacology with modern research technologies, so as to provide a proper bridge to basic molecular pharmacological research and to give academic centres opportunities to commercialize their discoveries. | 2019-04-02T13:14:15.386Z | 2018-04-25T00:00:00.000 | {
"year": 2018,
"sha1": "c7253bd8d7ffa9211fed6302887d294742f814f0",
"oa_license": null,
"oa_url": "https://www.msjonline.org/index.php/ijrms/article/download/4566/3894",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8ccd235f7f3ee665203d38568ce9fb43a390c15f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237330179 | pes2o/s2orc | v3-fos-license | Report on the First African Swine Fever Case in Greece
African swine fever (ASF) poses a major threat to swine health and welfare worldwide. After several European countries had reported cases of ASF, Greece officially confirmed its first positive case on 5 February 2020. The owner of a backyard farm in Nikoklia, a village in Serres regional unit, Central Macedonia, reported loss of appetite, weakness, dyspnea, and the sudden death of 6 domestic pigs. Necropsy was performed on one gilt, and the findings were compatible with an acute to subacute septicemic disease: predominantly, hyperemic enlargement of the spleen and lymph node enlargement and/or hemorrhage were observed. The vague clinical signs described by the farmer suggested only a limited resemblance to acute ASF infection. However, the disease could not be ruled out once a septicemic condition, including splenomegaly, was diagnosed macroscopically at necropsy. In addition, considering the farm's location near ASF protection zones, a further diagnostic investigation followed. Confirmation of the disease was obtained using a series of diagnostic tests on several tissue samples. Further clinical, molecular, and epidemiologic evaluation of the farm was performed. According to the contingency plan, the authorities euthanized all 31 pigs on the farm, whilst blood testing revealed ASF virus infection. Further emergency measures were implemented to contain the spread of the disease.
Introduction
African swine fever virus (ASFV) is a large, double-stranded DNA virus in the Asfarviridae family, genus Asfivirus. ASFV is regarded as the only DNA virus that can be classified as an ARBO (arthropod borne) virus [1]. It infects only members of the Suidae family of all age groups and is a harmless companion of the warthog (Phacochoerus aethiopicus) and the bushpig (Potamochoerus porcus) [2,3]. The virus can be transmitted via direct contact with infected animals or indirectly through consumption of infected pork products, through infected soft ticks (Ornithodoros spp.) bites, or after contact with fomites contaminated with virus-containing materials/biological fluids such as blood, feces, urine, or saliva [4]. Carcasses of infected wild boars maintain the live virus for a long time, especially during winter, allowing for indirect transmission when in contact with susceptible wild boars [5].
African swine fever (ASF) is a highly contagious viral hemorrhagic disease of domestic pigs and the wild boar population with a severe economic impact on pork production and meat supply [6,7]. It is an acute-to-chronic, febrile disease characterized by high fever, cutaneous hyperemia, abortions, edema, and hemorrhage in internal organs, particularly lymph nodes [8]. Up today, due to the lack of treatment or vaccine development, the early detection and implementation of strict preventive measures constitute the only way to eliminate the disease [9]. Genetic diversity and the very large and complex genome of the ASF virus increase the difficulty of vaccine development, however, efforts with live attenuated or subunit vaccines from various research groups, have started with respective pilot studies [10][11][12][13].
Clinical and gross features of ASF depend on the virulence of the virus isolate, the route and dose of infection, and the host's immunological status [14]. However, a recent experimental study in Poland by Walczak et al. 2020 suggested that the same virus isolate might cause various clinical forms of the disease [15]. ASFV strains are usually classified as highly, moderately, or low virulent [16]. Four clinical forms of ASF have been described to date. Peracute ASF usually occurs as a result of highly virulent strains. In that form, pigs die suddenly without any sign of the disease, or infected pigs develop loss of appetite, lethargy, and high fever and die 1-4 days post-infection [14,17]. Moderately or highly virulent isolates are responsible for the development of the most usual form of the disease, acute ASF, which includes vomiting, mucoid to bloody nasal discharges, melena, inactivity, and a tendency to crowding, as well as erythema or cyanosis of the skin, and abortion in sows [14,[18][19][20]. Affected farms may show a mortality rate of up to 100% seven days after the initial clinical signs develop. The subacute form of ASF occurs after infection by moderately virulent strains, and clinical features resemble those of acute ASF but are less severe [14]. Infection by moderate-to-low virulence strains leads to the chronic form of ASF [19,21]. However, this form has not been detected in countries where moderately and highly pathogenic ASFV strains have been present for a long time. It has probably been associated with low virulent strains of ASF employed in early vaccine trials carried out in the Iberian Peninsula in the 1960s [14].
At the post-mortem examination, the most remarkable finding of acute ASF is the hyperemic splenomegaly followed by multifocal hemorrhagic lymphadenitis observed mainly in renal and gastrohepatic lymph nodes [17,19]. Severe pulmonary edema is a characteristic finding which occurs in animals affected by highly pathogenic strains [22][23][24]. Additionally, petechial hemorrhages may be detected in the cortex and renal pelvis of the kidneys as well as in the epicardium, the endocardium, the pleura, and the mucosa of the urinary bladder [19,[25][26][27][28][29]. In the subacute form, hemorrhage and edema are more intense [20,25]. Features of chronic ASF are mainly associated with secondary bacterial infections. These include fibrinous pleuritis and/or pericarditis, pleural adhesions, necrotic or chronic pneumonia, fibrinous arthritis, and necrosis of the skin, tongue, and tonsils [18,30].
In Europe, the first outbreak of the disease occurred in the second half of the 20th century; in the first half of the 20th century, ASF cases were primarily restricted to Africa. The infection was eradicated via drastic control measures from all non-African countries, except the Italian island of Sardinia. In 2007, the ASF virus was introduced into the Caucasus and Eastern Europe, where it has become endemic [31,32]. Since 2018, several Asian countries have also reported ASF virus infections [33]. From June 2019 to January 2020, the neighboring country Bulgaria reported 225 and 49 ASF cases in wild and domestic pigs, respectively. Some of those cases were detected close to the Greek borders [34]. Therefore, Greek authorities had established protection and surveillance zones in two regional units next to the borders (Xanthi and Drama prefectures) since November 2019. The objective of this article is to describe the clinical, pathological, and epidemiological features of the first African swine fever case, which occurred on a backyard farm in Greece in 2020.
Postmortem Examination
On 3 February 2020, the carcass of an 8-month-old gilt was admitted to the Laboratory of Pathology, School of Veterinary Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki. The owner of the farm reported that the animal had died after 6 days of anorexia, weakness, and dyspnea and claimed that 6 domestic pigs had died in the past two weeks and 3 more were sick with similar clinical signs. The pigs resided in an olive grove in two adjacent but separate fenced holdings. Cases expanded gradually from the first to the second holding. According to the farmer, the pigs were not vaccinated and had received antibiotics as treatment for the above-mentioned clinical signs, which were attributed to a possible "respiratory disease". As for any animal presented for necropsy, biosecurity measures were implemented to prevent the spread of potential pathogens, and a post-mortem examination was carried out.
During necropsy, external examination of the carcass revealed no changes except peripheral lymph node enlargement ( Figure 1a). A few pinpoint hemorrhages involved the epiglottis while tonsils were slightly hyperemic ( Figure 1b). Body condition, orifices, and skin were normal. Visceral lymph node enlargement mainly due to hyperplasia and/or hemorrhage was a constant finding. On the right side of the thoracic cavity, adhesions between the parietal and the visceral pleura were observed. Pulmonary edema was so severe that white frothy fluid was noticed up to the larynx region, while the caudal mediastinal lymph nodes were dark red in color and moderately swollen ( Figure 1c). The pericardial cavity contained a small amount of serosanguineous fluid. Multiple petechial and ecchymotic hemorrhages were present on the left auricle and the subendocardial layer of the right ventricle (Figure 1d,e). In the abdominal cavity, abundant serosanguineous fluid with scattered fibrin fibers was observed (Figure 1f). Hemorrhages on the gallbladder wall and blood clots within the gallbladder lumen were observed (Figure 2a,b). The spleen was remarkably enlarged and hyperemic (Figure 2b,c). Furthermore, noticeable findings were the presence of linear serosal hemorrhages of the stomach as well as the intense gelatinous edema in the submucosa of its lesser curvature. Gastric lymph nodes were enlarged and dark red and black (Figure 2d). Moderately hyperplastic mesenteric lymph nodes were also noted. Finally, kidneys showed a few scattered petechiae throughout the cortex, in papillae and calyces (Figure 2e,f).
The lesions were compatible with those observed in septicemic and other diseases, such as erysipelas, salmonellosis, porcine reproductive and respiratory syndrome (PRRS), Aujeszky's disease, pasteurellosis, ASF, classical swine fever (CSF), etc. Since, no pathognomonic macroscopic features are described to establish or rule out ASF, further laboratory investigation was requested to identify the causative agent. Immediately, the laboratory contacted the veterinary authorities of Central Macedonia and the Farm Animal Clinic (Swine Medicine and Reproduction Unit). Tissue samples were properly collected from lymph nodes, tonsils, lungs, heart, spleen, kidney, and liver and submitted to the National reference laboratory in Athens, Greece.
Clinical Presentation in the Farm
The backyard farm consisted of two linked subunits. The first housed unit included one boar, four sows, 13 piglets, and 11 fattening pigs. The second open-air unit was an olive grove where animals were introduced for grazing purposes. The grove was surrounded by an electrical fence. At the time of on-site confirmation, another 2 fattening pigs were found in subunit 2, thus a total of 31 animals were present on the whole farm. Based on the case history, fattening animals were moved from the housed to the open-air unit approximately one month prior to the dispatch of the sick animal to the laboratory. Three animals died with vague symptoms (diarrhea, anorexia) during the last 18 days prior to the ASF case recognition. The last dead animal was transferred to the Laboratory of Pathology of the Aristotle University of Thessaloniki for investigation [35].
According to information provided by the farmer, he was the only person with access to both subunits, whilst the animals were fed corn from local producers and grazed in the olive grove (subunit 2). Moreover, the farmer reported that animals had not been purchased from external sources during the past two years, and a second, empty backyard farm was identified at a close distance. However, according to an on-site inspection by local veterinary authorities, kitchen/food leftovers were identified in the olive grove, whilst a possible approach of subunit 2 by wild boars could not be excluded, at least for the period prior to the placement of the electrical fence. The hypothesis of contact between the animals and foreign personnel from a nearby greenhouse, which could be the source of the food waste provided to animals in subunit 2, could not be excluded either [35].
Laboratory Investigation
Laboratory testing was performed by the National Reference Laboratory (NRL) for African swine fever (Dept. of Molecular Diagnostics, FMD, Virological, Rickettsial and Exotic Diseases), located in Ag. Paraskevi, Athens, according to the official testing guidelines and EURL recommendations. The samples were received by the NRL on the 4th of February and were immediately analyzed. Positive results were obtained on the evening of the same day, and the Chief Veterinary Officer was informed. On the 5th of February, the first positive case of ASF in Hellas was officially confirmed. In detail, testing for viral antigens was performed with a commercially available ELISA kit (Kit African Swine Fever Antigen; Ingenasa; Madrid; Spain). In addition, the viral genome was detected by a commercially available real-time PCR assay (ID Gene™ African Swine Fever Duplex; IDvet; Grabels; France). ELISA testing for the ASF antigen demonstrated 7 positive samples out of 30 samples tested, and PCR-based testing indicated 12 out of 13 fattening pigs as positive. In addition, anti-ASFV antibody detection was performed via a commercially available ELISA (ID Screen ® African Swine Fever Indirect; IDvet; Grabels; France), reporting 2 out of 31 samples as positive and one sample as inconclusive [35,36]. The methods used were according to those reported in the Terrestrial Manual of the OIE [36].
Epidemiologic Results/Assessment
On the same day, February 5th, veterinary and governmental authorities issued an alert about the ASF diagnosis. To avoid any potential contamination resulting from the farmer's entrance into the clinic premises, disinfection was applied to the facilities of the Clinic, and 7 pigs used for an experimental study in a stable at the Clinic were euthanized. Prior to euthanasia, blood samples were collected, and the results were negative for ASF. Simultaneously, a scientific committee was summoned by the authorities to manage the emerging issue. The committee visited the farm and, under its supervision, clinical evaluation was performed and blood samples were taken from the 31 animals of the farm for laboratory examination. Subsequently, immediate stamping out was performed: all pigs were euthanized and appropriately buried on site, whilst disinfection of the area was also performed. Additional control measures (creation of double surveillance zones, prohibition of commercial use and transport of pigs and products thereof from and to the area of infection, etc.), according to the National contingency plan, were put into action in the respective prefecture in order to prevent the disease from spreading to neighboring disease-free farms and, mainly, to wild boar populations.
Discussion
In this outbreak, the clinical signs did not strictly correspond to an expected acute ASF case [14,17]. The clinical signs could be attributed to several pathogens, so the differential diagnosis included a variety of possibilities. Based on the vague clinical picture, the gradual increase in mortality, and the proximity to a bordering region with active ASF cases, the possibility of an ASF case was among the probable causes in the differential diagnosis. Necropsy further revealed evidence suggesting that ASF was a very possible etiology.
Pathological findings, although nonspecific, appeared to share some similarities with those described in the acute and subacute forms, such as hyperemic splenomegaly and nodal hematomas [14,15,17]. Nevertheless, pulmonary edema has been previously demonstrated in the acute form [14], and it has also been described, along with ascites, in the subacute form of the disease [17]. A significant observation was the absence of skin erythema or cyanosis, the sparsity of renal petechiae, and the detection of hematoma-like lymphadenitis only in the gastric and caudal mediastinal lymph nodes. This observation, in combination with the animals testing positive for ASF but not displaying the clinical signs or lesions described in chronic disease, led us to suggest that the current disease might correspond to a less virulent strain [31,37]. In the absence of findings related to the chronic form of the disease and secondary infections, the adhesions detected in the right thoracic cavity were probably not attributable to ASF; a previous trauma incident was a more probable explanation.
There are a few swine diseases with high mortality, similar clinical signs, and gross lesions as in ASF, but the most remarkable similarities are observed in CSF from which ASF cannot be differentiated by clinical or postmortem examination. However, the ASF virus is unrelated to the classical swine fever virus (CSFV), and there is no cross-protection conferred by infections from CSFV and ASFV. Further differential diagnoses include erysipelas, salmonellosis, pasteurellosis, and other septicemic diseases [8].
Conclusions
Our report provides evidence of ASF cases with vague clinical symptoms and necropsy findings that could be attributed to ASF and other septicemic conditions. Based on the findings, in this case, it is strongly advised to carefully judge the clinical picture even in cases with mild symptoms, especially in areas neighboring high ASF-risk areas. The contamination source could not be fully evidenced in this case; thus, an alert is needed in cases of possible contact with wild boars in the backyard or housed units. Appropriate application of respective farm biosecurity measures should be the major priority in order to reduce the risk of infection. Finally, continuing efforts to develop an effective vaccine have shown promising results and should probably further contribute to the control of the disease [13]. | 2021-08-28T06:17:15.504Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "090f8dd48309158700264496a125d3407ec8b3e8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/vetsci8080163",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a47473dab4267511c446964ba86e8ce09b180c26",
"s2fieldsofstudy": [
"Medicine",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |